Category: Analysis

  • Why the traditional college major may be holding students back in a rapidly changing job market

    Source: The Conversation – USA (2) – By John Weigand, Professor Emeritus of Architecture and Interior Design, Miami University

    Rethinking the college major could help colleges better understand what employers and students need. Westend61/Getty Images

    Colleges and universities are struggling to stay afloat.

    The reasons are numerous: declining numbers of college-age students in much of the country, rising tuition at public institutions as state funding shrinks, and a growing skepticism about the value of a college degree.

    Pressure is mounting to cut costs by reducing the time it takes to earn a degree from four years to three.

    Students, parents and legislators increasingly prioritize return on investment and degrees that are more likely to lead to gainful employment. This has boosted enrollment in professional programs while reducing interest in traditional liberal arts and humanities majors, creating a supply-demand imbalance.

    The result has been increasing financial pressure and an unprecedented number of closures and mergers, to date mostly among smaller liberal arts colleges.

    To survive, institutions are scrambling to align curriculum with market demand. And they’re defaulting to the traditional college major to do so.

    The college major, developed and delivered by disciplinary experts within siloed departments, continues to be the primary benchmark for academic quality and institutional performance.

    This structure likely works well for professional majors governed by accreditation or licensure, or more tightly aligned with employment. But in today’s evolving landscape, reliance on the discipline-specific major may not always serve students or institutions well.

    As a professor emeritus and former college administrator and dean, I argue that the college major may no longer keep pace with the cross-disciplinary combinations of skills and career readiness that employers demand, or with the flexibility students need to best position themselves for the workplace.

    Students want flexibility

    A man wearing headphones checks his phone while working on a laptop.
    The college curriculum may be less flexible now than ever.
    MoMo Productions/Digital Vision via Getty Images

    I see students arrive on campus each year with different interests, passions and talents – eager to stitch them into meaningful lives and careers.

    A more flexible curriculum is linked to student success, and students now consult AI tools such as ChatGPT to figure out course combinations that best position them for their future. They want flexibility, choice and time to redirect their studies if needed.

    And yet, the moment students arrive on campus – even before they apply – they’re asked to declare a major from a list of predetermined and prescribed choices. The major, coupled with general education and other college requirements, creates an academic track that is anything but flexible.

    Not surprisingly, around 80% of college students switch their majors at least once, suggesting that more flexible degree requirements would allow students to explore and combine diverse areas of interest. And the number of careers, let alone jobs, that college graduates are expected to have will only increase as technological change becomes more disruptive.

    As institutions face mounting pressures to attract students and balance budgets, and the college major remains the principal metric for doing so, the curriculum may be less flexible now than ever.

    How schools are responding

    A student wearing a blue cap and gown stands on grass looking at a building.
    The college major emerged as a response to an evolving workforce that prioritized specialized knowledge.
    Fuse/Corbis via Getty Images

    In response to market pressures, colleges are adding new high-demand majors at a record pace. Between 2002 and 2022, the number of degree programs nationwide increased by nearly 23,000, or 40%, while enrollment grew only 8%. Some of these majors, such as cybersecurity, fashion business or entertainment design, arguably connect disciplines rather than stand out as distinct. Thus, these new majors siphon enrollment from lower-demand programs within the institution and compete with similar new majors at competitor schools.

    At the same time, traditional arts and humanities majors are adding professional courses to attract students and improve employability. Yet, this adds credit hours to the degree while often duplicating content already available in other departments.

    Importantly, while new programs are added, few are removed. The challenge lies in faculty tenure and governance, along with a traditional understanding that faculty set the curriculum as disciplinary experts. This makes it difficult to close or revise low-demand majors and shift resources to growth areas.

    The result is a proliferation of under-enrolled programs, canceled courses and stretched resources – leading to reduced program quality and declining faculty morale.

    Ironically, under the pressure of declining demand, there can be perverse incentives to grow the credit hours required in a major or in general education as a way of garnering more resources or adding courses aligned with faculty interests – all of which continues to expand the curriculum and strain available resources.

    Universities are also wrestling with the idea of liberal education and how to package the general education requirement.

    Although liberal education is increasingly under fire, employers and students still value it.

    Students’ career readiness skills – their ability to think critically and creatively, to collaborate effectively and to communicate well – remain strong predictors of future success in the workplace and in life.

    Reenvisioning the college major

    While keeping the requirement that students complete a major in order to earn a degree, colleges can also allow students to bundle smaller modules – such as variable-credit minors, certificates or course sequences – into a customizable, modular major.

    This lets students, guided by advisers, assemble a degree that fits their interests and goals while drawing from multiple disciplines. A few project-based courses can tie everything together and provide context.

    Such a model wouldn’t undermine existing majors where demand is strong. For others, where demand for the major is declining, a flexible structure would strengthen enrollment, preserve faculty expertise rather than eliminate it, attract a growing number of nontraditional students who bring to campus previously earned credentials, and address the financial bottom line by rightsizing curriculum in alignment with student demand.

    One critique of such a flexible major is that it lacks depth of study, but it is precisely the combination of curricular content that gives it depth. Another criticism is that it can’t be effectively marketed to an employer. But a customized major can be clearly named and explained to employers to highlight students’ unique skill sets.

    Further, as students increasingly try to fit cocurricular experiences – such as study abroad, internships, undergraduate research or organizational leadership – into their course of study, these can also be approved as modules in a flexible curriculum.

    It’s worth noting that while several schools offer interdisciplinary studies majors, these are often overprescribed or don’t grant students access to in-demand courses. For a flexible-degree model to succeed, course sections would need to be available and added or deleted in response to student demand.

    Several schools also now offer microcredentials – skill-based courses or course modules that increasingly include courses in the liberal arts. But these typically need to be completed in addition to requirements of the major.

    We take the college major for granted.

    Yet it’s worth noting that the major is a relatively recent invention.

    Before the 20th century, students followed a broad liberal arts curriculum designed to create well-rounded, globally minded citizens. The major emerged as a response to an evolving workforce that prioritized specialized knowledge. But times change – and so can the model.

    The Conversation

    John Weigand does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why the traditional college major may be holding students back in a rapidly changing job market – https://theconversation.com/why-the-traditional-college-major-may-be-holding-students-back-in-a-rapidly-changing-job-market-258383

  • How proposed changes to higher education accreditation could impact campus diversity efforts

    Source: The Conversation – USA (2) – By Jimmy Aguilar, PhD Candidate in Urban Education Policy, University of Southern California

    An executive order seeks to remove ‘discriminatory ideology’ in universities. Critics contend it politicizes the accreditation process. Abraham Gonzalez Fernandez via Getty Images

    President Donald Trump on April 23, 2025, signed an executive order that aims to change the higher education accreditation process. It asks accrediting agencies to root out “discriminatory ideology” and roll back diversity, equity and inclusion initiatives on college campuses.

    The Conversation asked Jimmy Aguilar, who studies higher education at the University of Southern California, to explain what accreditation is, why it matters and how the Trump order seeks to change it.

    What is accreditation and how does it work?

    Accreditation is a process that evaluates whether colleges and universities meet standards of academic rigor, institutional integrity and financial stability.

    In the United States, there were 88 accrediting agencies in the 2022-23 academic year.

    The agencies are formally recognized by the Department of Education and the Council for Higher Education Accreditation.

    Accreditation is not a one-time stamp of approval, but a continuous process.

    At its core, accreditation is a guarantor of quality in higher education.

    The process involves self-assessment and peer review visits.

    Colleges typically undergo a full review every five to 10 years, depending on the accrediting agency.

    Institutions must meet standards for curriculum, faculty, student services and outcomes, and provide documentation.

    Then, federally recognized accrediting agencies review the documentation.

    Teams, often composed of peer reviewers from other colleges, conduct campus visits and evaluations before accreditation is granted or renewed.

    Why do universities need to be accredited?

    Accreditation assures students, employers and the public that an institution meets basic academic standards.

    It also signals credibility and secures federal financial support.

    Without it, colleges cannot access key funding sources such as Pell Grants and federal student loans.

    The funding is essential for college budgets and students’ access to higher education.

    Accreditation is also required for professional licensure in fields such as teaching, nursing, medicine and law.

    It also helps ensure that students can transfer credits between institutions.

    What does Trump’s executive order do?

    President Donald Trump wearing a blue suit and red tie displays a signed executive order.
    President Donald Trump displays a signed executive order in the Oval Office at the White House on April 23, 2025, in Washington.
    Chip Somodevilla/Getty Images

    The executive order would reshape the college accreditation system, aligning it with the administration’s political priorities. Those priorities include the rollback of DEI initiatives.

    The order seeks to use federal oversight to weaken institutional DEI policies and priorities. It also promotes new standards aligned with the administration’s interpretation of “merit-based” education.

    The executive order also directs the Department of Education to penalize agencies that require colleges to implement DEI-related standards.

    The Trump administration claims that such standards amount to “unlawful discrimination.”

    Penalties may include increased oversight or loss of federal recognition. This would render the accreditation seal meaningless, according to the executive order.

    The order also proposes a broad overhaul of the accreditation process, including:

    • Promoting “intellectual diversity” in faculty hiring. The executive order argues that promoting a broader range of viewpoints among faculty will enhance academic freedom. Critics often interpret this language as an effort to increase conservative ideological representation.

    • Streamlining the process for institutions to switch accreditors. During Trump’s first term, his administration removed geographic restrictions, giving colleges more flexibility to choose. The new executive order goes further. It makes it easier for schools to leave agencies whose standards they disagree with.

    • Expanding recognition of new accrediting agencies to increase competition.

    • Linking accreditation more directly to student outcomes. This would shift focus to metrics such as graduation rates and earnings, rather than commitments to diversity or equity.

    View from front steps of US Supreme Court
    A 2023 Supreme Court ruling that outlawed affirmative action in university admissions has been a point of contention in the debate over diversity, equity and inclusion in higher education.
    Joe Daniel Price/Getty Images

    The executive order singles out accreditors for law schools, such as the American Bar Association, and for medical schools, such as the Liaison Committee on Medical Education.

    The order accuses them of enforcing DEI standards that conflict with a 2023 Supreme Court ruling that outlawed affirmative action in university admissions.

    However, the ruling was limited to race-conscious admissions. It did not directly address faculty hiring or accreditation standards.

    That raises questions about whether the order’s interpretation extends beyond the scope of the court’s decision.

    The ruling has nonetheless been a point of contention in the debate over diversity, equity and inclusion.

    The American Association of University Professors and the Lawyers’ Committee for Civil Rights Under Law have denounced the executive order.

    The groups argue that it threatens to politicize accreditation and suppress efforts to promote equity and inclusion.

    Nevertheless, the order represents a push by the federal government to influence higher education governance.

    The Conversation

    Jimmy Aguilar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How proposed changes to higher education accreditation could impact campus diversity efforts – https://theconversation.com/how-proposed-changes-to-higher-education-accreditation-could-impact-campus-diversity-efforts-255309

  • AI isn’t replacing student writing – but it is reshaping it

    Source: The Conversation – USA (2) – By Jeanne Beatrix Law, Professor of English, Kennesaw State University

    Studies have shown that many students are using AI to brainstorm, learn new information and revise their work. krisanapong detraphiphat/Moment via Getty Images

    I’m a writing professor who sees artificial intelligence as more of an opportunity for students than a threat.

    That sets me apart from some of my colleagues, who fear that AI is accelerating a glut of superficial content, impeding critical thinking and hindering creative expression. They worry that students are simply using it out of sheer laziness or, worse, to cheat.

    Perhaps that’s why so many students are afraid to admit that they use ChatGPT.

    In The New Yorker magazine, historian D. Graham Burnett recounts asking his undergraduate and graduate students at Princeton whether they’d ever used ChatGPT. No one raised their hand.

    “It’s not that they’re dishonest,” he writes. “It’s that they’re paralyzed.”

    Students seem to have internalized the belief that using AI for their coursework is somehow wrong. Yet, whether my colleagues like it or not, most college students are using it.

    A February 2025 report from the Higher Education Policy Institute in the U.K. found that 92% of university students are using AI in some form. As early as August 2023 – a mere nine months after ChatGPT’s public release – more than half of first-year students at Kennesaw State University, the public research institution where I teach, reported that they believed that AI is the future of writing.

    It’s clear that students aren’t going to magically stop using AI. So I think it’s important to point out some ways in which AI can actually be a useful tool that enhances, rather than hampers, the writing process.

    Helping with the busywork

    A February 2025 OpenAI report on ChatGPT use among college-aged users found that more than one-quarter of their ChatGPT conversations were education-related.

    The report also revealed that the top five uses for students were writing-centered: starting papers and projects (49%); summarizing long texts (48%); brainstorming creative projects (45%); exploring new topics (44%); and revising writing (44%).

    These figures challenge the assumption that students use AI merely to cheat or write entire papers.

    Instead, they suggest that students are leveraging AI to free up more time to engage in deeper processes and metacognitive behaviors – deliberately organizing ideas, honing arguments and refining style.

    If AI allows students to automate routine cognitive tasks – like information retrieval or ensuring that verb tenses are consistent – it doesn’t mean they’re thinking less. It means their thinking is changing.

    Of course, students can misuse AI if they use the technology passively, reflexively accepting its outputs and ideas. And overreliance on ChatGPT can erode a student’s unique voice or style.

    However, as long as students learn how to use AI intentionally, this shift can be seen as an opportunity, rather than a loss.

    Clarifying the creative vision

    It has also become clear that AI, when used responsibly, can augment human creativity.

    For example, science comedy writer Sarah Rose Siskind recently gave a talk to Harvard students about her creative process. She spoke about how she uses ChatGPT to brainstorm joke setups and explore various comedic scenarios, which allows her to focus on crafting punchlines and refining her comedic timing.

    Note how Siskind used AI in ways that didn’t supplant the human touch. Instead of replacing her creativity, AI amplified it by providing structured and consistent feedback, giving her more time to polish her jokes.

    Another example is the Rhetorical Prompting Method, which I developed alongside fellow Kennesaw State University researchers. Designed for university students and adult learners, it’s a framework for conversing with an AI chatbot, one that emphasizes the importance of agency in guiding AI outputs.

    When writers use precise language to prompt, critical thinking to reflect, and intentional revision to sculpt inputs and outputs, they direct AI to help them generate content that aligns with their vision.

    There’s still a process

    The Rhetorical Prompting Method mirrors best practices in process writing, which encourages writers to revisit, refine and revise their drafts.

    When using ChatGPT, though, it’s all about thoughtfully revisiting and revising prompts and outputs.

    For instance, say a student wants to create a compelling PSA for social media to encourage campus composting. She considers her audience. She prompts ChatGPT to draft a short, upbeat message in under 50 words that’s geared to college students.

    Reading the first output, she notices it lacks urgency. So she revises the prompt to emphasize immediate impact. She also adds some additional specifics that are important to her message, such as the location of an information session. The final PSA reads:

    “Every scrap counts! Join campus composting today at the Commons. Your leftovers aren’t trash – they’re tomorrow’s gardens. Help our university bloom brighter, one compost bin at a time.”

    The Rhetorical Prompting Method isn’t groundbreaking; it riffs on a process that’s been tested in the writing studies discipline for decades. But I’ve found that it works by showing writers how to prompt intentionally.

    I know this because we asked users about their experiences. In an ongoing study, my colleagues and I polled 133 people who used the Rhetorical Prompting Method for their academic and professional writing:

    • 92% reported that it helped them evaluate writing choices before and during their process.

    • 75% said that they were able to maintain their authentic voice while using AI assistance.

    • 89% responded that it helped them think critically about their writing.

    The data suggests that learners take their writing seriously. Their responses reveal that they are thinking carefully about their writing styles and strategies. While this data is preliminary, we continue to gather responses in different courses, disciplines and learning environments.

    All of this is to say that, while there are divergent points of view over when and where it’s appropriate to use AI, students are certainly using it. And being provided with a framework can help them think more deeply about their writing.

    AI, then, is not just a tool that’s useful for trivial tasks. It can be an asset for creativity. If today’s students – who are actively using AI to write, revise and explore ideas – see AI as a writing partner, I think it’s a good idea for professors to start thinking about helping them learn the best ways to work with it.

    The Conversation

    Jeanne Beatrix Law does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. AI isn’t replacing student writing – but it is reshaping it – https://theconversation.com/ai-isnt-replacing-student-writing-but-it-is-reshaping-it-254878

  • South Africa’s 36.1% electricity price hike for 2025: why the power utility Eskom’s request is unrealistic

    Source: The Conversation – Africa – By Steven Matome Mathetsa, Senior Lecturer at the African Energy Leadership Centre, Wits Business School, University of the Witwatersrand

    South Africa’s state-owned electricity company, Eskom, has applied to the National Energy Regulator of South Africa to approve a 36.1% electricity price hike from April 2025, an 11.8% increase in 2026 and a 9.1% increase in 2027. Steven Mathetsa teaches and researches sustainable energy systems at the University of the Witwatersrand’s African Energy Leadership Centre. He explains some of the problems with the planned tariff increase.

    Why such a big hike?

    Eskom says the multi-year price increase is needed to move closer to a cost-reflective tariff – one that reflects the actual cost of supplying electricity.

    However, Eskom’s electricity tariff increases have been exorbitant for several years – an 18% increase in 2023 and a 13% increase in 2024. These increases are far above inflation, which currently stands at 4.4%.

    Some companies have installed their own generation capacity, and individuals have moved to rooftop solar systems. As a result, electricity sales have fallen by about 2%, resulting in a drop in revenue.

    There’s a knock-on effect for municipalities, the biggest distributors of electricity, which have also been forced to hike tariffs in line with Eskom’s increases.

    All these costs are passed on to consumers.

    What will the impact be on South Africans?

    If the hike is approved, it will certainly worsen the economic difficulties facing South Africa. One of the most unequal countries in the world, South Africa has an extremely high unemployment rate – 33.5% at the last count.

    Economic growth is also very slow, at a mere 0.6% in 2023. The cost of living is high.

    Exorbitant increases in electricity costs aggravate these problems.

    South Africans and businesses in the country have little choice about where they source their energy. Eskom still supplies nearly all of the country’s electricity. This means that ordinary citizens are likely to continue relying on electricity supplied by Eskom, irrespective of the cost.

    The high costs affect businesses negatively. Large industrial and small, medium, and micro enterprises have all highlighted that costs associated with utilities, mainly electricity, are affecting their sustainability.

    Read more:
    Competition in South Africa’s electricity market: new law paves the way, but it won’t be a smooth ride

    Implementation of the Electricity Regulation Amendment Act will bring major changes to Eskom. The reforms establish an independent Transmission Systems Operator tasked with connecting renewable energy providers to the grid. This will allow the creation of a competitive market in which renewable energy providers can sell power to the grid.

    But it’s not yet clear if these changes will address the issue of exorbitant electricity price rises.

    What are the problems?

    The country’s energy frameworks are drafted on the basis of the World Energy Trilemma Index. The index promotes a balanced approach between energy security, affordability, and sustainability. In other words, countries must be able to provide environmentally friendly and reliable electricity that their residents can afford.

    South Africa is currently unable to meet these goals because its energy policies do not align, because of a lack of investment in electricity infrastructure, and because of its dependence on coal-fired power. Electricity is increasingly becoming unaffordable in the country. Although there’s been a recent reprieve from power cuts, security of supply is still uncertain.

    Read more:
    South Africa’s new energy plan needs a mix of nuclear, gas, renewables and coal – expert

    Furthermore, over 78% of the country’s electricity is produced by burning coal. This means South Africa is also far from attaining its 2015 Paris Agreement greenhouse gas reduction goals.

    Compounding this problem is that Eskom is financially unstable – it needed R78 billion from the government in debt relief in 2024. For years, there was a lack of effective maintenance on the aging infrastructure.

    The country has made some inroads into improving security of supply. To date, recent interventions have resulted in over 200 days without power cuts. This should be commended. The same focus must be placed on ensuring that electricity remains affordable while giving attention to meeting the goals of the Paris Agreement.

    What needs to change?

    South Africa’s 1998 Energy Policy White Paper and the new Electricity Regulation Amendment Act promote access to affordable electricity. However, they’ve been implemented very slowly. Affordable electricity needs to be taken seriously.

    The question is whether the country’s electricity tariff methodology is flexible enough to accommodate poor South Africans, especially during these challenging economic times.

    In my view, it is not. Under the current methodology, vulnerable communities continue to foot the bill for the various challenges confronting Eskom, including financial mismanagement, operational inefficiencies, municipal non-payment and corruption.

    I believe the following steps should be taken.

    Firstly, South Africa should revise its tariff application methodologies so that consumers, especially unemployed and impoverished people, are protected against exorbitant increases.

    Secondly, the National Energy Regulator of South Africa should strengthen its regulations to ensure its compliance and enforcement systems are effective. For example, Eskom should be held accountable when it does not deliver efficient services or mismanages funds, and be transparent about costs associated with its processes. Municipalities should also be held accountable for non-payment and other technical issues they regularly struggle with. Both affect the revenue of the power utility.

    Read more:
    South Africa’s economic growth affected by mismatch of electricity supply and demand

    Thirdly, the government must make sure that price increases are affordable and don’t hurt the broader economy. It can do this by adjusting its policies to make sure that increases in electricity tariffs are in line with the rate of inflation.

    Fourthly, communities can play a vital role in saving electricity at a household level. This will reduce the country’s overall energy consumption. Furthermore, both small and large businesses should continue to consider alternative energy technologies while implementing energy saving technologies.

    Lastly, the level of free basic electricity is not sufficient for poor households. Subsidy policies should also be reviewed so that users retain access to affordable electricity when their financial situations deteriorate.

    The Conversation

    Steven Matome Mathetsa does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. South Africa’s 36.1% electricity price hike for 2025: why the power utility Eskom’s request is unrealistic – https://theconversation.com/south-africas-36-1-electricity-price-hike-for-2025-why-the-power-utility-eskoms-request-is-unrealistic-240941

  • Post-flood recovery: lessons from Germany and Nigeria on how to help people cope with loss and build resilience

    Source: The Conversation – Africa – By Olasunkanmi Habeeb Okunola, Senior Research Associate, United Nations University – Institute for Environment and Human Security (UNU-EHS), United Nations University

    Extreme climate events — floods, droughts and heatwaves — are not just becoming more frequent; they are also more severe.

    It’s important to understand how communities can recover from these events in ways that also build resilience to future events.

    In a recent study, we analysed how communities affected by the extreme flood events of 2021 in Germany’s Ahr Valley and in Lagos, Nigeria, grappled with recovery from floods.

    Our aim was to identify the factors – and combinations of factors – that served as barriers (or enablers) to recovery from disasters.

    We found that financial limitations, political interests and administrative hurdles led to prioritising immediate relief and reconstruction over long-term sustainable recovery.

    In both cases immediate and long-term recovery efforts were siloed, underfunded and focused on reconstruction to pre-disaster conditions.

    We concluded from our findings that the success of recovery efforts lies in balancing short-term relief and a long-term vision. While immediate aid is essential after a disaster, true resilience hinges on proactive measures that address systemic challenges and empower communities to build a better future.

    Recovery should not be merely action-oriented, focused on rebuilding infrastructure (engineering). It should also draw on insights from other fields, like governance and psychology, to help people deal with losses and heal.

    What worked

    To understand the recovery pathways of the two regions, we reviewed relevant literature, newspaper articles and government documents. We also interviewed government agencies, NGO representatives, volunteers and local residents in the communities where these floods occurred.

    We found that in the Ahr Valley, recovery wasn’t just about rebuilding structures, it was about empowering individuals.

    Through initiatives like mental health and first aid courses, residents learned to support one another. This fostered a sense of community and resilience that was essential for meeting the emotional challenges posed by the disaster.

    The focus on rebuilding with a sustainable vision also included environmental initiatives. For example, a type of heating system was put in place that didn’t rely on fossil fuels.

    Not only did this reduce carbon emissions, it also served as a symbol of hope. It showed there was an opportunity to create a more sustainable and environmentally friendly community.

    In Lagos, too, residents found strength in community and innovation. Grassroots efforts using sustainable materials like bamboo and palm wood highlighted the ingenuity and resourcefulness of the people. Faith-based organisations provided material aid as well as emotional and spiritual support. This reinforced the bonds that held the community together.

    Each community faced unique challenges. But they shared a common thread: the importance of adaptive governance – flexible decision-making and strong community ties.

    For example, established building codes in the Ahr Valley provided a framework for reconstruction, ensuring that new structures were resilient and safe.

    In Lagos, the absence of strong government support highlighted the critical role of community organisations in providing services and fostering a sense of shared responsibility.

    What needs improvement

    In both the Ahr Valley and Lagos, the journey towards recovery has been fraught with obstacles as well.

    In the Ahr Valley, bureaucratic red tape has become a formidable barrier. Residents, eager to rebuild their lives, find themselves entangled in a complex web of regulations and lengthy approval processes. This has delayed their access to insurance and recovery funds. Waiting for months or even years has eroded hope and fuelled a sense of abandonment.

    Meanwhile, in Lagos, insufficient government support has left communities to fend for themselves, creating a breeding ground for uncertainty and conflict.

    Land tenure disputes, fuelled by a lack of clear property rights, sow seeds of distrust and hinder resettlement efforts. Political disagreements complicate the picture, as competing interests divert attention and resources away from those who need them most.

    In Lagos, none of the respondents reported having insurance to help them to recover from disaster-related losses.

    While some residents in the Ahr Valley did have insurance, many were under-insured.

    The Ahr Valley’s building codes offer a framework for reconstruction. But it’s clear that processes should be streamlined so communities can take ownership of their recovery.

    In Lagos, the importance of robust social safety nets is clear. Partnerships between communities and authorities are also needed.

    A different approach

    Recovery isn’t a separate process that occurs only after disasters. It should be seen as an essential part of managing risk. It’s important to understand what recovery involves and what resources are needed.

    This will help reduce future risks and increase resilience after extreme events.

    Governments should encourage flexible governance structures that value community voices and local knowledge to enable recovery. A good example is the New Orleans Recovery Authority, established after Hurricane Katrina. It involved local residents and city officials in planning and rebuilding efforts.

    Grassroots efforts in Lagos demonstrated the power of sustainable materials and community-led initiatives. Seeing things from the community’s point of view can help tailor solutions that fit the situation and adapt to evolving challenges.

    Training and capacity-building programmes empower communities to be active in their own recovery.

    Mental health and first aid courses were successful in the Ahr Valley. Equipping individuals with skills in sustainable practices and disaster preparedness helps weave a social fabric capable of weathering future storms.

    The Conversation

    Olasunkanmi Habeeb Okunola is a Visiting Scientist at the United Nations University – Institute for Environment and Human Security (UNU-EHS)

    Saskia E. Werners works with United Nations University, Institute for Environment and Human Security (UNU-EHS). She is grateful to have received research grants in support of her research on climate change adaptation and recovery.

    ref. Post-flood recovery: lessons from Germany and Nigeria on how to help people cope with loss and build resilience – https://theconversation.com/post-flood-recovery-lessons-from-germany-and-nigeria-on-how-to-help-people-cope-with-loss-and-build-resilience-240260

  • Climate change is making it harder for people to get the care they need

    Source: ForeignAffairs4

    Source: The Conversation – Africa – By Maria S. Floro, Professor Emerita of Economics, American University

    The world is witnessing the consequences of climate change: long-lasting changes in temperature and rainfall, and more intense and frequent extreme weather events such as heat waves, hurricanes, typhoons, flooding and drought. All make it harder for families and communities to meet their care needs.

    Climate change affects care systems in various ways. First, sudden illnesses and unexpected disabilities heighten the need for care. Second, it reduces access to important inputs for care such as water, food and safe shelter. Third, it can damage physical and social care infrastructures.

    It can also lead to breakdowns of traditional units of caregiving such as households and communities. And it creates new situations of need with the increase in displaced person settlements and refugee camps.

    Climate change creates sudden spikes in the demand for care, and serious challenges to meeting the growing need for care. All this has immediate and long-lasting effects on human well-being.

    The size of the current unmet care needs throughout the world is substantial. In childcare alone, about 23% of children worldwide – nearly 350 million – need childcare but do not have it. Families in low- and lower-middle-income countries are the most in need.

    Similarly, as the world’s population ages rapidly, only a small proportion of the elderly who need assistance are able to use formal care (in an institution or paid homecare). Most are cared for by family members or other unpaid caregivers. Much of this unpaid care and formal care work is provided by women and girls.

    Hundreds of millions of people around the world struggle to get healthcare. Expansion of access to essential health services has slowed compared with pre-2015 rates. And healthcare costs still create financial hardship.

    Without comprehensive public and global support for care provision and the integration of care in the climate agenda, unmet care needs will only grow and inequalities will widen.

    Impact

    Climate change interacts with human health in complex ways. Its impact is highly uneven across populations. It depends on geographical region, income, education, gender roles, social norms, level of development, and the institutional capacity and accessibility of health systems.

    In 2018-22, Africa experienced the biggest increase in the heat-related mortality rate since 2000-05. This is not surprising, as the continent has more frequent health-threatening temperatures than ever before and a growing population of people older than 65.

    Africa was also the region most affected by droughts in 2013-22, with 64% of its land area affected by at least one month of extreme drought per year on average. It was followed by Oceania (55% of its land area) and South and Central America (53%).

    Scientific evidence also points to increases in health inequalities caused by climate change. The health effects of climate change are not uniformly felt by different population groups.

    Exposure, severity of impact, and ability of individuals to recover depend on a variety of factors. Physiological characteristics, income, education, type of occupation, location, social norms and health systems are some of them.

    For example, older people and young children face the greatest health risks from high temperatures.

    There is also evidence of the disproportionate effect of climate change on the health of people living in poverty and those who belong to disadvantaged groups.

    Women of lower social and economic status and with less education are more vulnerable to heat stress compared to women in wealthier households and with higher education or social status. They are exposed to pollution in the absence of clean cooking fuel, and to extreme heat as they walk to gather water and fuel, or do other work outdoors.

    Bad sanitation in poor urban areas increases the incidence of water-borne diseases after heavy rains and floods.

    Lack of access to healthcare services and the means to pay for medicines make it difficult for women and men in low-income households to recover from illness, heat strokes, and air pollution-related ailments.

    Mental health problems are being attributed to climate change as well. Studies show that the loss of a family or kin member, home, livelihood or a safe environment can have direct emotional impacts.

    These adverse impacts increase the demand for caregiving and the care workload. Climate-induced health problems force family and community caregivers, particularly women, to spend more time looking after the sick and disabled, particularly frail elderly people and children.

    Effect on food and water

    Climate change threatens the availability of food, clean water and safe shelter. It erodes households’ and communities’ care capacity and hence societies’ ability to thrive.

    Fluctuations in food supply and rising food prices as a result of environmental disasters, along with the inadequacy of government policies, underscore the mounting challenge of meeting food needs.

    The threat of chronic shortage of safe drinking water has also risen. Water scarcity is an area where structural inequalities and gender disparities are laid bare.

    Care for the sick and disabled, the young and the elderly is compromised when water is scarce.

    Effects on providing care

    Extreme weather events disrupt physical care infrastructures. It may be hard to reach hospitals, clinics, daycare centres, nursery schools and nursing homes. Some facilities may be damaged and have to close.

    Another type of care system that can break down is family networks and the support provided by friends and neighbours. These informal care-sharing arrangements are illustrated in a study of three large informal settlements in Nairobi.

    About half (50.5%) of the sampled households reported having had a sick member in the two weeks before the survey. The majority relied on close friends and family members living nearby for care and support.

    Studies have shown that climate change eventually leads to livelihood loss and resource scarcity, which can weaken social cohesion and local safety nets in affected communities.

    Heightened risks and uncertainty and imminent changes in socio-economic and political conditions can also compel individuals or entire households to migrate. Migration is caused by a host of factors, but it has increasingly been a climate-related response.

    The World Bank’s Groundswell report, for example, projected that climate change could force 216 million people to move within their countries by 2050 to escape its slow-onset impacts.

    A possible consequence of migration is the withdrawal of care support provided by the migrating extended kin, neighbours or friends, increasing the caregiving load of people left behind.

    In the case of forced displacements, the traditional social networks existing in communities are disrupted entirely.

    What’s needed

    There are compelling reasons to believe that meeting care needs can also help mitigate the effects of climate change. And actions to meet carbon-zero goals, prevent biodiversity loss and regenerate ecosystems can reduce the care work burden that falls heavily on families, communities and women.

    Any effort to tackle these grave problems should be comprehensive in scope and must be based on principles of equality, universality, and responsibility shared by all.

    This article is part of a series of articles initiated through a project led by the Southern Centre for Inequality Studies, in collaboration with the International Development Research Centre and a group of feminist economists and climate scientists across the world.

    The Conversation

    Maria S. Floro does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Climate change is making it harder for people to get the care they need – https://theconversation.com/climate-change-is-making-it-harder-for-people-to-get-the-care-they-need-240557

  • Parental controls on children’s tech devices are out of touch with child’s play

    Source: ForeignAffairs4

    Source: The Conversation – Canada – By Sara M. Grimes, Wolfe Chair in Scientific and Technological Literacy and Professor, McGill University

    Parenting in the digital age can be stressful and demands a lot from parents.

    The Family Online Safety Institute (FOSI) recently released its annual Online Safety Survey, which found that almost 50 per cent of parents surveyed aren’t using parental controls to manage their children’s devices. These are tools that would ostensibly help parents filter out inappropriate content or unwanted interactions on their children’s devices.

    The FOSI authors conclude that the reason parents aren’t using the tools is that they feel “overwhelmed,” and recommend parents educate themselves as a good first step toward broader use.

    While feeling overwhelmed is real, we suggest a bigger problem with parental controls is how they are designed. This includes how little attention is given to supporting open communication between parents and children.

    Once a year for the past three years, we’ve asked the same 33 children (initially aged six to 12) what they think about content ratings, online safety, game monetization and privacy. Our team, with combined expertise in communication, education, policy and game studies, analyzed their answers.

    We also asked their parents how they mediated their kids’ gaming. Nearly half of them don’t use parental controls either. They say parental controls don’t always work as promised, offer little context about how settings affect gameplay and force binary choices that don’t align with household rules or with children’s maturity levels.

    The parents we asked said they aren’t avoiding parental controls because they feel overwhelmed by them. It’s that the tools are poorly designed.

    Parental controls can introduce more problems

    At the same time, many of the parents described themselves as highly engaged in their child’s gameplay: talking with their children regularly or encouraging play in shared, supervised spaces. Several said they choose to trust their child rather than set top-down limits.

    Our findings align with previous research on digital parenting. In one British study, parents said they felt some controls were valuable supplements to mediation, while other controls were poorly designed, introducing more problems than solutions.

    The use of parental controls doesn’t necessarily translate to increased child safety. In fact, using parental controls can create a disconnect between parents and children on key safety issues.

    Awareness of risks

    Six children we interviewed were not aware their parents were using controls, and at least two children revealed they didn’t even know why a parent would use parental controls in the first place. In this context, parents’ efforts to protect their children had the unintended side effect of obscuring vital knowledge, leaving the children unaware of some of the key risks associated with playing online. Parental controls can remove opportunities to teach kids about safety if they aren’t part of the conversation.

    We believe that the behind-the-scenes protections enabled by (some) parental controls can be detrimental to parent-child communication about online safety. What are the risks? How can children avoid the riskiest behaviour? What should they do when or if they’ve encountered danger?

    Meanwhile, parents aren’t always familiar with the features and activities they are asked to restrict or allow. Very few parental controls contain information about how gameplay will be impacted by their settings. Many contain terms only someone familiar with the game would understand, while others are hard to navigate.

    All of this can lead to misinterpretations and parent-child conflicts, making the tools even harder to use.

    Power of communication

    Open communication between parents and children on safety topics fosters trust, which increases the likelihood kids will turn to their parents for help when something dangerous happens.

    It enables children to build resiliency, which in turn reduces the risk they’ll be harmed by negative online encounters.

    Research also suggests that parent-child communication may be more effective at helping to avoid harm than embedded restrictions enabled by parental controls.

    The importance of open communication is also emphasized in the FOSI report. In households where conversations about online safety happened regularly (six times or more a year), parents and children were both more likely to view parental controls as a useful and valuable tool for online safety.

    This, the authors conclude, “supports the view of online safety as a collaborative effort as opposed to a priority imposed by parents on their children.”

    On this point, we couldn’t agree more. Families would benefit from making parental controls and safety settings a family affair. Kids and parents have a lot to learn from each other about the digital world, and reviewing these systems together can provide a much-needed opening for crucial conversations about risk, safety and what kids find meaningful about digital play.

    Rethinking safety tools

    Let’s not pretend parental controls are a panacea for child safety.

    Many parental controls contain serious design flaws and limitations. Very few comprehensively address the needs and concerns of either children or their parents.

    Now that lawmakers are starting to make parental controls a mandatory part of new child safety legislation, we urgently need to start taking a closer and more critical look at what they can and can’t do.

    Parental controls can be a useful tool when they are designed well, applied with transparency, and provide families with ample options so they can be tailored to not only fit with but foster household rules and open communication.

    There’s a lot of work to be done before this is the standard. But also a growing impetus for game and other tech companies to make it happen.

    The Conversation

    Sara M. Grimes receives funding from the Social Sciences and Humanities Research Council (SSHRC) of Canada.

    Riley McNair does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Parental controls on children’s tech devices are out of touch with child’s play – https://theconversation.com/parental-controls-on-childrens-tech-devices-are-out-of-touch-with-childs-play-257874

  • Workplaces have embraced mindfulness and self-compassion — but did capitalism hijack their true purpose?

    Source: ForeignAffairs4

    Source: The Conversation – Canada – By Yasemin Pacaci, Postdoctoral Fellow, Smith School of Business, Queen’s University, Ontario

    When practiced with integrity, mindfulness and self-compassion can improve the collective well-being and personal agency of employees. (Shutterstock)

    Mindfulness and self-compassion have become popular tools for improving mental health and well-being in the workplace. Mindfulness involves paying attention to thoughts, emotions and surroundings without judgment, much like watching clouds pass in the sky. This moment-to-moment awareness helps people respond skilfully rather than react automatically.

    Self-compassion builds on mindfulness by encouraging people to meet difficult feelings and experiences with kindness instead of resistance. In other words, mindfulness helps people first recognize their suffering, while self-compassion helps people respond with kindness.

    Both mindfulness and self-compassion can be practised formally through meditations like body scans, breath awareness or loving-kindness meditation, and informally by bringing mindful attention to mind, emotions and everyday activities.

    Both practices have the potential to transform dysfunctional workplaces by improving the collective well-being and personal agency of employees.

    Yet too often, these practices are introduced superficially to boost productivity and performance, rather than used to address the root causes of workplace stress. It’s a pattern I’ve witnessed repeatedly in my years as a mindfulness teacher and researcher.

    This brings into question whether these practices can thrive in capitalist systems that prioritize profit over people. But rather than rejecting mindfulness and self-compassion as incompatible with capitalism, I argue we need a more thoughtful framework that stays true to their essence while tackling common misunderstandings and misuses.

    How capitalism is co-opting mindfulness

    Academic and practitioner critics have raised concerns about how mindfulness and self-compassion practices are being integrated into corporate life.

    Some of these critics argue that companies are incorporating mindfulness and self-compassion practices not to fix systemic problems, but to boost their own productivity and shift the responsibility for stress onto employees.

    In these cases, critics use the term “McMindfulness” to describe a commodified, diluted version of mindfulness that is stripped of its roots in Buddhist philosophy.

    If organizations want to reap the full benefits of mindfulness and self-compassion, they need to take a more deliberate, systemic approach.
    (Unsplash/Redd Francisco)

    Some critics have gone further, claiming that mindfulness encourages contentment with the status quo and may make employees more vulnerable to exploitation.

    While these critiques raise valid concerns, they often create more confusion and resistance than meaningful dialogue or practical solutions for implementing mindfulness and self-compassion in the workplace.

    Empirical research offers a more nuanced perspective. Mindfulness and self-compassion, when practised consistently, can strengthen employees’ sense of agency, improve their self-confidence, support ethical decision-making and action for meaningful change.

    Done right, mindfulness can help workers

    Employees who develop mindfulness and self-compassion skills tend to respond in three main ways, according to research.

    First, they become more aware of dysfunction in the workplace. This awareness can empower them to speak up and advocate for change if it’s within their control and in their own interest. It can also cause them to engage in more ethical practices, especially in toxic work environments.

    Second, they are more likely to leave toxic work environments. When employees realize change is beyond their control, mindfulness and self-compassion can cause them to lose their motivation for work and, indirectly, might prompt them to leave toxic workplaces altogether.

    Third, employees who end up staying in their roles are better able to acknowledge stressors and become less affected by them. However, this doesn’t mean they become more productive or blindly enthusiastic about their jobs. Mindfulness enhances motivation that stems from genuine interest, not from pressure or obligation.

    It’s important to note that mindfulness doesn’t mean these employees condone poor conditions or toxic practices. Rather, it helps them see reality more clearly, without denial or avoidance.

    And for employers hoping mindfulness will instantly boost engagement or drive performance, research shows employees may actually become more critical of their work and less willing to perform mundane tasks.

    Towards true workplace transformation

    Mindfulness alone cannot fix a toxic workplace. When organizations introduce mindfulness programs without first addressing the underlying causes of stress or toxicity, they’re unlikely to see the results they expect.

    If organizations want to reap the full benefits of mindfulness and self-compassion, they need to take a more deliberate, structured approach. Psychologist Kurt Lewin’s three-step change management model offers a useful guide:

    Step 1. Unfreeze: Address the root causes of workplace stress

    • Address systemic stressors. Before introducing any well-being initiative, organizations must confront actual sources of stress such as excessive workloads, toxic leadership and job insecurity.
    • Correct misunderstandings. Clarify what mindfulness and self-compassion actually are to reduce scepticism and confusion.
    • Avoid mandatory participation. Giving employees the freedom to opt in fosters authentic engagement and sustains interest.
    Without addressing the systemic causes of stress, mindfulness practices can prove ineffective.
    (Shutterstock)

    Step 2. Change: Implement practices ethically and intentionally

    • Lead by example at the top. Instead of only offering these programs to employees, leaders should engage with mindfulness and self-compassion practices themselves. When senior figures lead by example, these programs gain legitimacy and workplaces foster more ethical, people-centered leadership that goes beyond performance and productivity.
    • Ensure cultural sensitivity. Small cultural adaptations can improve the inclusion of mindfulness and self-compassion sessions. For instance, research has found that in Hispanic communities, using familiar stories or proverbs can make mindfulness sessions more relatable and improve engagement.
    • Preserve ethical foundations. Present mindfulness and self-compassion as universal practices, not tied to any one religion. This preserves their ethical underpinnings while ensuring they remain universal and accessible to all.

    Step 3. Freeze: Embed mindfulness and self-compassion into workplace culture

    • Encourage small, daily practices. Offer simple tools like journaling or mindful breathing breaks that employees can tailor to their own needs and schedules.
    • Provide ongoing support. Create time and space for continued practice, such as guided meditations, mindfulness moments in meetings or gratitude boards so new habits take root.
    • Measure impact holistically. Consider hiring qualified professionals to evaluate program effectiveness, address emerging needs and keep the organization moving forward.

    Moving beyond wellness window-dressing

    Mindfulness and self-compassion are not magic bullets, but they can still be powerful catalysts for change.

    When introduced with a deliberate and thoughtful approach, mindfulness and self-compassion can help workplaces move beyond shallow wellness “hacks” toward truly transformative practices, even in high-pressure, profit-driven environments.

    Far from serving as a quick fix or a mere productivity tool, these practices encourage employees to challenge the status quo, take meaningful action, build healthier relationships and make more ethical decisions. They can help individual employees flourish within and beyond their workplaces.

    The true value of mindfulness and self-compassion practices lies not in short-term outcomes or surface-level improvements, but in helping individuals be more aware of themselves, their surroundings and the choices they make, which is beyond any outcome or context.

    The Conversation

    Yasemin Pacaci does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Workplaces have embraced mindfulness and self-compassion — but did capitalism hijack their true purpose? – https://theconversation.com/workplaces-have-embraced-mindfulness-and-self-compassion-but-did-capitalism-hijack-their-true-purpose-258043

  • The oldest rocks on Earth are 4.3 billion years old

    Source: ForeignAffairs4

    Source: The Conversation – Canada – By Hanika Rizo, Associate Professor, Department of Earth Sciences, Carleton University

    Earth formed about 4.6 billion years ago, during the geological eon known as the Hadean. The name “Hadean” comes from the Greek god of the underworld, reflecting the extreme heat that likely characterized the planet at the time.

    By 4.35 billion years ago, the Earth might have cooled down enough for the first crust to form and life to emerge.

    However, very little is known about this early chapter in Earth’s history, as rocks and minerals from that time are extremely rare. This lack of preserved geological records makes it difficult to reconstruct what the Earth looked like during the Hadean Eon, leaving many questions about its earliest evolution unanswered.

    We are part of a research team that has confirmed the oldest known rocks on Earth are located in northern Québec. Dating back 4.3 billion years, these rocks provide a rare and invaluable glimpse into the origins of our planet.

    Geologists Jonathan O’Neil and Chris Sole examine rocks in northern Québec.
    (H. Rizo), CC BY

    Remains from the Hadean Eon

    The Hadean Eon is the first period in the geological timescale, spanning from Earth’s formation 4.6 billion years ago to around 4.03 billion years ago.

    The oldest terrestrial materials ever dated by scientists are extremely rare zircon minerals that were discovered in western Australia. These zircons were formed as early as 4.4 billion years ago, and while their host rock eroded away, the durability of zircons allowed them to be preserved for a long time.

    Studies of these zircon minerals have given us clues about the Hadean environment, and the formation and evolution of Earth’s oldest crust. The zircons’ chemistry suggests that they formed in magmas produced by the melting of sediments deposited at the bottom of an ancient ocean. This makes the zircons evidence that Earth cooled rapidly during the Hadean Eon, and that liquid water oceans formed early on.

    Other research on the Hadean zircons suggests that the Earth’s earliest crust was mafic (rich in magnesium and iron). Until recently, however, the existence of that crust remained to be confirmed.

    In 2008, a study led by Jonathan O’Neil (now an associate professor, then a McGill University doctoral student) proposed that rocks of this ancient crust had been preserved in northern Québec and were the only known vestige of the Hadean.

    Since then, the age of those rocks — found in the Nuvvuagittuq Greenstone Belt — has been controversial and the subject of ongoing scientific debate.

    a flat, rocky landscape
    The Nuvvuagittuq Greenstone Belt in northern Québec.
    (H. Rizo), CC BY

    ‘Big, old solid rock’

    The Nuvvuagittuq Greenstone Belt is located in the northernmost region of Québec, in the Nunavik region above the 55th parallel. Most of the rocks there are metamorphosed volcanic rocks, rich in magnesium and iron. The most common rocks in the belt are called the Ujaraaluk rocks, meaning “big old solid rock” in Inuktitut.

    The age of 4.3 billion years was proposed after variations were detected in neodymium-142, an isotope produced exclusively during the Hadean through the radioactive decay of samarium-146. The relationship between samarium and neodymium isotope abundances had been previously used to date meteorites and lunar rocks, but before 2008 had never been applied to Earth rocks.
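The logic of this extinct-isotope chronometer can be illustrated with a short calculation. Samarium-146 has a half-life of roughly 103 million years (a commonly cited value; some published estimates differ), so it disappeared within a few hundred million years of Earth's formation. The sketch below is illustrative only, not the researchers' actual method:

```python
# Illustrative sketch of why neodymium-142 variations point to the Hadean.
# Samarium-146 decays to neodymium-142. Its half-life is roughly 103
# million years (a commonly cited value; some published estimates differ).
SM146_HALF_LIFE_MYR = 103.0

def fraction_remaining(elapsed_myr: float) -> float:
    """Fraction of the original samarium-146 left after `elapsed_myr` million years."""
    return 0.5 ** (elapsed_myr / SM146_HALF_LIFE_MYR)

# After one half-life, half of the samarium-146 remains.
print(fraction_remaining(103.0))        # prints 0.5

# Roughly 500 million years after Earth formed, under 4% is left; the
# isotope is soon effectively extinct, so no later geological process can
# create new neodymium-142 differences between rocks.
print(f"{fraction_remaining(500.0):.3f}")
```

Because samarium-146 was gone so early, any rock that carries neodymium-142 variations must preserve chemistry inherited from the Hadean, which is why those variations support an age beyond 4 billion years.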

    This interpretation, however, was challenged by several research groups, some of whom studied zircons within the belt and proposed a younger age of at most 3.78 billion years, placing the rocks in the Archean Eon instead.

    Confirming the Hadean Age

    In the summer of 2017, we returned to the Nuvvuagittuq belt to take a closer look at the ancient rocks. This time, we collected intrusive rocks — called metagabbros — that cut across the Ujaraaluk rock formation, hoping to obtain independent age constraints. Because these newly studied metagabbros intrude into the Ujaraaluk rocks, the Ujaraaluk rocks must be older.

    The project was led by master’s student Chris Sole at the University of Ottawa, who joined us in the field. Back in the laboratory, we collaborated with French geochronologist Jean-Louis Paquette. Additionally, two undergraduate students — David Benn (University of Ottawa) and Joeli Plakholm (Carleton University) — participated in the project.

    We combined our field observations with petrology, geochemistry and geochronology, and applied two independent samarium-neodymium dating methods, techniques used to assess the absolute ages of magmatic rocks before they become metamorphic. Both assessments yielded the same result: the intrusive rocks are 4.16 billion years old.

    a rocky landscape silhouetted by sunset
    Sunset at the Nuvvuagittuq Greenstone Belt.
    (H. Rizo), CC BY

    The oldest rocks

    Since these metagabbros cut across the Ujaraaluk formation, the Ujaraaluk rocks must be even older, placing them firmly in the Hadean Eon.

    Studying the Nuvvuagittuq rocks, the only preserved rocks from the Hadean, provides a unique opportunity to learn about the earliest history of our planet. They can help us understand how the first continents formed, and how and when Earth’s environment evolved to become habitable.

    The Conversation

    Hanika Rizo receives funding from the Natural Sciences and Engineering Research Council of Canada (NSERC).

    Jonathan O’Neil receives funding from the Natural Sciences and Engineering Research Council of Canada.

    ref. The oldest rocks on Earth are 4.3 billion years old – https://theconversation.com/the-oldest-rocks-on-earth-are-4-3-billion-years-old-259657

  • University leaders have to make sense of massive disruption — 4 ways they do it

    Source: ForeignAffairs4

    Source: The Conversation – Canada – By Daniel Atlin, Adjunct Professor, Gordon S. Lang School of Business, University of Guelph

    Trying to navigate an environment where massive disruption and unprecedented change is the norm presents a challenge for business leaders everywhere.

    Social-purpose, multi-stakeholder organizations like post-secondary institutions, hospitals, governments and NGOs are particularly affected.

    The practice of “sense-making” — making sense of the situations people find themselves in, in the words of organizational theorist Karl Weick — offers an innovative and timely framework that can help social-purpose leaders address complexity.

    Senior post-secondary leaders study

    Management experts have described sense-making as the key skill needed in an age of disruption. My own research, conducted while completing a master’s degree in change leadership, confirmed this.

    I interviewed more than two dozen senior leaders in complex organizations in Canada, the United Kingdom, Australia and New Zealand — the majority of whom were in the post-secondary sector. I found the leaders I interviewed were intuitively using elements from Weick’s organizational sense-making framework.

    As one leader shared:

    “The first thing you need to do is to recognize that it’s your role to help the rest of your community make sense of what’s happening around you. It’s something that I take very seriously.”

    Deborah Ancona, professor of management at MIT, says:

    “Sense-making is most often needed when our understanding of the world becomes unintelligible in some way. This occurs when the environment is changing rapidly, presenting us with surprises for which we are unprepared or confronting us with adaptive, rather than technical problems to solve.”

    Leading in ‘age of outrage’

    Social-purpose organizations face common issues such as a lack of funding, system fragmentation, competing stakeholders, new entrants and the challenges of emerging technologies.

    They are also at the centre of what business and public policy professor Karthik Ramanna describes as “the age of outrage,” reflected in heightened polarization. Against this backdrop, it’s increasingly challenging to attract and retain leaders.

    I heard from leaders who felt they didn’t have the proper training for the job or support once they started their roles. In part, this is because few of them, including those involved in their hiring, seem to realize the actual messiness inherent within their organizations.

    This brings to mind the parable that writer David Foster Wallace used in his 2005 convocation speech at Kenyon College, in which two young fish are told by an older fish that they are swimming in water. One of the young fish then turns to the other in surprise and says: “What is water anyway?”

    Lack of agency

    I heard from various leaders who experienced an “aha” moment when they realized they were immersed within a fluid and dynamic organizational environment that they were expected to run like a traditional business. This realization gave them a framework to understand the lack of agency they often experienced.

    The challenge with social-purpose organizations is that they’re complex adaptive systems in which individual interactions form an ever-changing array of networks generating emergent behaviours that are often unpredictable. Complex adaptive systems also tend to revert to the status quo when faced with change.

    So how do social-purpose leaders navigate change and this challenging organizational context? They wrap their efforts around purpose. It’s an anchor point and unifying focus for leaders, teams and all stakeholders.

    4 strategies

    Based on my research, I’ve identified four main sense-making strategies that leaders use:

    Exploration and map-making: These pursuits help leaders extract a steady flow of information and data from their interactions both inside and outside their organizations. This allows them to develop high-level, adaptive frameworks that are constantly in flux — similar to Google Maps, as it generates live snapshots of traffic flows and suggested routes.

    Storytelling and narrative development: Leaders use storytelling and narrative development to project ideas, purposes and visions into the future. This allows them to connect emotionally and inspire people and communities. Recognizing their role as storyteller-in-chief can align disparate parts of an organization into a coherent and engaged whole.

    Invention and improvisation: These are employed by leaders to test assumptions as they learn what works and what doesn’t. This approach allows them to respond in real time to the never-ending flow of new information. Without taking risks, leaders can become stuck in paralysis.

    Adaptation and collaboration: These practices help leaders keep their organizations relevant. Leaders spoke about the need to foster adaptation. They also stressed the need to attract new resources through collaboration across like-minded institutions, governments, funding partners and the private sector.

    Embracing a sense-making mindset

    Thinking that benefits the interests and perspectives of the total enterprise is a critical but challenging task for leaders in social-purpose organizations.

    Time and energy — two scarce resources — are necessary to build aligned and high-performing teams and to break down silos. Team alignment cannot be achieved through the occasional team-building session, but requires an ongoing commitment and a well-articulated plan.

    Social-purpose organizations need practices, frameworks and metrics that are tailored to organizations’ unique needs. Rather than spending resources, time and energy on strategic plans, some leaders are building more flexible strategic frameworks or using strategic foresight to guide an innovative vision for the future.

    Leadership can be lonely

    It’s also important to remember that leadership can be lonely. To survive and thrive, social-purpose leaders must remember to seek out their own coaches and build communities of practice to enhance their lived experience and activities.

    Developing an outer shell to weather criticism also helps. While leaders can’t please everyone, sense-making leaders find strength and build endurance in the recognition that the roles they play are meaningful, satisfying and essential — not only within the organizations they serve but through the collective work their organizations accomplish in the world.

    Leaders (and board members) must realize that hiring the same people with the same profile as the past won’t make an organization ready for change, but instead reinforces the status quo.

    By recognizing the messiness of their organizations and using sense-making skills, leaders in social-purpose organizations have better odds of surviving the perils and challenges of massive disruption and unprecedented change.

    The Conversation

    Daniel Atlin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. University leaders have to make sense of massive disruption — 4 ways they do it – https://theconversation.com/university-leaders-have-to-make-sense-of-massive-disruption-4-ways-they-do-it-257866

  • Employers are failing to insure the working class – Medicaid cuts will leave them even more vulnerable

    Source: ForeignAffairs4

    Source: The Conversation – USA (3) – By Sumit Agarwal, Assistant Professor of Internal Medicine, University of Michigan

    The Congressional Budget Office estimates that 7.8 million Americans across the U.S. will lose their coverage through Medicaid – the public program that provides health insurance to low-income families and individuals – under the multitrillion-dollar domestic policy package that President Donald Trump signed into law on July 4, 2025.

    That includes 247,000 to 412,000 of my fellow residents of Michigan based on the House Reconciliation Bill in early June. There are similarly deep projected cuts within the Senate version of the legislation, which Trump signed.

    Many of these people are working Americans who will lose Medicaid because of the onerous paperwork involved with the proposed work requirements.

    They won’t be able to get coverage in the Affordable Care Act Marketplaces after losing Medicaid. Premiums and out-of-pocket costs are likely to be too high for those making less than 100% to 138% of the federal poverty level who do not qualify for health insurance marketplace subsidies. Funding for this program is also under threat.

    And despite being employed, they also won’t be able to get health insurance through their employers because it is either too expensive or not offered to them. Researchers estimate that coverage losses will lead to thousands of medically preventable deaths across the country because people will be unable to access health care without insurance.

    I am a physician, health economist and policy researcher who has cared for patients on Medicaid and written about health care in the U.S. for over eight years. I think it’s important to understand the role of Medicaid within the broader insurance landscape. Medicaid has become a crucial source of health coverage for low-wage workers.

    Michigan removed work requirements from Medicaid

    A few years ago, Michigan was slated to institute Medicaid work requirements, but the courts blocked the implementation of that policy in 2020. It would have cost upward of US$70 million due to software upgrades, staff training, and outreach to Michigan residents enrolled in the Medicaid program, according to the Michigan Department of Health and Human Services.

    Had it gone into effect, 100,000 state residents were expected to lose coverage within the first year.

    The state took the formal step of eliminating work requirements from its statutes earlier this year in recognition of implementation costs being too high and mounting evidence against the policy’s effectiveness.

    When Arkansas instituted Medicaid work requirements in 2018, there was no increase in employment, but within months, thousands of people enrolled in the program lost their coverage. The reason? Many people were tripped up by paperwork and red tape, while relatively few actually failed to meet the work criteria. It is a recipe for widespread coverage losses without meeting any of the policy’s purported goals.

    Work requirements, far from incentivizing work, paradoxically remove working people from Medicaid with nowhere else to go for insurance.

    Shortcomings of employer-sponsored insurance

    Nearly half of Americans get their health insurance through their employers.

    In contrast to a universal system that covers everyone from cradle to grave, an employer-first system leaves huge swaths of the population uninsured. This includes tens of millions of working Americans who are unable to get health insurance through their employers, especially low-income workers who are less likely to even get the choice of coverage from their employers.

    Over 80% of managers and professionals have employer-sponsored health coverage, but only 50% to 70% of blue-collar workers in service jobs, farming, construction, manufacturing and transportation can say the same.

    There are some legal requirements mandating employers to provide health insurance to their employees, but the reality of low-wage work means many do not fall under these legal protections.

    For example, employers are allowed to incorporate a waiting period of up to 90 days before health coverage begins. The legal requirement also applies only to full-time workers. Health coverage can thus remain out of reach for seasonal and temporary workers, part-time employees and gig workers.

    Even if an employer offers health insurance to their low-wage employees, those workers may forego it because the premiums and deductibles are too high to make it worth earning less take-home pay.

    To make matters worse, layoffs are more common for low-wage workers, leaving them with limited options for health insurance during job transitions. And many employers have increasingly shed low-wage staff, such as drivers and cleaning staff, from their employment rolls and contracted that work out. Known as the fissuring of the workplace, it allows employers of predominantly high-income employees to continue offering generous benefits while leaving no such commitment to low-wage workers employed as contractors.

    Medicaid fills in gaps

    Low-income workers without access to employer-sponsored insurance had virtually no options for health insurance in the years before key parts of the Affordable Care Act went into effect in 2014.

    Research my coauthors and I conducted showed that blue-collar workers have since gained health insurance coverage, cutting the uninsured rate by a third thanks to the expansion of Medicaid eligibility and subsidies in the health insurance marketplaces. This means low-income workers can more consistently see doctors, get preventive care and fill prescriptions.

    Further evidence from Michigan’s experience has shown that Medicaid can help the people it covers do a better job at work by addressing health impairments. It can also improve their financial well-being, including fewer problems with debt, fewer bankruptcies, higher credit scores and fewer evictions.

    Premiums and cost sharing in Medicaid are minimal compared with employer-sponsored insurance, making it a more realistic and accessible option for low-income workers. And because Medicaid is not tied directly to employment, it can promote job mobility, allowing workers to maintain coverage within or between jobs without having to go through the bureaucratic complexity of certifying work.

    Of course, Medicaid has its own shortcomings. Payment rates to providers are low relative to other insurers, access to doctors can be limited, and the program varies significantly by state. But these weaknesses stem largely from underfunding and political hostility – not from any intrinsic flaw in the model. If anything, Medicaid’s success in covering low-income workers and containing per-enrollee costs points to its potential as a broader foundation for health coverage.

    The current employer-based system, which is propped up by an enormous and regressive tax break for employer-sponsored insurance premiums, favors high-income earners and contributes to wage stagnation. In my view, which is shared by other health economists, a more public, universal model could better cover Americans regardless of how someone earns a living.

    Over the past six decades, Medicaid has quietly stepped into the breach left by employer-sponsored insurance. Medicaid started as a welfare program for the needy in the 1960s, but it has evolved and adapted to fill the needs of a country whose health care system leaves far too many uninsured.

    This article was updated on July 4, 2025, to reflect Trump signing the bill into law.

    The Conversation

    Sumit Agarwal does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Employers are failing to insure the working class – Medicaid cuts will leave them even more vulnerable – https://theconversation.com/employers-are-failing-to-insure-the-working-class-medicaid-cuts-will-leave-them-even-more-vulnerable-259256

  • Why Texas Hill Country, where a devastating flood killed dozens, is one of the deadliest places in the US for flash flooding

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Hatim Sharif, Professor of Civil and Environmental Engineering, The University of Texas at San Antonio

    A Kerrville, Texas, resident watches the flooded Guadalupe River on July 4, 2025. Eric Vryn/Getty Images

    Texas Hill Country is known for its landscapes, with shallow rivers winding among hills and through rugged valleys. But that geography also makes it one of the deadliest places in the U.S. for flash flooding.

    In the early hours of July 4, 2025, a rush of flood water swept through an area dotted with summer camps and small towns about 70 miles west of San Antonio. At least 27 people died, and about two dozen girls from one camp and other people in the area were still unaccounted for the following morning, officials said. More than 200 people had to be rescued.

    The flooding began as many flash floods in this region do, with a heavy downpour that sent water sheeting off the hillsides into creeks. The creeks poured into the Guadalupe River. Around 3 a.m. on July 4, National Weather Service data shows the river was rising about 1 foot every 5 minutes near the camp. By 4:30 a.m., the water had risen more than 20 feet.
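    Simple arithmetic shows how extreme those figures are. The sketch below treats the reported "1 foot every 5 minutes" as a constant rate over the 90 minutes from 3 a.m. to 4:30 a.m.; it is a rough cross-check, not National Weather Service methodology:

```python
# Rough arithmetic on the reported Guadalupe River rise (illustrative only).
RISE_FT_PER_INTERVAL = 1.0   # "rising about 1 foot every 5 minutes"
INTERVAL_MIN = 5.0
ELAPSED_MIN = 90.0           # 3:00 a.m. to 4:30 a.m.

rise_at_reported_rate = RISE_FT_PER_INTERVAL * (ELAPSED_MIN / INTERVAL_MIN)
print(rise_at_reported_rate)  # prints 18.0 (feet)

# The observed rise ("more than 20 feet") exceeds this, so the river must
# have climbed even faster than 1 foot per 5 minutes during part of that window.
```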

    Flood expert Hatim Sharif, a hydrologist and civil engineer at the University of Texas at San Antonio, explains what makes this part of the country, known as Flash Flood Alley, so dangerous.

    What makes Hill Country so prone to flooding?

    Texas as a whole leads the nation in flood deaths, and by a wide margin. A colleague and I analyzed data from 1959 to 2019 and found 1,069 people had died in flooding in Texas over those six decades. The next highest total was in Louisiana, with 693.

    Many of those flood deaths have been in Hill Country, an area known as Flash Flood Alley. It’s a crescent of land that curves from near Dallas down to San Antonio and then westward.

    The hills are steep, and the water moves quickly when it floods. This is a semi-arid area with soils that don’t soak up much water, so the water sheets off quickly and the shallow creeks can rise fast.

    When those creeks converge on a river, they can create a wall of water that wipes out homes and washes away cars and, unfortunately, anyone in its path.

    Hill Country has seen some devastating flash floods. In 1987, heavy rain in western Kerr County quickly flooded the Guadalupe River, triggering a flash flood similar to the one in 2025. Ten teenagers being evacuated from a camp died in the rushing water.

    San Antonio, considered the gateway to Hill Country, was hit with another flash flood on June 12, 2025, that killed 13 people whose cars were swept away when they drove into high water from a flooding creek near an interstate ramp in the early morning.

    Why does the region get such strong downpours?

    One reason Hill Country gets powerful downpours is the Balcones Escarpment.

    The escarpment is a line of cliffs and steep hills created by a geologic fault. When warm air from the Gulf rushes up the escarpment, it condenses and can dump a lot of moisture. That water flows down the hills quickly, from many different directions, filling streams and rivers below.

    As temperatures rise, the warmer atmosphere can hold more moisture, increasing the downpour and flood risk.

    The same effect can contribute to flash flooding in San Antonio, where the large amount of paved land and lack of updated drainage to control runoff adds to the risk.

    What can be done to improve flash flood safety?

    First, it’s important for people to understand why flash flooding happens and just how fast the water can rise and flow. In many arid areas, dry or shallow creeks can quickly fill up with fast-moving water and become deadly. So people should be aware of the risks and pay attention to the weather.

    Improving flood forecasting, with more detailed models of the physics and water velocity at different locations, can also help.

    Probabilistic forecasting, for example, can provide a range of rainfall scenarios, enabling authorities to prepare for worst-case scenarios. A scientific framework linking rainfall forecasts to the local impacts, such as streamflow, flood depth and water velocity, could also help decision-makers implement timely evacuations or road closures.
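    A minimal Monte Carlo sketch shows what a probabilistic forecast offers over a single best-guess total. Every number below is hypothetical, chosen only for illustration; real forecasts come from ensemble weather models:

```python
import random

# Illustrative Monte Carlo sketch of probabilistic rainfall forecasting.
# All parameters are hypothetical, for illustration only.
random.seed(42)

MEAN_RAIN_IN = 6.0         # hypothetical forecast mean, inches
SPREAD_IN = 2.5            # hypothetical forecast uncertainty
FLOOD_THRESHOLD_IN = 10.0  # hypothetical rainfall that triggers flash flooding
N_SCENARIOS = 100_000

# Draw many plausible rainfall totals (clipped at zero).
scenarios = [max(0.0, random.gauss(MEAN_RAIN_IN, SPREAD_IN))
             for _ in range(N_SCENARIOS)]

# Probability of exceeding the flood-triggering threshold.
p_flood = sum(r > FLOOD_THRESHOLD_IN for r in scenarios) / N_SCENARIOS

# Instead of one number ("expect 6 inches"), authorities see the chance of
# the worst case and can prepare for it.
print(f"P(rain > {FLOOD_THRESHOLD_IN} in) = {p_flood:.3f}")
```

    With these made-up parameters, roughly 5% of scenarios exceed the threshold, the kind of tail-risk figure that can justify preemptive road closures or evacuations.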

    Education is particularly essential for drivers. One to two feet of moving water can wash away a car. People may think their trucks and SUVs can go through anything, but fast-moving water can flip a truck and carry it away.

    Officials can also do more to barricade roads when the flood risk is high to prevent people from driving into harm’s way. We found that 58% of the flood deaths in Texas over the past six decades involved vehicles.

    The storm on June 12 in San Antonio was an example. It was early morning, and drivers had poor visibility. Cars drove into floodwater without seeing the risk until it was too late.

    The Conversation

    Hatim Sharif does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why Texas Hill Country, where a devastating flood killed dozens, is one of the deadliest places in the US for flash flooding – https://theconversation.com/why-texas-hill-country-where-a-devastating-flood-killed-dozens-is-one-of-the-deadliest-places-in-the-us-for-flash-flooding-260555

  • Coups in west Africa have five things in common: knowing what they are is key to defending democracy

    Source: ForeignAffairs4

    Source: The Conversation – Africa (2) – By Salah Ben Hammou, Postdoctoral Research Associate, Rice University

    August 2025 makes it five years since Malian soldiers ousted President Ibrahim Boubacar Keïta in a coup d’état. While the event reshaped Mali’s domestic politics, it also marked the beginning of a broader wave of military takeovers that swept parts of Africa between 2020 and 2023.

    Soldiers have toppled governments in Niger, Burkina Faso (twice), Sudan, Chad, Guinea and Gabon.

    The return of military coups shocked many observers. Once thought to be relics of the cold war, an “extinct” form of regime change, coups appeared to be making a comeback.

    No new coups have taken place since Gabon’s in 2023, but the ripple effects are far from over. Gabon’s coup leader, Gen. Brice Oligui Nguema, formally assumed the presidency in May 2025. In doing so he broke promises that the military would step aside from politics. In Mali, the ruling junta dissolved all political parties to tighten its grip on power.

    Across the affected countries, military rulers remain entrenched. Sudan, for its part, has descended into a devastating civil war following its coup in 2021.

    Analysts often cite weak institutions, rising insecurity, and popular frustration with civilian governments to explain coups. While these factors play a role, they don’t capture the patterns we have observed.

    I have studied and written about military coups for nearly a decade, with a particular focus on this coup wave.

    After a close analysis of the coup cascade, I conclude that the international community must move beyond the view of coups as isolated events.

    Patterns suggest that the Sahelian coups are not isolated. Coup leaders are not only seizing power, they are learning from one another how to entrench authority, sidestep international pressure and craft narratives that legitimise their rule.

    To help preserve democratic rule, the international community must confront five lessons revealed by the recent military takeovers.

    Key lessons

    Contagion: Just a month after Guinea’s military ousted President Alpha Condé, Sudan’s army disrupted its democratic transition. Three months later, Burkina Faso’s officers toppled President Roch Marc Christian Kaboré amid rising insecurity.

    Each case had unique triggers, but the timing suggests more than coincidence.

    Potential coup leaders watch closely, not just to see whether a coup succeeds but also what kinds of challenges arise as the event unfolds. When coups fail and plotters face harsh consequences, others are less likely to follow.

    Whether coups spread depends on the perceived risks as much as on opportunity. But when coups succeed – especially if new leaders quickly take control and avoid immediate instability – they send a signal that can encourage others to act.

    Civilian support matters: Civilian support for coups is real and widely observed.

    Since the start of Africa’s recent coup wave, many commentators have highlighted the cheering crowds that often welcome soldiers, celebrating the fall of unpopular regimes. Civilian support is a common and often underestimated aspect of coup politics. It signals to potential coup plotters that military rule can win legitimacy and public backing.

    This popular support also helps coup leaders strengthen their grip on power, shielding their regimes from both domestic opposition and international pressure. For example, following Niger’s 2023 coup, the putschists faced international condemnation and the threat of military intervention. In response, thousands of supporters gathered in the capital, Niamey, to rally around the coup leaders.

    In Mali, protesters flooded the streets in 2020 to welcome the military’s ousting of President Ibrahim Boubacar Keïta. In Guinea, crowds rallied behind the junta after Alpha Condé was removed in 2021. And in Burkina Faso, both 2022 coups were met with widespread approval.

    International responses: The international community’s response sends equally powerful signals. When those responses are weak, delayed or inconsistent – an absence of meaningful sanctions, token aid suspensions, or merely symbolic suspensions from regional bodies – they can send the message that the illegal seizure of power carries few real consequences.

    International responses to recent coups have been mixed. Some, like Niger’s, triggered strong initial reactions, including sanctions and threats of military intervention.

    But in Chad, Mahamat Déby’s 2021 takeover was effectively legitimised by key international actors, which portrayed it as a necessary step for stability following the battlefield death of his father, President Idriss Déby, at the hands of rebel forces.

    In Guinea and Gabon, regional suspensions were largely symbolic, with little pressure to restore civilian rule. In Mali and Burkina Faso, transitional timelines have been extended repeatedly without much pushback.

    The inconsistency signals to coup leaders that seizing power may provoke outrage, but rarely lasting consequences.

    Coup leaders learn from one another: Contagion isn’t limited to the moment of takeover. Coup leaders also draw lessons from how others entrench themselves afterwards. They watch to see which tactics succeed in defusing opposition and extending their grip on power.

    Entrenched military rule has become the norm across recent coup countries. On average, military rulers have remained in power for nearly 1,000 days since the start of the current wave. Between 2000 and the start of this wave, by contrast, military leaders retained power for an average of just 22 days.

    In Chad, Mahamat Déby secured his grip through a contested 2024 election. Gabon’s Nguema followed in 2025, winning nearly 90% of the vote after constitutional changes cleared the path. In both cases, elections were used to re-brand military regimes as democratic, even as the role of the armed forces remains unchanged.

    Connecting the dots

    Coup governments across Mali, Burkina Faso and Niger have shifted away from western alliances and towards Russia, deepening military and economic ties. All three exited the Economic Community of West African States and formed the Alliance of Sahel States, denouncing regional pressure.

    Aligning with Russia offers these regimes external support and a veneer of sovereignty, while legitimising authoritarianism as independence.

    The final lesson is clear: when coups are treated as isolated rather than interconnected, it’s likely that more will follow. Would-be plotters are watching how citizens react, how the world responds, and how other coup leaders consolidate power.

    When the message they receive is that coups are tolerable, survivable and even rewarded, the deterrent effect weakens.

    Poema Sumrow, a Baker Institute researcher, contributed to this article.

    The Conversation

    Salah Ben Hammou does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Coups in west Africa have five things in common: knowing what they are is key to defending democracy – https://theconversation.com/coups-in-west-africa-have-five-things-in-common-knowing-what-they-are-is-key-to-defending-democracy-258890

  • Child labour numbers rise in homes where adults are jobless – South African study

    Source: ForeignAffairs4

    Source: The Conversation – Africa – By Derek Yu, Professor, Economics, University of the Western Cape

    Child labour is a big concern across the world. It is particularly acute in countries in the global south, where it is estimated that about 160 million children are engaged in child labour, about 87 million of them in sub-Saharan Africa.

    A range of countries have sought to outlaw child labour because it denies children their childhood as well as physical and mental development.

    In South Africa data on the work activities of children aged between 7 and 17 years are collected in the Survey of Activities of Young People, conducted by Statistics South Africa. Despite the survey having taken place four times (1999, 2010, 2015 and 2019), the dataset has been seriously under-used. There has hardly been any comprehensive research done on the state of South Africa’s child labour and child work activities.

    In a recently published study we looked at child labour activities in the country. We compared the 2010, 2015 and 2019 Survey of Activities of Young People.

    We first looked at personal and geographical characteristics of children, such as their gender, ethnic group and province of residence. We went on to look at their work activities, as well as the relationship (if any) between adults’ employment status and the probability of children from the same households having to work.

    The reason we chose to look at the relationship between child labour and work activities of adults is that South Africa has an extremely high level of unemployment. At the end of 2024 the unemployment rate was 31.8%.

    The Basic Conditions of Employment Act, which was passed in 1997, bans the employment of children until the last school day of the year in which they turn 15 years old. Nonetheless, as some adult household members struggle to find work, it is possible that child members of households are exploited to help the households survive financially.

    Two striking and alarming findings stand out from the study.

    First, the fewer adults were employed in a household, the more likely it was that children in the household were working. Second, the presence of child labour in the household had a discouraging impact on the adult members’ job seeking.

    The first key finding implies that if adults were employed, children might not be working. The second implies that jobless adult members most likely relied on the (illegal) income earned through child labour, which discouraged them from actively seeking work.

    The number of children working in South Africa dropped from 778,000 in 2010 to 577,000 in 2019. This downward trend suggests that South African legislation prohibiting child labour has had some success over the years. But, we conclude, laws and regulations are not enough. In South Africa, enforcement as well as public awareness and understanding of child labour legislation must be improved to safeguard children.

    Thus, a coordinated programme of action by the government is important to bring all stakeholders into the fight against child labour and unemployment of the working-age population.

    About the survey

    The Survey of Activities of Young People was first introduced in 1999 by Statistics South Africa, two years after the 1997 legislation that banned child labour. However, because the 1999 wave was not linked to the Labour Force Survey and its questions differed substantially from those in the 2010, 2015 and 2019 waves, we excluded it from the analysis. We therefore focus on the 2010, 2015 and 2019 results, notably because these three waves are linked to Labour Force Survey data collected in the same year.

    This makes it possible to investigate the relationship between the employment status of child and adult household members.

    The 2019 survey findings show that, if a household had no employed adult members, the probability of a child from the same household ending up in child labour was 6.5%.

    If the household had one employed adult member, child labour probability dropped to 4.7%. Lastly, if the household had at least two employed adult members, child labour likelihood decreased further to 2.7%.

    Using the same 2019 data, we found that if a household had no child involved in labour, the probability of an adult member from the same household seeking work in the labour market was 60%. The labour force participation rate of adult members from households where at least one child worked as child labour was much lower, at 44%.

    Looking at other child labour statistics, we found that the majority (90%) of working children were Africans; above 60% were in the illegal age cohort of 7-14 years; and most were living in the rural areas of KwaZulu-Natal, Gauteng and Eastern Cape.

    In addition, 98% of them were still attending school while working.

    Lastly, most child labourers worked 1-5 hours per week in elementary occupations in the wholesale and retail industry. The top three reasons for children working were “to obtain pocket money”, “to assist family with money” and “duty to help family”.

    The road ahead

    Some children spent many hours on household chores (which, strictly speaking, is not classified as child labour). Parents, employers and the community must be educated about the dangers of long hours of domestic chores, as well as of child labour.

    The government should consolidate its infrastructure development programmes, especially the delivery of electricity, water and sanitation in areas where children spend time on domestic chores. These actions will shorten the duration of child household chores and allow children more time for school activities. The surveys used for the study did not include questions about specific activities children were involved in. They only asked if the child was involved in chores such as cleaning, cooking and looking after elderly members.

    It is also worthwhile if questions relating to child labour are included in the child questionnaire of the National Income Dynamics Study (the only national panel data survey in South Africa) to more thoroughly investigate whether child labour is a short-term or long-term phenomenon, and whether there is any relationship between poverty (and receipt of social grants) and child labour incidence.

    Lastly, it has been six years since the Survey of Activities of Young People was last conducted. It is time for Statistics South Africa to collect the latest data on the state of child labour in the country.

    This article is based on a journal article which the writers co-authored with Clinton Herwel (Economics Masters student at the University of the Western Cape).

    The Conversation

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Child labour numbers rise in homes where adults are jobless – South African study – https://theconversation.com/child-labour-numbers-rise-in-homes-where-adults-are-jobless-south-african-study-259398

  • ‘Big’ legislative package shifts more of SNAP’s costs to states, saving federal dollars but causing fewer Americans to get help paying for food

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Tracy Roof, Associate Professor of Political Science, University of Richmond

    People shop for food in Brooklyn in 2023 at a store that makes sure that its customers know it accepts SNAP benefits, also known as food stamps and EBT.
    Spencer Platt/Getty Images

    The legislative package that President Donald Trump signed into law on July 4, 2025, has several provisions that will shrink the safety net, including the Supplemental Nutrition Assistance Program, long known as food stamps. SNAP spending will decline by an estimated US$186 billion through 2034 as a result of several changes Congress made to the program that today helps roughly 42 million people buy groceries – an almost 20% reduction.

    In my research on the history of food stamps, I’ve found that the program was meant to be widely available to most low-income people. The SNAP changes break that tradition in two ways.

    The Congressional Budget Office estimates that about 3 million people are likely to be dropped from the program and lose their benefits. This decline will occur in part because more people will face time limits if they don’t meet work requirements. Even those who meet the requirements may lose benefits because of difficulty submitting the necessary documents.

    And because states will soon have to take on more of the costs of the program, which totaled over $100 billion in 2024, they may eventually further restrict who gets help due to their own budgetary constraints.

    Summing up SNAP’s origins

    Inspired by the plight of unemployed coal miners whom John F. Kennedy met in Appalachia when he campaigned for the presidency in 1960, the early food stamps program was not limited to single parents with children, older people and people with disabilities, like many other safety net programs were at the time. It was supposed to help low-income people afford more and better food, regardless of their circumstances.

    In the late 1960s, national attention to widespread hunger and malnutrition in other areas of the country, such as among tenant farmers in the rural South, prompted the expansion of the limited food stamps program. It reached every part of the country by 1974.

    From the start, the states administered the program and covered some of its administrative costs, while the federal government paid for the benefits in full. This arrangement encouraged states to enroll everyone who needed help without fearing the budgetary consequences.

    Who could qualify and how much help they could get were set by uniform national standards, so that even the residents of the poorest states would be able to afford a budget-conscious but nutritionally adequate diet.

    The federal government’s responsibility for the cost of benefits also allowed spending to automatically grow during economic downturns, when more people need assistance. These federal dollars helped families, retailers and local economies weather tough times.

    The changes to the SNAP program included in the legislative package that Congress approved by narrow margins and Trump signed into law, however, will make it harder for the program to serve its original goals.

    Restricting benefits

    Since the early 1970s, most so-called able-bodied adults who were not caring for a child or an adult with disabilities had to meet a work requirement to get food stamps. Welfare reform legislation in 1996 made that requirement stricter for such adults between the ages of 18 and 50 by imposing a three-month time limit if they didn’t log 20 hours or more of employment or another approved activity, such as verified volunteering.

    Budget legislation passed in 2023 expanded this rule to adults up to age 54. The 2025 law will further expand the time limit to adults up to age 64 and parents of children age 14 or over.

    States can currently get permission from the federal government to waive work requirements in areas with insufficient jobs or unemployment above the national average. This flexibility to waive work requirements will now be significantly limited and available only where at least 1 in 10 workers are unemployed.

    Concerned senators secured an exemption from the work requirements for most Native Americans and Native Alaskans, who are more likely to live in areas with limited job opportunities.

    A 2023 budget deal exempted veterans, the homeless and young adults exiting the foster care system from work requirements because they can experience special challenges getting jobs. The 2025 law does not exempt them.

    The new changes to SNAP policies will also deny benefits to many immigrants with authorization to be in the U.S., such as people granted political asylum or official refugee status. Immigrants without authorization to reside in the U.S. will continue to be ineligible for SNAP benefits.

    Tracking ‘error rates’

    Critics of food stamps have long argued that states lack incentives to carefully administer the program because the federal government is on the hook for the cost of benefits.

    In the 1970s, as the number of Americans on the food stamp rolls soared, the U.S. Department of Agriculture, which oversees the program, developed a system for assessing if states were accurately determining whether applicants were eligible for benefits and how much they could get.

    A state’s “payment error rate” estimates the share of benefits paid out that were more or less than an applicant was actually eligible for. The error rate was not then and is not today a measure of fraud. Typically, it just indicates the share of families who get a higher – or lower – amount of benefits than they are eligible for because of mistakes or confusion on the part of the applicant or the case worker who handles the application.

    Congress tried to penalize states with error rates over 5% in the 1980s but ultimately suspended the effort under state pressure. After years of political wrangling, the USDA started to consistently enforce financial penalties on states with high error rates in the mid-1990s.

    States responded by increasing their red tape. For example, they asked applicants to submit more documentation and made them go through more bureaucratic hoops, like having more frequent in-person interviews, to get – and continue receiving – SNAP benefits.

    These demands hit low-wage workers hardest because their applications were more prone to mistakes. Low-income workers often don’t have consistent work hours and their pay can vary from week to week and month to month. The number of families getting benefits fell steeply.

    The USDA tried to reverse this decline by offering states options to simplify the process for applying for and continuing to get SNAP benefits over the course of the presidencies of Bill Clinton, George W. Bush and Barack Obama. Enrollment grew steadily.

    Penalizing high rates

    Since 2008, states with error rates over 6% have had to develop a detailed plan to lower them.

    Despite this requirement, the national average error rate jumped from 7.4% before the pandemic to a record high of 11.7% in 2023. Rates rose as states struggled with a surge of people applying for benefits, a shortage of staff in state welfare agencies and procedural changes.

    Republican leaders in Congress have responded to that increase by calling for more accountability.

    Making states pay more

    The big legislative package will increase states’ expenses in two ways.

    It will reduce the federal government’s share of the program’s administrative costs from half to 25% beginning in the 2027 fiscal year.

    And some states will have to pay a share of benefit costs for the first time in the program’s history, depending on their payment error rates. Beginning in the 2028 fiscal year, states with an error rate between 6% and 8% would be responsible for 5% of the cost of benefits. Those with an error rate between 8% and 10% would have to pay 10%, and states with an error rate over 10% would have to pay 15%. The federal government would continue to pay all benefits in states with error rates below 6%.
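    As a rough sketch, the tiered cost-sharing described above can be written as a simple lookup. Note that this is an illustration of the tiers as reported here, not the statute’s text; in particular, how the exact boundary values (6%, 8%, 10%) are assigned is an assumption.

    ```python
    def state_benefit_share(error_rate: float) -> float:
        """Illustrative state share of SNAP benefit costs, given the state's
        payment error rate (as a percentage), under the tiers described above.
        Boundary handling is an assumption, not the statute's wording."""
        if error_rate < 6:
            return 0.00   # federal government still pays all benefits
        elif error_rate < 8:
            return 0.05
        elif error_rate < 10:
            return 0.10
        else:
            return 0.15

    # Alaska's roughly 25% error rate would fall in the top tier once its
    # temporary exemption ends:
    print(state_benefit_share(25.0))
    ```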

    Republicans argue the changes will give states more “skin in the game” and ensure better administration of the program.

    While the national payment error rate fell from 11.68% in the 2023 fiscal year to 10.93% a year later, 42 states still had rates in excess of 6% in 2024. Twenty states plus the District of Columbia had rates of 10% or higher.

    At nearly 25%, Alaska has the highest payment error rate in the country. But Alaska won’t be in trouble right away. To ease passage in the Senate, where the vote of Sen. Lisa Murkowski, an Alaska Republican, was in doubt, a provision was added to the bill allowing several states with the highest error rates to avoid cost sharing for up to two years after it begins.

    Democrats argue this may encourage states to actually increase their error rates in the short term.

    The effect of the new law on the amount of help an eligible household gets is expected to be limited.

    About 600,000 individuals and families will lose an average of $100 a month in benefits because of a change in the way utility costs are treated. The law also prevents future administrations from increasing benefits beyond the cost of living, as the Biden Administration did.

    States cannot cut benefits below the national standards set in federal law.

    But the shift of costs to financially strapped states will force them to make tough choices. They will either have to cut back spending on other programs, increase taxes, discourage people from getting SNAP benefits or drop the program altogether.

    The changes will, in the end, make it even harder for Americans who can’t afford the bare necessities to get enough nutritious food to feed their families.

    The Conversation

    Tracy Roof does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. ‘Big’ legislative package shifts more of SNAP’s costs to states, saving federal dollars but causing fewer Americans to get help paying for food – https://theconversation.com/big-legislative-package-shifts-more-of-snaps-costs-to-states-saving-federal-dollars-but-causing-fewer-americans-to-get-help-paying-for-food-260166

  • Are people at the South Pole upside down?

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Abigail Bishop, Ph.D. Student in Physics, University of Wisconsin-Madison

    At the South Pole, which way is up? Abigail Bishop

    Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


    Are people on the South Pole walking upside down from the rest of the world? – Ralph P., U.S.


    When I was standing at the South Pole, I felt the same way I feel anywhere on Earth because my feet were still on the ground and the sky was still overhead.

    I’m an astrophysicist from Wisconsin who lived at the South Pole for seven weeks from December 2024 to January 2025 to work on an array of detectors looking for extremely high energy particles from outer space.

    I didn’t feel upside down, but there were some differences that still made the South Pole feel flipped over from what I was used to.

    As someone who loves looking for the Moon, I noticed that the face of the man on the Moon was flipped over, like he went from 🙂 to 🙃. All the craters that I was used to seeing on the top of the Moon from Wisconsin were now on the bottom – because I was looking at the Moon from the Southern Hemisphere instead of the Northern Hemisphere.

    An image showing the Moon and the Earth, and how the Moon looks different from one end of the Earth than the other.
    How the Moon looks depends on your point of view.
    The Planetary Society, CC BY-SA

    After noticing this difference, I remembered something similar in the night skies of New Zealand, a country near Antarctica where my fellow travelers and I got our big red coats that kept us warm at the South Pole. I had looked for Orion, a constellation that in the Northern Hemisphere is viewed as a hunter holding a bow and drawing an arrow from his quiver. In the night sky of New Zealand, Orion looked like he was doing a handstand.

    Everything in the sky felt upside down and opposite, compared with what I was used to. A person who lives in the Southern Hemisphere might feel the same about visiting the Arctic or the North Pole.

    A view of Earth from space.
    ‘The Big Blue Marble’ photo, taken in 1972 by the crew of Apollo 17.
    NASA

    An out-of-this-world perspective

    To understand what’s happening, and why things are really different but also feel very much the same, it might be useful to back up a bit from Earth’s surface. Like into outer space. On space missions to the Moon, astronauts could see one side of the Earth’s sphere at once.

    If they had superhero vision, astronauts would see people at the South Pole and North Pole standing upside down relative to each other. And a person at the equator would look like they were sticking straight out the side of the planet. In fact, even though they might both be standing on the equator, people in Colombia and Indonesia would also look upside down relative to each other, because they would be sticking out from opposite sides of the Earth.

    Of course, if you asked each person, they would say, “My feet are on the ground, and the sky is up.”

    That’s because Earth is essentially a really big ball whose gravitational pull on every one of us points to the center of the planet. The direction that Earth pulls us in is what people call “down” all over the planet. Think about holding a baseball between your pointer fingers. From the perspective of your fingertips on the ball’s surface, both are pointing “down.” But from the perspective of a friend nearby, your fingers are pointing in different directions – though always toward the center of the ball.

    These relationships between people on the Earth’s surface are good for a little bit of fun, though. While I was at the South Pole, I pointed my body in the same direction as my friends in Wisconsin – by doing a handstand. But if you look at the picture the other way around, it looks like I’m holding up the entire planet, like Superman.

    A person does a handstand on a white surface near a red-and-white striped pole surrounded by flags of various nations.
    This is the right way up: Abigail Bishop does a handstand at the ceremonial South Pole.
    Abigail Bishop

    Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

    And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

    The Conversation

    Abigail Bishop receives funding from National Science Foundation Award 2013134 and has received funding from the Belgian American Education Foundation.

    ref. Are people at the South Pole upside down? – https://theconversation.com/are-people-at-the-south-pole-upside-down-256754

  • Misinformation lends itself to social contagion – here’s how to recognize and combat it

    Source: ForeignAffairs4

    Source: The Conversation – USA (3) – By Shaon Lahiri, Assistant Professor of Public Health, College of Charleston

    Misinformation on social media has the potential to manipulate millions of people. Pict Rider/iStock via Getty Images Plus

    In 2019, a rare and shocking event in the Malaysian peninsula town of Ketereh grabbed international headlines. Nearly 40 girls aged 12 to 18 from a religious school had been screaming inconsolably, claiming to have seen a “face of pure evil,” complete with images of blood and gore.

    Experts believe that the girls suffered what is known as a mass psychogenic illness, a psychological condition that results in physical symptoms and spreads socially – much like a virus.

    I’m a social and behavioral scientist within the field of public health. I study the ways in which individual behavior is influenced by prevailing social norms and social network processes, across a wide range of behaviors and contexts. Part of my work involves figuring out how to combat the spread of harmful content that can shape our behavior for the worse, such as misinformation.

    Mass psychogenic illness is not misinformation, but it gives researchers like me some idea about how misinformation spreads. Social connections establish pathways of influence that can facilitate the spread of germs, mental illness and even behaviors. We can be profoundly influenced by others within our social networks, for better or for worse.

    The spreading of social norms

    Researchers in my field think of social norms as perceptions of how common and how approved a specific behavior is within a specific network of people who matter to us.

    These perceptions may not always reflect reality, such as when people overestimate or underestimate how common their viewpoint is within a group. But they can influence our behavior nonetheless. For many, perception is reality.

    Social norms and related behaviors can spread through social networks like a virus can, but with one crucial caveat. Viruses often require just one contact with a potential host to spread, whereas behaviors often require multiple contacts to spread. This phenomenon, known as complex contagion, highlights how socially learned behaviors take time to embed.

    Fiction spreads faster than fact

    Consider a familiar scenario: the return of baggy jeans to the fashion zeitgeist.

    If, like me, you’re a millennial, you may react to a friend embracing this resurrected trend by cringing and lightly teasing them. Yet after seeing them don those denim parachutes on multiple occasions, a brazen thought may emerge: “Hmm, maybe they don’t look that bad. I could probably pull those off.” That’s complex contagion at work.

    This dynamic is even more evident on social media. One of my former students expressed this succinctly. She was looking at an Instagram post about Astro Boy Boots – red, oversize boots based on those worn by a 1952 Japanese cartoon character. Her initial skepticism quickly faded upon reading the comments. As she put it, “I thought they were ugly at first, but after reading the comments, I guess they’re kind of fire.”

    Moving from innocuous examples, consider the spread of misinformation on social media. Misinformation is false information that is spread unintentionally, while disinformation is false information that is intentionally disseminated to deceive or do serious harm.

    Research shows that both misinformation and disinformation spread faster and farther than truth online. This means that before people can muster the resources to debunk the false information that has seeped into their social networks, they may have already lost the race. Complex contagion may have taken hold, in a malicious way, and begun spreading falsehood throughout the network at a rapid pace.

    People spread false information for various reasons, such as to advance their personal agenda or narrative, which can lead to echo chambers that filter out accurate information contrary to one’s own views. Even when people do not intend to spread false information online, they often do so because they pay little attention to accuracy or have lower levels of digital media literacy.

    Inoculation against social contagion

    So how much can people do about this?

    One way to combat harmful contagion is to draw on an idea first used in the 1960s called pre-bunking. The idea is to train people to practice skills to spot and resist misinformation and disinformation on a smaller scale before they’re exposed to the real thing.

    The approach is akin to vaccines that build immunity through exposure to a weakened form of a disease-causing germ. The idea is for someone to be exposed to a limited amount of false information, say through the pre-bunking with Google quiz. They then learn to spot common manipulation tactics used in false information and how to resist their influence with evidence-based strategies to counter the falsehoods. This could also be done with a trained facilitator in classrooms, workplaces or other groups, including virtual communities.

    Then the process is gradually repeated with larger doses of false information and further counterarguments. By role-playing and practicing the counterarguments, this resistance skills training provides a sort of psychological inoculation against misinformation and disinformation, at least temporarily.

    Importantly, this approach is intended for someone who has not yet been exposed to false information – hence, pre-bunking rather than debunking. If we want to engage with someone who firmly believes in their stance, particularly when it runs contrary to our own, behavioral scientists recommend leading with empathy and nonjudgmentally exchanging narratives.

    Debunking is difficult work, however, and even strong debunking messages can result in the persistence of misinformation. You may not change the other person’s mind, but you may be able to engage in a civil discussion and avoid pushing them further away from your position.

    Spreading facts, not fiction

    When everyday people apply this approach with friends and loved ones, they can help them recognize the telltale signs of false information. One such sign is what’s known as a false dichotomy – for instance, “either you support this bill or you HATE our country.”

    Another signal of false information is the common tactic of scapegoating: “Oil industry faces collapse due to rise in electric car ownership.” And another is the slippery slope logical fallacy, as in “legalization of marijuana will lead to everyone using heroin.”

    All of these are examples of common tactics that spread misinformation and come from a Practical Guide to Pre-Bunking Misinformation, created by a collaborative team from the University of Cambridge, BBC Media Action and Jigsaw, an interdisciplinary think tank within Google.

    This approach is not only effective in combating misinformation and disinformation, but also in delaying or preventing the onset of harmful behaviors. My own research suggests that pre-bunking can be used effectively to delay the initiation of tobacco use among adolescents. But it only works with regular “booster shots” of training, or the effect fades away in a matter of months or less.

    Many researchers like me who study these social contagion dynamics don’t yet know the best way to keep these “booster shots” going in people’s lives. But there are recent studies showing that it can be done. A promising line of research also suggests that a group-based approach can be effective in maintaining the pre-bunking effects to achieve psychological herd immunity. Personally, I would bet my money on group-based approaches where you, your friends or your family can mutually reinforce each other’s capacity to resist harmful social norms entering your network.

    Simply put, if multiple members of your social network have strong resistance skills, then your group has a better chance of resisting the incursion of harmful norms and behaviors into your network than if it’s just you resisting alone. Other people matter.

    In the end, whether we’re empowering people to resist the insidious creep of online falsehoods or equipping adolescents to stand firm against peer pressure to smoke or use other substances, the research is clear: Resistance skills training can provide an essential weapon for safeguarding ourselves and young people from harmful behaviors.

    The Conversation

    Shaon Lahiri does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Misinformation lends itself to social contagion – here’s how to recognize and combat it – https://theconversation.com/misinformation-lends-itself-to-social-contagion-heres-how-to-recognize-and-combat-it-254298

  • From Seattle to Atlanta, new social housing programs seek to make homes permanently affordable for a range of incomes

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Susanne Schindler, Research Fellow at the Joint Center for Housing Studies, Harvard Kennedy School

    Activists in Seattle gather signatures to put a social housing initiative on the ballot. In early 2025, voters passed the measure, which implements a payroll tax on high incomes to fund the program. House Our Neighbors, CC BY-SA

    Seattle astounded housing advocates around the country in February 2025, when roughly two-thirds of voters approved a ballot initiative proposing a new 5% payroll tax on salaries in excess of US$1 million.

    The expected revenue – estimated at $52 million annually – would go toward funding a public development authority named Seattle Social Housing, which would then build and maintain permanently affordable homes.

    The city has experienced record high rents and home prices over the past two decades, attributed in part to the high incomes and relatively low taxes paid by tech firms like Amazon. Prior attempts to make these companies do their part to keep the city affordable have had mixed results.

    So despite nationwide, bipartisan skepticism of government and tax increases, Seattle’s voters showed that in light of a severe affordability crisis, a new role for the public sector and a new, dedicated fiscal revenue stream for housing were not only necessary, but possible.

    As a trained architect and urban historian, I study how capitalist societies have embraced – or rejected – housing that’s permanently shielded from market forces and what that means for architecture and urban design.

    To me, Seattle’s social housing initiative shows that the country’s traditional, “either-or” housing model – of unregulated, market-rate housing versus tightly regulated, income-restricted affordable housing – has reached its limits.

    Social housing promises a different path forward.

    The rise of the ‘two-tiered’ system

    After World War I, amid a similarly dire housing crisis, journalist Catherine Bauer traveled to Europe and learned about the continent’s social housing programs.

    She publicized her findings in the 1934 book “Modern Housing,” in which she advocated for housing that would be permanently shielded from the private real estate market. High-quality design was central to her argument. (The book was reissued in 2020, reflecting a renewed hunger for her ideas.)

    Early New Deal programs supported “limited-dividend,” or nonprofit, housing sponsored by civic organizations such as labor unions. The Carl Mackley Houses in Philadelphia exemplified this approach: The government provided low-interest loans to the American Federation of Full-Fashioned Hosiery Workers, which then constructed housing for its workers with rents set at affordable rates. The complex was built with community rooms and a swimming pool for its residents.

    Financed by $1.2 million in federal funds, the Carl Mackley Houses, completed in 1935, provided homes for union workers.
    Alfred Kastner papers, Collection No. 7350, Box 45, Record 12, American Heritage Center, University of Wyoming

    However, the 1937 U.S. Housing Act omitted this form of middle-income housing. Instead, the federal government chose to support public rental housing for low-income Americans and private homeownership, with little in between.

    Historian Gail Radford has aptly termed this a “two-tiered system,” and it was problematic from the start.

    Funding for public housing in the U.S. – as well as for its successor, private-sector-built affordable housing – has always been capped in ways that fall far short of demand, with access to the homes largely restricted to households with the lowest incomes. Private-sector-built affordable housing depends on dangling tax credits for private investors, and rent restrictions can expire.

    While the U.S. promoted this two-tiered system, cities like Vienna pursued a different path.

    In Austria’s culturally vibrant capital, today half of all dwellings are permanently removed from the private market. Roughly 80% of households qualify to live in them. The buildings take a range of forms, are located in all neighborhoods, and are built and operated as rental or cooperative housing either by the city or by nonprofit developers.

    Rents do not rise and fall according to household income, but are instead set to cover capital and operation expenses. These are kept low thanks to long-term, low-interest loans. These loans are funded through a nationwide 1% payroll tax, split evenly between employers and employees. Renters also make a down payment, priced in relation to the size and age of the apartment, which keeps monthly rents down. To guarantee access to low-cost land, the municipality has pursued an active land acquisition policy since the 1980s.

    Vienna’s Pilotengasse Housing Estate, a social housing development featuring low-rise buildings with abundant greenery, was completed in 1992 and serves a range of income groups.
    Viennaslide/Construction Photography/Avalon/Getty Images

    Housing shielded from the private market

    The inequities created by the two-tiered system – along with the absence of viable options for moderate- and middle-income households – are what social housing advocates in the U.S. are trying to address today.

    In 2018, the think tank People’s Policy Project published what was likely the first 21st-century report advocating for social housing in the U.S., citing Vienna as a model.

    Across the U.S., the term “social housing” is being used to describe a range of programs, from limited equity cooperatives and community land trusts to public housing.

    They all share a few underlying principles, however.

    First and foremost, social housing calls for permanently shielding homes from the private real estate market, often referred to as “permanent affordability.” This usually means public investment in housing and public ownership of it. Second, unlike the ways in which public housing has traditionally operated in the U.S., most social housing programs aim to serve households across a broader range of incomes. The goal is to create housing that is both financially sustainable and appealing to broad swaths of the electorate. Third, social housing aspires to give residents more control over the governance of their homes.

    Social housing doesn’t all look the same. But thoughtful design is key to its success. It’s built to be owned and operated in the long-term, not for short-term financial gain. Construction quality matters, and developers realize it needs to be appealing to a range of tenants with different needs.

    Early successes

    In recent years, there have been significant wins for the social housing movement at the state and local levels.

    In 2023, Atlanta created a new quasi-public entity to co-develop mixed-income housing on city-owned land. In 2024, Rhode Island voters and the Massachusetts legislature funded pilot projects to test public investment in social housing. And 2025 saw the passage of Chicago’s Green Social Housing ordinance.

    Many of these programs were directly inspired by affordable housing initiatives in Montgomery County, Maryland.

    Since 2021, the county’s housing authority has used a $100 million housing fund to invest in new mixed-income developments. Through these investments, the county retains co-ownership and has been able to bring down the cost of development enough to offer 30% of homes at significantly below market rents, in perpetuity. If Vienna is the global paragon for social housing, Montgomery County has become its domestic counterpart.

    In Seattle, social housing will mean homes delivered and permanently owned by Seattle Social Housing, which is funded through the payroll tax on high incomes. The initiative envisions developments featuring a range of apartment sizes to meet the needs of different family sizes, built to high energy-efficiency standards. Homes will be available to households earning up to 120% of area median income, with residents paying no more than 30% of their income on rent. In Seattle, that means that a single-person household making up to $120,000 will qualify.

    Members of the New York City Council hold a rally with housing activists to promote social housing legislation in March 2023.
    William Alatriste/NYC Council Media Unit, CC BY-SA

    Ongoing debates

    Despite these successes, many Americans remain skeptical of social housing.

    Sign up for a webinar on the topic, and you’ll hear participants question the term itself. Isn’t it far too “socialist” to be broadly adopted in the U.S.? And isn’t this just “old wine in new bottles”?

    Join a housing task force, and established nonprofits will be the ones to push back, arguing that they already know how to build and manage housing, and that all they need is money.

    Some housing activists also question whether using scarce public dollars to pay for mixed-income housing will yet again shortchange those who most need governmental assistance – namely, the poor. Others point to the need to provide more ways to build intergenerational wealth, especially for racial minorities, who have historically faced barriers to homeownership.

    Urban planner Jonathan Tarleton has highlighted another important issue: the danger of social housing reverting over time to private ownership, as has been the case with some cooperatives in New York City. Tarleton stresses the need for “social maintenance” – the importance of telling and retelling the story of whom social housing is meant to serve.

    These debates raise important questions. Social housing may be a confusing term and an aspirational concept. But it is here to stay: It has galvanized organizers and policymakers around a new approach to the design, development and maintenance of housing.

    Social housing keeps prices down through long-term public investment, ensuring that future generations will still benefit. Developers can design and provide homes that respond to how people want to live. And in an increasingly polarized country, social housing will allow people of various backgrounds, incomes and ideological persuasions to live together again, rather than apart.

    Whether it’s the kind found in Seattle, in Maryland or somewhere in between, I believe social housing is needed more than ever before to address the country’s twin problems of affordability and a lack of political imagination.

    This article is part of a series centered on envisioning ways to deal with the housing crisis.

    The Conversation

    Susanne Schindler receives funding from Harvard’s Joint Center for Housing Studies.

    ref. From Seattle to Atlanta, new social housing programs seek to make homes permanently affordable for a range of incomes – https://theconversation.com/from-seattle-to-atlanta-new-social-housing-programs-seek-to-make-homes-permanently-affordable-for-a-range-of-incomes-255097

  • What schools can learn from skate culture

    Source: ForeignAffairs4

    Source: The Conversation – UK – By Sander Hölsgens, Assistant Professor, Leiden Institute of Cultural Anthropology and Development Sociology, Leiden University

    Dean Drobot/Shutterstock

    At a school in Malmö, Sweden, skateboarding is on the curriculum. John Dahlquist, vice principal of Bryggeriets High School, teaches skate classes and brings lessons from skateboarding into other subjects. By encouraging teenagers to have fun together through skating and beyond, he notices that they want to attend school. Writing in a recent book I co-edited on skateboarding and teaching, Dahlquist notes that he even sees students longing to be back in the classroom after the weekend.

    Skateboarding is creative, requiring ingenuity in adapting to new environments. It’s collaborative and social: skaters cheer each other on when they try to learn something new, acknowledging that everyone operates at a different level and faces a distinct challenge.

    When skateboarding is done well, individual growth takes place among a community of care and mutual support. And it requires a willingness to fail. There’s no way to master a trick without trying and failing, over and over again.

    My colleagues and I have researched the value of a skateboarding philosophy in schools, and how teachers can bring it into their classrooms.

    Take Dahlquist’s teaching in Malmö. He notes that interweaving skate classes with other subjects has multiple noteworthy effects. The physical activity of skateboarding improves levels of concentration. Some students even say that they’d never been successful in any other learning environment. Elsewhere, they’d be unable to focus on the task at hand.

    What’s more, a skateboarding mindset – being prepared to learn difficult tricks in unfamiliar settings – equipped students with the capacity to master other kinds of new skills.

    Able to fail

    The process of overcoming the fear of failure is crucial. Skaters cannot be afraid to fall if they want to learn new tricks. The motivation to learn through repeated efforts helps skaters in other areas of life, too. Skaters at Bryggeriet aren’t as worried about failing grades, precisely because they see failure as an opportunity to learn and move forward.

    As Dahlquist says, “At the end of my classes, I usually have to throw my students out of the classroom. A lot of them beg for three more tries: ‘I’ve got this, just give me three more tries. I promise I will learn.‘”

    This mindset de-emphasises grades as education’s cornerstone and, by extension, supports students’ mental health. My colleague Esther Sayers, who conducted fieldwork at Bryggeriets, found another effect. Teachers help students to develop the skills to get motivated, to reach a point of feeling inspired – or what skaters call “stoke”.

    Skateboarding fosters a non-competitive learning culture.
    PeopleImages.com – Yuri A

    Bryggeriets High School isn’t the only place where skateboarding is helping teach people how to learn. Reaching beyond its historical status as a self-regulated street culture, skateboarding now plays an important role in building engaged learning communities across the globe. Berlin-based skate organisation Skateistan hosts skate classes, gives young people access to education and offers funds for young and upcoming community leaders.

    Concrete Jungle Foundation co-builds skateparks with young people in Peru, Morocco and Jamaica, in order to exchange knowledge and drive local ownership and apprenticeship. Similarly, the New York-based Harold Hunter Foundation runs skate workshops that also provide mentoring and career guidance.

    Colleagues Arianna Gil and Jessica Forsyth have studied working-class black and Latin American skate crews, run by gender-diverse community organisers. They found that skate crews such as Brujas and Gang Corp mobilise skaters according to the “for us, by us” spirit.

    Challenging institutional models of authority, these skate crews develop services based on the hopes and aspirations of their communities – ranging from teach-ins to recreational programmes. This includes a talk on the history and meaning of hoodies, and modules on the power of storytelling and the danger of propaganda. The crux here is to learn about what you encounter in your daily life.

    Skaters who experience poverty and oppression create their own ecosystem for learning from one another, outside an educational system that is organised in a top-down way. This means creating a grassroots school model where skate crews choose what and how they want to learn. Rather than grades and degrees, education here is structured around the process of learning from your peers – with the idea of passing on this knowledge in the near future.

    The effects of this approach are threefold. First, it centres mentorship and apprenticeship, resulting in intergenerational knowledge exchange. Second, skateboarding’s DIY spirit can help overcome access barriers. By embracing grassroots teaching practices and formats, education can be tailored to the specific needs and desires of a community, rather than following standardised learning objectives.

    Third, rather than focusing on memorising facts or learning for grades, this new ecosystem is structured around problem-based learning. Presented with real-world problems such as human rights violations and hostile architecture, skaters learn not just how to analyse their surroundings, but also how to cope with and engage with oppressive societal structures.

    As formal education faces incremental budget cuts and deepened governmental influence, skateboarding shows us new ways to organise our learning spaces. Schools and teachers can engage their students by integrating aspects of a learning culture that decentres evaluations and assessments and celebrates attempts, rather than just successes.

    The Conversation

    Sander Hölsgens received a ‘starting grant’ from OCW, The Netherlands. He is affiliated with Pushing Boarders, a platform tracing the social impact of skateboarding worldwide.

    ref. What schools can learn from skate culture – https://theconversation.com/what-schools-can-learn-from-skate-culture-255239

  • Georgia: how democracy is being eroded fast as government shifts towards Russia

    Source: ForeignAffairs4

    Source: The Conversation – UK – By Natasha Lindstaedt, Professor in the Department of Government, University of Essex

    Georgia was once considered a post-Soviet success story. After years of authoritarian rule, followed by independence which brought near state collapse, corruption and chaos, Georgia appeared to have transitioned to democracy.

    Between independence in 1991 and 2020, elections were regularly held and deemed mostly free and fair, the media and civil society were vibrant, and corruption levels had diminished significantly.

    The “Rose revolution” in 2003 ushered in an era of unprecedented reform and suggested a move towards democracy and a closer relationship with the west. Georgians were full of hope for the country’s future and the prospect of joining the European Union – or at least moving closer to Europe.

    Fast forward two decades and Georgia has fully returned to authoritarianism. Six opposition leaders are in prison or facing charges, and now thinktank leaders are being targeted with investigations that could land them in prison. Typically, these charges centre on accepting foreign funding or criticising the government.

    In moves in line with other authoritarian regimes around the world, opposition organisations such as thinktanks are being told to produce financial documents in short timeframes, and accused of financial mismanagement and threatened with prosecution if they don’t.

    In May 2024, Georgia passed a Russian-inspired foreign agent law, which requires non-governmental organisations (NGOs) receiving foreign funding to register themselves and face restrictions. Protests erupted each time Georgia’s parliament debated the measure, but eventually the pro-Russian Georgia Dream party prevailed. More than 90% of NGOs receive funding from abroad, so the new law cripples the efforts of some 26,000 of them.

    Many Georgians were outraged that the passage of the bill may end dreams of one day becoming a European Union candidate country. Regular surveys have found that about 80% of Georgians have aspirations for their country to join the EU.

    Though Georgia faces a host of economic problems, the Georgia Dream party has campaigned on delivering a return to traditional values. Like Russia, it has also passed a series of laws in 2024 targeting the LGBTQ+ community, such as banning content that features same-sex relationships and stripping same-sex couples of rights such as adoption.

    Parallels with Russia?

    Georgia Dream also passed legislation making treason a criminal offence, a clear attempt to eliminate political opponents. Insulting politicians online is also considered a criminal offence.

    Also, in June of this year, civil society organisations in Georgia received court orders requiring them to disclose highly sensitive data. Meanwhile, members of the Georgia Dream party were accused of assaulting opposition party leader Giorgi Gakharia, who suffered a broken nose and a concussion; they denied the accusation.

    In another effort to exercise greater control over the state, more than 800 civil servants have been dismissed since the beginning of this year. As with the purges that took place in Turkey, this is not being done in the name of efficiency, but to ensure that the bureaucracy is loyal to the wishes of the Georgia Dream government.

    This hasn’t happened overnight: the law had already been changed several times to weaken legal protections for civil servants.

    During its time in government, the Georgia Dream party has moved the country much closer to Russia, often by portraying the nation as locked in a cultural struggle against the west. Despite this, 69% of Georgians still see Russia as Georgia’s main enemy, up from 35% in 2012.

    Though the Georgia Dream party faces increasing public opposition to its rule, it gained nearly the same number of votes in the 2024 elections as it did in 2012 – when it was at the peak of its popularity. The election result in October 2024 may be partly explained by accusations of fraud and other irregularities.

    How did this happen?

    One of the first big threats to Georgia’s democracy came in August 2008, when Russia invaded the country to support two breakaway regions, South Ossetia and Abkhazia, which declared themselves independent from Georgia. The international community did little to censure Russia, giving Russian president Vladimir Putin the confidence to engage in further acts of aggression.

    Russia has maintained troops in South Ossetia, only about 30 miles from Georgia’s capital Tbilisi, and continues to play an important role in Georgian politics, undermining democracy.

    The next threat came from within. Billionaire Bidzina Ivanishvili was elected prime minister of Georgia in 2012 as the leader of Georgia Dream. Despite officially stepping down from the position in 2013, he has wielded power behind the scenes and is still widely considered to be the de facto leader of Georgia.

    Though Georgia did not immediately slide towards autocracy under the Georgia Dream party, today there are few remnants of democracy left. The major opposition parties are banned, opposition politicians and journalists are spied on, and protests are repressed by the police.

    Cameras are now installed on the streets of Tbilisi as part of a crackdown on protest, and fines for protesting have increased. Elections are no longer considered free and fair by the European Union and others, as the Georgia Dream party uses its access to state resources to dole out patronage to its supporters and intimidate voters.

    In just over two decades, Georgia has plunged back into authoritarianism. Once hailed as a beacon of democratic reform, the country is now gripped by a Russian-influenced ruling party that has consolidated power through repression, surveillance and manipulation.

    But while the Georgia Dream party has tried to dismantle the country’s democratic institutions, support for resistance is high. According to a poll in 2025, more than 60% of respondents supported protests against the government and 45% identified as active supporters. And 82% feel Georgia is in crisis, with 78% blaming Georgia Dream.

    It appears that Russia may have succeeded in undermining democracy in Georgia, but not in shaping hearts and minds.

    The Conversation

    Natasha Lindstaedt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Georgia: how democracy is being eroded fast as government shifts towards Russia – https://theconversation.com/georgia-how-democracy-is-being-eroded-fast-as-government-shifts-towards-russia-260430

  • Rural hospitals will be hit hard by Trump’s signature spending package

    Source: ForeignAffairs4

    Source: The Conversation – USA (3) – By Lauren S. Hughes, State Policy Director, Farley Health Policy Center; Associate Professor of Family Medicine, University of Colorado Anschutz Medical Campus

    Health policy experts predict that cuts to Medicaid will push more rural hospitals to close. sneakpeekpic via iStock / Getty Images Plus

    The public health provisions in the massive spending package that President Donald Trump signed into law on July 4, 2025, will reduce Medicaid spending by more than US$1 trillion over a decade and result in an estimated 11.8 million people losing health insurance coverage.

    As researchers studying rural health and health policy, we anticipate that these reductions in Medicaid spending, along with changes to the Affordable Care Act, will disproportionately affect the 66 million people living in rural America – nearly 1 in 5 Americans.

    People who live in rural areas are more likely to have health insurance through Medicaid and are at greater risk of losing that coverage. We expect that the changes brought about by this new law will lead to a rise in unpaid care that hospitals will have to provide. As a result, small, local hospitals will have to make tough decisions that include changing or eliminating services, laying off staff and delaying the purchase of new equipment. Many rural hospitals will have to reduce their services or possibly close their doors altogether.

    Hits to rural health

    The budget legislation’s biggest effect on rural America comes from changes to the Medicaid program, which represent the largest federal rollback of health insurance coverage in the U.S. to date.

    First, the legislation changes how states can finance their share of the Medicaid program by restricting the sources of the funds states use to support it. The bill limits how states can tax and charge fees to hospitals, managed care organizations and other health care providers, and how they can use such taxes and fees in the future to pay higher rates to providers under Medicaid. These limitations will reduce payments to rural hospitals that depend on Medicaid to keep their doors open.

    Rural hospitals play a crucial role in health care access.

    Second, by 2027, states must institute work requirements that demand most Medicaid enrollees work 80 hours per month or be in school at least half time. Arkansas’ brief experiment with work requirements in 2018 demonstrates that rather than boost employment, the policy increases bureaucracy, hindering access to health care benefits for eligible people. States will also now be required to verify Medicaid eligibility every six months versus annually. That change also increases the risk people will lose coverage due to extra red tape.

    The Congressional Budget Office estimates that work requirements instituted through this legislative package will result in nearly 5 million people losing Medicaid coverage. This will decrease the number of paying patients at rural hospitals and increase the unpaid care hospitals must provide, further damaging their ability to stay open.

    Additionally, the bill changes how people qualify for the premium tax credits within the Affordable Care Act Marketplace. The Congressional Budget Office estimates that this change, along with other changes to the ACA such as fewer and shorter enrollment periods and additional requirements for documenting income, will reduce the number of people insured through the ACA Marketplace by about 3 million by 2034. Premium tax credits were expanded during the COVID-19 pandemic, helping millions of Americans who previously struggled to obtain coverage. This bill lets these expanded tax credits expire, which may result in an additional 4.2 million people becoming uninsured.

    An insufficient stop-gap

    Senators from both sides of the aisle have voiced concerns about the legislative package’s potential effects on the financial stability of rural hospitals and frontier hospitals, which are facilities located in remote areas with fewer than six people per square mile. As a result, the Senate voted to set aside $50 billion over the next five years for a newly created Rural Health Transformation Program.

    These funds are to be allocated in two ways. Half will be directly distributed equally to states that submit an application that includes a rural health transformation plan detailing how rural hospitals will improve the delivery and quality of health care. The remainder will be distributed to states in varying amounts through a process that is currently unknown.

    While additional funding to support rural health facilities is welcome, how it is distributed and how much is available will be critical. Estimates suggest that rural areas will see a reduction of $155 billion in federal spending over 10 years, with much of that concentrated in 12 states that expanded Medicaid under the Affordable Care Act and have large proportions of rural residents.

    That means $50 billion is not enough to offset cuts to Medicaid and other programs that will reduce funds flowing to rural health facilities.

    Americans living in rural areas are more likely to be insured through Medicaid than their urban counterparts.
    Halfpoint Images/Moment via Getty Images

    Accelerating hospital closures

    Rural and frontier hospitals have long faced hardship because of their aging infrastructure, older and sicker patient populations, geographic isolation and greater financial and regulatory burdens. Since 2010, 153 rural hospitals have closed their doors permanently or ceased providing inpatient services. This trend is particularly acute in states that have chosen not to expand Medicaid via the Affordable Care Act, many of which have larger percentages of their residents living in rural areas.

    According to an analysis by University of North Carolina researchers, as of June 2025, 338 hospitals are at risk of reducing vital services, such as skilled nursing care; converting to an alternative type of health care facility, such as a rural emergency hospital; or closing altogether.

    Maternity care is especially at risk.

    Currently, more than half of rural hospitals no longer deliver babies. Rural facilities serve fewer patients than those in more densely populated areas, and they have high fixed costs. Because they serve a high percentage of Medicaid patients, they rely on payments from Medicaid, which tends to pay lower rates than commercial insurance. Under these pressures, maternity units will continue to close, forcing women to travel farther to give birth, to deliver before reaching full term and to deliver outside of traditional hospital settings.

    And because hospitals in rural areas serve relatively small populations, they lack negotiating power to obtain fair and adequate payment from private health insurers and affordable equipment and supplies from medical companies. Recruiting and retaining needed physicians and other health care workers is expensive, and acquiring capital to renovate, expand or build new facilities is increasingly out of reach.

    Finally, given that rural residents are more likely to have Medicaid than their urban counterparts, the legislation’s cuts to Medicaid will disproportionately reduce the rate at which rural providers and health facilities are paid by Medicaid for services they offer. With many rural hospitals already teetering on closure, this will place already financially fragile hospitals on an accelerated path toward demise.

    Far-reaching effects

    Rural hospitals are not just sources of local health care. They are also vital economic engines.

    Hospital closures result in the loss of local access to health care, forcing residents to choose between traveling longer distances to see a doctor and forgoing the services they need.

    But hospitals in these regions are also major employers that often pay some of the highest wages in their communities. Their closure can drive a decline in the local tax base, limiting funding available for services such as roads and public schools and making it more difficult to attract and retain businesses that small towns depend on. Declines in rural health care undermine local economies.

    Furthermore, the country as a whole relies on rural America for the production of food, fuel and other natural resources. In our view, further weakening rural hospitals may affect not just local economies but the health of the whole U.S. economy.

    The Conversation

    Lauren S. Hughes has received funding for rural health projects from the Sunflower Foundation, The Colorado Health Foundation, the University of Colorado School of Medicine Rural Program Office, the Caring for Colorado Foundation, and the Zoma Foundation. She currently serves as chair of the Rural Health Redesign Center Organization Board of Directors and is a member of the Rural Primary Care Advisory Council with the Weitzman Institute.

    Kevin J. Bennett receives funding from the National Institutes of Health, the Centers for Disease Control & Prevention, the Health Resources and Services Administration and the state of South Carolina. He is currently on the Board of Trustees of the National Rural Health Association as immediate past president.

    ref. Rural hospitals will be hit hard by Trump’s signature spending package – https://theconversation.com/rural-hospitals-will-be-hit-hard-by-trumps-signature-spending-package-260164

  • Ageing bridges around the world are at risk of collapse. But there’s a simple way to safeguard them

    Source: ForeignAffairs4

    Source: The Conversation – Global Perspectives – By Andy Nguyen, Senior Lecturer in Structural Engineering, University of Southern Queensland

    The Story Bridge, with its sweeping steel trusses and art deco towers, is a striking sight above the Brisbane River in Queensland. In 2025, it was named the state’s best landmark. But more than an icon, it serves as one of the vital arteries of the state capital, carrying more than 100,000 vehicles daily.

    But a recent report revealed serious structural issues in the 85-year-old bridge. These included the deterioration of concrete, corrosion and overloading on pedestrian footpaths.

    The findings prompted an urgent closure of the footpath for safety reasons. They also highlighted the urgency of Brisbane City Council’s planned bridge restoration project.

    But this example – and far more tragic ones from around the world in recent years – have also sparked a broader conversation about the safety of ageing bridges and other urban infrastructure. A simple, proactive step known as structural health monitoring can help.

    A number of collapses

    In January 2022, the Fern Hollow Bridge in Pittsburgh, Pennsylvania, in the United States collapsed and injured several people. This collapse was caused by extensive corrosion and the fracturing of a vital steel component. It stemmed from poor maintenance and failure to act on repeated inspection recommendations. These problems were compounded by inadequate inspections and oversight.

    Three years earlier, Taiwan’s Nanfang’ao Bridge collapsed. Exposure to damp, salty sea air had severely weakened its suspension cables. Six people beneath the bridge died.

    In August 2018, Italy’s Morandi Bridge fell, killing 43 people. The collapse was due to corrosion in pre-stressed concrete and steel tendons. These factors were worsened by inspection and maintenance challenges.

    In August 2007, a bridge in the US city of Minneapolis collapsed, killing 13 people and injuring 145. This collapse was primarily due to previously unnoticed problems with the design of the bridge. But it also demonstrated how ageing infrastructure, coupled with increasing loads and ineffective routine visual inspections, can exacerbate inherent weaknesses.

    A technology-driven solution

    Structural health monitoring is a technology-driven approach to assessing the condition of infrastructure. It can provide near real-time information and enable timely decision-making. This is crucial when it comes to managing ageing structures.

    The approach doesn’t rely solely on periodic inspections. Instead, it uses sensors, data loggers and analytics platforms to continuously monitor stress, vibration, displacement, temperature and corrosion on critical components.
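
    In its simplest form, continuous monitoring of this kind reduces to comparing each new sensor reading against baseline statistics and flagging unusual deviations for engineers to review. The sketch below illustrates the idea only; the sensor values, window size and threshold are all invented for this example, not drawn from any real bridge monitoring system.

```python
# Minimal sketch of anomaly flagging for a single sensor channel:
# compare each new reading against the mean and standard deviation of
# a rolling baseline window, and flag large deviations for review.
# All values and thresholds here are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def make_monitor(baseline_window=50, z_threshold=4.0, warmup=10):
    """Return a closure that ingests readings and reports anomalies."""
    history = deque(maxlen=baseline_window)

    def ingest(reading):
        anomaly = False
        if len(history) >= warmup:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) > z_threshold * sigma:
                anomaly = True  # flag this reading for engineering review
        history.append(reading)
        return anomaly

    return ingest

monitor = make_monitor()
# Steady microstrain readings, then a sudden jump (e.g. a cracked member).
readings = [100.0 + 0.5 * (i % 3) for i in range(40)] + [140.0]
flags = [monitor(r) for r in readings]
print(flags[-1])  # the jump is flagged; the steady readings are not
```

    Real deployments layer far more sophisticated analysis on top of this, such as the acoustic emission monitoring and modal analysis described later in the article, but the core pattern of baseline-versus-current comparison is the same.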

    This approach can significantly improve our understanding of bridge performance compared to traditional assessment models. In one case, it updated a bridge’s estimated fatigue life – the remaining life of the structure before fatigue-induced failure is predicted to occur – from just five years to more than 52 years. This ultimately avoided unnecessary and costly restoration.

    Good structural health-monitoring systems can last several decades. They can be integrated with artificial intelligence techniques and bridge information modelling to develop digital twin-based monitoring platforms.

    The cost of structural health monitoring systems varies by bridge size and the extent of monitoring required. Some simple systems can cost just a few thousand dollars, while more advanced ones can cost more than A$300,000.

    These systems require ongoing operational support – typically 10% to 20% of the installation cost annually – for data management, system maintenance, and informed decision-making.

    Additionally, while advanced systems can be costly, scalable structural health monitoring solutions allow authorities to start small and expand over time.

    A model for proactive management

    Structural health monitoring systems have been incorporated into new large-scale bridge designs, such as the Sutong Bridge in China and the Governor Mario M. Cuomo Bridge in the US.

    But perhaps the most compelling example of these systems in action is the Jacques Cartier Bridge in Montreal, Canada.

    Opened in 1930, it shares design similarities with Brisbane’s Story Bridge. And, like many ageing structures, it faces its own challenges.

    Opened in 1930, the Jacques Cartier Bridge in Montreal, Canada, shares design similarities with Brisbane’s Story Bridge.
    Pinkcandy/Shutterstock

    However, authorities managing the Jacques Cartier Bridge have embraced a proactive approach through comprehensive structural health monitoring systems. The bridge has been outfitted with more than 300 sensors.

    Acoustic emission monitoring enables early detection of micro-cracking activity, while long-term instrumentation tracks structural deformation and dynamic behaviour across key spans.

    Satellite-based radar imagery adds a remote, non-intrusive layer of deformation monitoring, and advanced data analysis ensures that the vast amounts of sensor data are translated into timely, actionable insights.

    Together, these technologies demonstrate how a well-integrated structural health monitoring system can support proactive maintenance, extend the life of ageing infrastructure – and ultimately improve public safety.

    A way forward for Brisbane – and beyond

    The Story Bridge’s current challenges are serious, but they also present an opportunity.

    By investing in the right structural health monitoring system, Brisbane can lead the way in modern infrastructure management – protecting lives, restoring public confidence, preserving heritage and setting a precedent for cities around the world.

    As climate change, urban growth, and ageing assets put increasing pressure on our transport networks, smart monitoring is no longer a luxury – it’s a necessity.

    The Conversation

    Andy Nguyen receives funding from the Queensland government, through the Advance Queensland fellowship. He is on the executive committee of Australian Network of Structural Health Monitoring.

    ref. Ageing bridges around the world are at risk of collapse. But there’s a simple way to safeguard them – https://theconversation.com/ageing-bridges-around-the-world-are-at-risk-of-collapse-but-theres-a-simple-way-to-safeguard-them-260005

  • We don’t need deep-sea mining, or its environmental harms. Here’s why

    Source: ForeignAffairs4

    Source: The Conversation – Global Perspectives – By Justin Alger, Associate Professor / Senior Lecturer in Global Environmental Politics, The University of Melbourne

    Potato-sized polymetallic nodules from the deep sea could be mined for valuable metals and minerals. Carolyn Cole / Los Angeles Times via Getty Images

    Deep-sea mining promises critical minerals for the energy transition without the problems of mining on land. It also promises to bring wealth to developing nations. But the evidence suggests these promises are false, and mining would harm the environment.

    The practice involves scooping up rock-like nodules from vast areas of the sea floor. These potato-sized lumps contain metals and minerals such as zinc, manganese, molybdenum, nickel and rare earth elements.

    Technology to mine the deep sea exists, but commercial mining is not yet happening anywhere in the world. That could soon change. Nations are meeting this month in Kingston, Jamaica, to agree on a mining code, which would clear the way for mining to begin within the next few years.

    On Thursday, Australia’s national science agency, CSIRO, released research into the environmental impacts of deep-sea mining. It aims to promote better environmental management of deep-sea mining, should it proceed.

    We have previously challenged the rationale for deep-sea mining, drawing on our expertise in international politics and environmental management. We argue mining the deep sea is harmful and the economic benefits have been overstated. What’s more, the metals and minerals to be mined are not scarce.

    The best course of action is a ban on international seabed mining, building on the coalition for a moratorium.

    The Metals Company spent six months at sea collecting nodules in 2022, while studying the effects on ecosystems.

    Managing and monitoring environmental harm

    Recent advances in technology have made deep-sea mining more feasible. But removing the nodules – which also requires pumping water around – has been shown to damage the seabed and endanger marine life.

    CSIRO has developed the first environmental management and monitoring frameworks to protect deep sea ecosystems from mining. It aims to provide “trusted, science-based tools to evaluate the environmental risks and viability of deep-sea mining”.

    Scientists from Griffith University, Museums Victoria, the University of the Sunshine Coast, and Earth Sciences New Zealand were also involved in the work.

    The Metals Company Australia, a local subsidiary of the Canadian deep-sea mining exploration company, commissioned the research. It involved analysing data from test mining the company carried out in the Pacific Ocean in 2022.

    The company has led efforts to expedite deep-sea mining. This includes pushing for the mining code, and exploring commercial mining of the international seabed through approval from the US government.

    In a media briefing this week, CSIRO Senior Principal Research Scientist Piers Dunstan said the mining activity substantially affected the sea floor. Some marine life, especially that attached to the nodules, had very little hope of recovery. He said if mining were to go ahead, monitoring would be crucial.

    We are sceptical that ecological impacts can be managed even with this new framework. Little is known about life in these deep-water ecosystems. But research shows nodule mining would cause extensive habitat loss and damage.

    Do we really need to open the ocean frontier to mining? We argue the answer is no, on three counts.

    How does deep-sea mining work? (The Guardian)

    1. Minerals are not scarce

    The minerals required for the energy transition are abundant on land. Known global terrestrial reserves of cobalt, copper, manganese, molybdenum and nickel are enough to meet current production levels for decades – even with growing demand.

    There is no compelling reason to extract deep-sea minerals, given the economics of both deep-sea and land-based mining. Deep-sea mining is speculative and inevitably too expensive given such remote, deep operations.

    Claims about mineral scarcity are being used to legitimise a new extractive frontier in the deep sea. Opportunistic investors stand to make money through speculation and by attracting government subsidies.

    2. Mining at sea will not replace mining on land

    Proponents claim deep-sea mining can replace some mining on land. Mining on land has led to social issues including infringing on indigenous and community rights. It also damages the environment.

    But deep-sea mining will not necessarily displace, replace or change mining on land. Land-based mining contracts span decades and the companies involved will not abandon ongoing or planned projects. Their activities will continue, even if deep-sea mining begins.

    Deep-sea mining also faces many of the same challenges as mining on land, while introducing new problems. The social problems that arise during transport, processing and distribution remain the same.

    And sea-based industries are already rife with modern slavery and labour violations, partly because they are notoriously difficult to monitor.

    Deep-sea mining does not solve social problems with land-based mining, and adds more challenges.

    The sun sets on the mining vessel Hidden Gem in Rotterdam, South Holland, Netherlands, 2022.
    Hidden Gem was the world’s first deep-sea mineral production vessel with seabed-to-surface nodule collection and transport systems.
    Photo by Charles M. Vella/SOPA Images/LightRocket via Getty Images

    3. Common heritage of humankind and the Global South

    Under the United Nations Convention on the Law of the Sea, the international seabed is the common heritage of humankind. This means the proceeds of deep-sea mining should be distributed fairly among all countries.

    Deep-sea mining commercial partnerships between developing countries in the Global South and firms from the North have yet to pay off for the former. There is little indication this pattern will change.

    For example, when Canadian company Nautilus went bankrupt in 2019, it saddled Papua New Guinea with millions in debt from a failed domestic deep-sea mining venture.

    The Metals Company has partnerships with Nauru and Tonga but the latest deal with the US creates uncertainty about whether their agreements will be honoured.

    European investors took control of Blue Minerals Jamaica, originally a Jamaican-owned company, shortly after orchestrating its start-up. Any profits would therefore go offshore.

    Australian Gerard Barron is Chairman and CEO of The Metals Company, formerly DeepGreen.
    Carolyn Cole / Los Angeles Times via Getty Images

    A wise investment?

    It is unclear whether deep-sea mining will ever be a good investment.

    Multiple large corporate investors have pulled out of the industry, or gone bankrupt. And The Metals Company has received delisting notices from the Nasdaq stock exchange due to poor financial performance.

    Given the threat of environmental harm, the evidence suggests deep-sea mining is not worth the risk.

    The Conversation

    Justin Alger receives funding from the Social Sciences and Humanities Research Council of Canada.

    D.G. Webster receives funding from the National Science Foundation in the United States and various internal funding sources at Dartmouth University.

    Jessica Green receives funding from the Social Sciences and Humanities Research Council of Canada.

    Kate J Neville receives funding from the Social Sciences and Humanities Research Council of Canada.

    Stacy D VanDeveer and Susan M Park do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. We don’t need deep-sea mining, or its environmental harms. Here’s why – https://theconversation.com/we-dont-need-deep-sea-mining-or-its-environmental-harms-heres-why-260401

  • Astronomers have spied an interstellar object zooming through the Solar System

    Source: ForeignAffairs4

    Source: The Conversation – Global Perspectives – By Kirsten Banks, Lecturer, School of Science, Computing and Engineering Technologies, Swinburne University of Technology

    K Ly / Deep Random Survey

    This week, astronomers spotted the third known interstellar visitor to our Solar System.

    First detected by the Asteroid Terrestrial-impact Last Alert System (ATLAS) on July 1, the cosmic interloper was given the temporary name A11pl3Z. Experts at NASA’s Center for Near Earth Object Studies and the International Astronomical Union (IAU) have confirmed the find, and the object now has an official designation: 3I/ATLAS.

    The orbital path of 3I/ATLAS through the Solar System.
    NASA/JPL-Caltech, CC BY-NC

    There are a few strong clues that suggest 3I/ATLAS came from outside the Solar System.

    First, it’s moving really fast. Current observations show it speeding through space at around 245,000km per hour. That’s more than enough to escape the Sun’s gravity.

    An object near Earth’s orbit would only need to be travelling at just over 150,000km/h to break free from the Solar System.

    Second, 3I/ATLAS has a wildly eccentric orbit around the Sun. Eccentricity measures how “stretched” an orbit is: 0 eccentricity is a perfect circle, and anything up to 1 is an increasingly strung-out ellipse. Above 1 is an orbit that is not bound to the Sun.

    3I/ATLAS has an estimated eccentricity of 6.3, by far the highest ever recorded for any object in the Solar System.
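
    Both clues can be sanity-checked from first principles. The speed needed to escape the Sun from a distance r is v_esc = √(2GM☉/r). A short Python check, using standard published values for the Sun’s gravitational parameter and the Earth–Sun distance (assumed here, not taken from the article):

```python
# Back-of-envelope check of the two interstellar clues, using the
# escape-speed relation v_esc = sqrt(2 * G * M_sun / r).
# Constants are standard published values, used here for illustration.
import math

GM_SUN = 1.327e20   # Sun's gravitational parameter, m^3 / s^2
AU = 1.496e11       # Earth-Sun distance, in metres

# Escape speed from the Sun at Earth's orbital distance (1 au):
v_esc_kmh = math.sqrt(2 * GM_SUN / AU) * 3.6   # m/s -> km/h
print(round(v_esc_kmh))   # roughly 151,600: "just over 150,000 km/h"

# 3I/ATLAS's reported ~245,000 km/h comfortably exceeds that, and an
# eccentricity above 1 means an unbound, hyperbolic orbit:
e = 6.3
print(245_000 > v_esc_kmh, e > 1)
```

    The same relation explains why the escape-speed figure depends on distance: an object much farther from the Sun than Earth needs a lower speed to break free.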

    Has anything like this happened before?

    An artist’s impression of the first confirmed interstellar object, 1I/‘Oumuamua.
    ESO/M. Kornmesser, CC BY

    The first interstellar object spotted in our Solar System was the cigar-shaped ‘Oumuamua, discovered in 2017 by the Pan-STARRS1 telescope in Hawaii. Scientists tracked it for 80 days before eventually confirming it came from interstellar space.

    The interstellar comet 2I/Borisov, imaged by the Hubble Space Telescope.
    NASA, ESA, and D. Jewitt (UCLA), CC BY-NC

    The second interstellar visitor, comet 2I/Borisov, was discovered two years later by amateur astronomer Gennadiy Borisov. This time it only took astronomers a few weeks to confirm it came from outside the Solar System.

    The interstellar origin of 3I/ATLAS, by contrast, was confirmed in a matter of days.

    How did it get here?

    We have only ever seen three interstellar visitors (including 3I/ATLAS), so it’s hard to know exactly how they made their way here.

    However, recent research published in The Planetary Science Journal suggests these objects might be more common than we once thought. In particular, they may come from relatively nearby star systems such as Alpha Centauri (our nearest interstellar neighbour, a mere 4.4 light years away).

    Alpha Centauri A and Alpha Centauri B, from the triple star system Alpha Centauri.
    ESA/Hubble & NASA, CC BY

    Alpha Centauri is slowly moving closer to us, with its closest approach expected in about 28,000 years. If it flings out material in the same way our Solar System does, scientists estimate around a million objects from Alpha Centauri larger than 100 metres in diameter could already be in the outer reaches of our Solar System. That number could increase tenfold as Alpha Centauri gets closer.

    Most of this material would have been ejected at relatively low speeds, less than 2km/s, making it more likely to drift into our cosmic neighbourhood over time and not dramatically zoom in and out of the Solar System like 3I/ATLAS appears to be doing. While the chance of one of these objects coming close to the Sun is extremely small, the study suggests a few tiny meteors from Alpha Centauri, likely no bigger than grains of sand, may already hit Earth’s atmosphere every year.

    Why is this interesting?

    Discovering new interstellar visitors like 3I/ATLAS is thrilling, not just because they’re rare, but because each one offers a unique glimpse into the wider galaxy. Every confirmed interstellar object expands our catalogue and helps scientists better understand the nature of these visitors, how they travel through space, and where they might have come from.

    A swarm of new asteroids discovered by the NSF–DOE Vera C. Rubin Observatory.

    Thanks to powerful new observatories such as the NSF–DOE Vera C. Rubin Observatory, our ability to detect these elusive objects is rapidly improving. In fact, during its first 10 hours of test imaging, Rubin revealed 2,104 previously unknown asteroids.

    This is an astonishing preview of what’s to come. With its wide field of view and constant sky coverage, Rubin is expected to revolutionise our search for interstellar objects, potentially turning rare discoveries into routine ones.

    What now?

    There’s still plenty left to uncover about 3I/ATLAS. Right now, it’s officially classified as a comet by the IAU Minor Planet Center.

    But some scientists argue it might actually be an asteroid, roughly 20km across, based on the lack of typical comet-like features such as a glowing coma or a tail. More observations will be needed to confirm its nature.

    Currently, 3I/ATLAS is inbound, just inside Jupiter’s orbit. It’s expected to reach its closest point to the Sun, slightly closer in than the orbit of Mars, on October 29. After that, it will swing back out towards deep space, making its closest approach to Earth in December. (It will pose no threat to our planet.)

    Whether it’s a comet or an asteroid, 3I/ATLAS is a messenger from another star system. For now, these sightings are rare – though as next-generation observatories such as Rubin swing into operation, we may discover interstellar companions all around.

    The Conversation

    Kirsten Banks does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Astronomers have spied an interstellar object zooming through the Solar System – https://theconversation.com/astronomers-have-spied-an-interstellar-object-zooming-through-the-solar-system-260422

  • Cape Town’s sewage treatment isn’t coping: scientists are worried about what the city is telling the public

    Source: ForeignAffairs4

    Source: The Conversation – Africa – By Lesley Green, Professor of Earth Politics and Director: Environmental Humanities South, University of Cape Town

    Urban water bodies – rivers, lakes and oceans – are in trouble globally. Large sewage volumes damage the open environment, and new chemicals and pharmaceutical compounds don’t break down on their own. When they are released into the open environment, they build up in living tissues all along the food chain, bringing with them multiple health risks.

    The city of Cape Town, South Africa, is no exception. It has 300km of coastline along two bays and a peninsula, as well as multiple rivers and wetlands. The city discharges more than 40 megalitres of raw sewage directly into the Atlantic Ocean every day. In addition, large volumes of poorly treated sewage and runoff from shack settlements enter rivers and from there into both the Atlantic and the Indian Oceans.

    Over almost a decade, our multi-disciplinary team, and others, have studied contamination risks in Cape Town’s oceans, rivers, aquifers and lakes. Our goal has been to bring evidence of contaminants to the attention of officials responsible for a clean environment.

    Monitoring sewage levels in the city’s water bodies is essential because of the health risks posed by contaminated water to all citizens – farmers, surfers, and everybody eating fish and vegetables. Monitoring needs to be done scientifically and in a way that produces data that is trustworthy and not driven by vested interests. This is a challenge in cities where scientific findings are expected to support marketing of tourism or excellence of the political administration.

    Our research findings have been published in multiple peer-reviewed journals. We have also communicated with the public through articles in the media, a website and a documentary.

    Cape Town’s official municipal responses to independent studies and reports, however, have been hostile. Our work has been unjustifiably denounced by top city officials and politicians. We have been subject to attacks by fake social media avatars. Our laboratory studies have even drawn a demand for an apology from the political party in charge of the city.

    These extraordinary responses – and many others – reflect the extent to which independent scientific inquiry has been under attack.

    We set about tracking the different kinds of denial and attacks on independent contaminant science in Cape Town over 11 years. Our recently published study describes 18 different types of science communication that have minimised or denied the problem of contamination. It builds on similar studies elsewhere.

    Our findings show the extent to which contaminant science in Cape Town is at risk of producing not public knowledge but public ignorance, reflecting similar patterns internationally where science communication sometimes obfuscates more than it informs. To address this risk, we argue that institutionalised conflicts of interest should be removed. There should also be changes to how city-funded testing is done and when data is released to citizens. After all, it is citizens’ rates and taxes that have paid for that testing, and the South African constitution guarantees the right to information.

    We also propose that the city’s political leaders take the courageous step of accepting that the current water treatment infrastructure is unworkable for a city of over 5 million people. Accepting this would open the door to an overhaul of the city’s approach to wastewater treatment.

    The way forward

    We divided our study of contaminant communication events into four sub-categories:

    • non-disclosure of data

    • misinformation that gives a partial or misleading account of a scientific finding

    • using city-funded science to bolster political authority

    • relying on point data collected fortnightly to prove “the truth” about bodies of water as if they never move or change, when in reality water bodies move every second of every day.

    We found evidence of multiple instances of miscommunication. On the basis of these, we make specific recommendations.

    First: municipalities should address conflicts of interest that are built into their organisational structure. These arise when the people responsible for ensuring that water bodies are healthy are simultaneously contracting consultants to conduct research on water contaminants. This is particularly important because over the last two decades large consultancies have established themselves as providers of scientific certification. But they are profit-making ventures, which calls into question the independence of their findings.

    Second: the issue of data release needs to be addressed. Two particular problems stand out:

    • Real-time information. Water quality results for beaches are usually released a week or more after samples have been taken. But because water moves constantly, people living in the city need real-time information. Best-practice water contamination measures use water current models to predict where contaminated water will be, given each day’s different winds and temperatures.

    • Poor and incomplete data. When ocean contaminant data is released as a 12-month rolling average, all the very high values are smoothed out. The end result is a figure that does not communicate the reality of risks under different conditions.
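    The smoothing effect is easy to demonstrate. A minimal sketch, using invented readings rather than the city’s actual data, shows how a 12-month rolling average can bury a dangerous spike:

    ```python
    # Illustrative only: hypothetical monthly bacterial counts (per 100 ml).
    # One month has an extreme contamination spike.
    readings = [50, 40, 60, 45, 55, 50, 4000, 60, 50, 45, 55, 50]

    annual_average = sum(readings) / len(readings)

    print(max(readings))           # 4000 -- the risk a swimmer actually faces on the worst day
    print(round(annual_average))   # 380  -- the smoothed figure a rolling average would report
    ```

    The averaged figure is an order of magnitude below the peak value, which is the authors’ point: a rolling average cannot communicate risk under different conditions.
    
    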

    Third: Politicians should be accountable for their public statements on science. Independent and authoritative scientific bodies, such as the Academy of Science of South Africa, should be empowered to audit municipal science communications.

    Fourth: Reputational harm to the science community must stop. Government officials claiming that they alone know a scientific truth and denouncing independent scientists with other data closes down the culture of scientific inquiry. And it silences others.

    Fifth: The integrity of scientific findings needs to be protected. Many cities, including Cape Town, rely on corporate brand management and political reputation management. Nevertheless, cities, by their very nature, have to deal with sewage, wastes and runoff. Public science communication that is based on marketing strategies prioritises advancing a brand (whether of a political party or a tourist destination). The risk is that city-funded science is turned into advertising and is presented as unquestionable.

    Finally, Cape Town needs political leaders who are courageous enough to confront two evident realities. Current science communications in the city are not serving the public well, and wastewater treatment systems that use rivers and oceans as open sewers are a solution designed a century ago. Both urgently need to be reconfigured.

    Next steps

    As a team of independent contaminant researchers we have worked alongside communities where health, ecology, livestock and recreation have been profoundly harmed by ongoing contamination. We have documented these effects, only to hear the evidence denied by officials.

    We recognise and value the beginnings of some new steps to data transparency in Cape Town’s mayoral office, like rescinding the 2021 by-law that banned independent scientific testing of open water bodies, almost all of which are classified as nature reserves.

    We would welcome a dialogue on building strong and credible public science communications.

    This study is dedicated to the memory of Mpharu Hloyi, head of Scientific Services in the City of Cape Town, in acknowledgement of her dedication to the health of urban bodies of water. Her untimely passing was a loss for all.

    This article also drew on Masters theses written by Melissa Zackon and Amy Beukes.

    The Conversation

    Lesley Green has received funding from the Science for Africa Foundation; the Seed Box MISTRA Formas Environmental Humanities Collaboratory; and the Science For Africa Foundation’s DELTAS Africa II program (Del:22-010).

    Cecilia Yejide Ojemaye receives funding from the University of Cape Town Carnegie DEAL Sustainable Development Goals Research Fellowship and the National Research Foundation for the SanOcean grant from the South Africa‐Norway Cooperation on Ocean Research (UID 118754).

    Leslie Petrik received funding from National Research Foundation for the SanOcean grant from the South Africa‐Norway Cooperation on Ocean Research (UID 118754) for this study.

    Nikiwe Solomon received funding at different stages for PhD research from the Water Research Commission (WRC) and National Institute for Humanities and Social Sciences (NIHSS), in collaboration with the South African Humanities Deans Association (SAHUDA). Opinions expressed and conclusions arrived at are those of the author and are not necessarily to be attributed to the WRC, NIHSS and SAHUDA.

    Jo Barnes and Vanessa Farr do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Cape Town’s sewage treatment isn’t coping: scientists are worried about what the city is telling the public – https://theconversation.com/cape-towns-sewage-treatment-isnt-coping-scientists-are-worried-about-what-the-city-is-telling-the-public-260317

  • How can the James Webb Space Telescope see so far?

    Source: ForeignAffairs4

    Source: The Conversation – USA – By Adi Foord, Assistant Professor of Astronomy and Astrophysics, University of Maryland, Baltimore County

    This is a James Webb Space Telescope image of NGC 604, a star-forming region about 2.7 million light-years from Earth. NASA/ESA/CSA/STScI

    Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


    How does the camera on the James Webb Space Telescope work and see so far out? – Kieran G., age 12, Minnesota


    Imagine a camera so powerful it can see light from galaxies that formed more than 13 billion years ago. That’s exactly what NASA’s James Webb Space Telescope is built to do.

    Since it launched in December 2021, Webb has been orbiting more than a million miles from Earth, capturing breathtaking images of deep space. But how does it actually work? And how can it see so far? The secret lies in its powerful cameras – especially ones that don’t see light the way our eyes do.

    I’m an astrophysicist who studies galaxies and supermassive black holes, and the Webb telescope is an incredible tool for observing some of the earliest galaxies and black holes in the universe.

    When Webb takes a picture of a distant galaxy, astronomers like me are actually seeing what that galaxy looked like billions of years ago. The light from that galaxy has been traveling across space for the billions of years it takes to reach the telescope’s mirror. It’s like having a time machine that takes snapshots of the early universe.

    By using a giant mirror to collect ancient light, Webb has been discovering new secrets about the universe.

    A telescope that sees heat

    Unlike regular cameras or even the Hubble Space Telescope, which take images of visible light, Webb is designed to see a kind of light that’s invisible to your eyes: infrared light. Infrared light has longer wavelengths than visible light, which is why our eyes can’t detect it. But with the right instruments, Webb can capture infrared light to study some of the earliest and most distant objects in the universe.

    A dog, shown normally, then through thermal imaging, with the eyes, mouth and ears brighter than the rest of the dog.
    Infrared cameras, like night-vision goggles, allow you to ‘see’ the infrared waves emitted by warm objects such as humans and animals. The temperatures for the images are in degrees Fahrenheit.
    NASA/JPL-Caltech

    Although the human eye cannot see it, people can detect infrared light as a form of heat using specialized technology, such as infrared cameras or thermal sensors. For example, night-vision goggles use infrared light to detect warm objects in the dark. Webb uses the same idea to study stars, galaxies and planets.

    Why infrared? When visible light from faraway galaxies travels across the universe, it stretches out. This is because the universe is expanding. That stretching turns visible light into infrared light. So, the most distant galaxies in space don’t shine in visible light anymore – they glow in faint infrared. That’s the light Webb is built to detect.

    A diagram of the electromagnetic spectrum, with radio, micro and infrared waves having a longer wavelength than visible light, while UV, X-ray and gamma rays have shorter wavelengths than visible light.
    The rainbow of visible light that you can see is only a small slice of all the kinds of light. Some telescopes can detect light with a longer wavelength, such as infrared light, or light with a shorter wavelength, such as ultraviolet light. Others can detect X-rays or radio waves.
    Inductiveload, NASA/Wikimedia Commons, CC BY-SA

    A golden mirror to gather the faintest glow

    Before the light reaches the cameras, it first has to be collected by the Webb telescope’s enormous golden mirror. This mirror is over 21 feet (6.5 meters) wide and made of 18 smaller mirror pieces that fit together like a honeycomb. It’s coated in a thin layer of real gold – not just to look fancy, but because gold reflects infrared light extremely well.

    The mirror gathers light from deep space and reflects it into the telescope’s instruments. The bigger the mirror, the more light it can collect – and the farther it can see. Webb’s mirror is the largest ever launched into space.

    The JWST's mirror, which looks like a large, roughly hexagonal shiny surface made up of 18 smaller hexagons put together, sitting in a facility. The mirror is reflecting the NASA meatball logo.
    Webb’s 21-foot primary mirror, made of 18 hexagonal mirrors, is coated with a plating of gold.
    NASA

    Inside the cameras: NIRCam and MIRI

    The most important “eyes” of the telescope are two science instruments that act like cameras: NIRCam and MIRI.

    NIRCam stands for near-infrared camera. It’s the primary camera on Webb and takes stunning images of galaxies and stars. It also has a coronagraph – a device that blocks out starlight so it can photograph very faint objects near bright sources, such as planets orbiting bright stars.

    NIRCam works by imaging near-infrared light, the type just beyond what human eyes can see, and splitting it into different wavelengths. This helps scientists learn not just what something looks like but what it’s made of. Different materials in space absorb and emit infrared light at specific wavelengths, creating a kind of unique chemical fingerprint. By studying these fingerprints, scientists can uncover the properties of distant stars and galaxies.

    MIRI, or the mid-infrared instrument, detects longer infrared wavelengths, which are especially useful for spotting cooler and dustier objects, such as stars that are still forming inside clouds of gas. MIRI can even help find clues about the types of molecules in the atmospheres of planets that might support life.

    Both cameras are far more sensitive than the standard cameras used on Earth. NIRCam and MIRI can detect the tiniest amounts of heat from billions of light-years away. If you had Webb’s NIRCam as your eyes, you could see the heat from a bumblebee on the Moon. That’s how sensitive it is.

    Two photos of space, with lots of stars and galaxies shown as little dots. The right image shows more, brighter dots than the left.
    Webb’s first deep-field image: The MIRI image is on the left and the NIRCam image is on the right.
    NASA

    Because Webb is trying to detect faint heat from faraway objects, it needs to keep itself as cold as possible. That’s why it carries a giant sun shield about the size of a tennis court. This five-layer sun shield blocks heat from the Sun, Earth and even the Moon, helping Webb stay incredibly cold: around -370 degrees F (-223 degrees C).

    MIRI needs to be even colder. It has its own special refrigerator, called a cryocooler, to keep it chilled to nearly -447 degrees F (-266 degrees C). If Webb were even a little warm, its own heat would drown out the distant signals it’s trying to detect.

    Turning space light into pictures

    Once light reaches the Webb telescope’s cameras, it hits sensors called detectors. These detectors don’t capture regular photos like a phone camera. Instead, they convert the incoming infrared light into digital data. That data is then sent back to Earth, where scientists process it into full-color images.

    The colors we see in Webb’s pictures aren’t what the camera “sees” directly. Because infrared light is invisible, scientists assign colors to different wavelengths to help us understand what’s in the image. These processed images help show the structure, age and composition of galaxies, stars and more.

    By using a giant mirror to collect invisible infrared light and sending it to super-cold cameras, Webb lets us see galaxies that formed just after the universe began.


    Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

    And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

    The Conversation

    Adi Foord does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How can the James Webb Space Telescope see so far? – https://theconversation.com/how-can-the-james-webb-space-telescope-see-so-far-257421

  • Thailand’s judiciary is flexing its muscles, but away from PM’s plight, dozens of activists are at the mercy of capricious courts

    Source: ForeignAffairs4

    Source: The Conversation – Global Perspectives – By Tyrell Haberkorn, Professor of Southeast Asian Studies, University of Wisconsin-Madison

    Thai Prime Minister Paetongtarn Shinawatra is swarmed by members of the media after a cabinet meeting at Government House on July 1, 2025. Anusak Laowilas/NurPhoto via Getty Images

    Thai Prime Minister Paetongtarn Shinawatra is currently feeling the sharp end of the country’s powerful judiciary.

    On July 2, 2025, Thailand’s Constitutional Court suspended Paetongtarn from office over a leaked phone conversation in which she was heard disparaging Thailand’s military and showing deference to Hun Sen, the former prime minister of Cambodia, despite an ongoing border dispute between the two countries. The suspension was initially set for 14 days, but many onlookers believe it is likely to become permanent.

    Meanwhile, far from the prime minister’s office is Arnon Nampa, another Thai national whose future is at the mercy of the Thai judiciary – in this case, the Criminal Court.

    Arnon, a lawyer and internationally recognized human rights defender, is one of 32 political prisoners imprisoned over “lèse majesté,” or insulting the Thai monarchy. He is currently serving a sentence of nearly 30 years for a speech questioning the monarchy during pro-democracy protests in 2020. Unless he is both acquitted in his remaining cases and his current convictions are overturned on appeal, Arnon will likely spend the rest of his life in prison.

    The plights of Paetongtarn and Arnon may seem distant. But as a historian of Thai politics, I see the cases as connected by a judiciary using the law and its power to diminish the prospects for democracy in Thailand and constrain the ability of its citizens to participate freely in society.

    Familiar troubles

    The Shinawatra family is no stranger to the reach of both the Thai military and the country’s courts.

    Paetongtarn is the third of her family to be prime minister – and could become the third to be ousted. Her father, Thaksin Shinawatra, was removed in a 2006 military coup. Her aunt, Yingluck Shinawatra, was ousted just before the May 22, 2014, coup. As with past coups, the juntas that staged them were shielded from the law, with none facing prosecution.

    For now, it is unclear whether Paetongtarn’s suspension is the precursor to another coup, the dissolution of parliament and new elections, or a reshuffle of the cabinet. But what is clear is that the Constitutional Court’s intervention is one of several in which the nine appointed judges are playing a critical role in the future of Thai democracy.

    Protecting the monarchy

    The root of the judiciary’s power can be found in the way the modern Thai nation was set up nearly 100 years ago.

    On June 24, 1932, Thailand transitioned from an absolute monarchy to a constitutional monarchy. Since then, the country has experienced 13 coups as it has shifted from democracy to dictatorship and back again.

    But throughout, the monarchy has remained a constant presence – protected by Article 112 of the Criminal Code, which defines the crime and penalty of lèse majesté: “Whoever defames, insults, or threatens the king, queen, heir-apparent or regent shall be subject to three-to-fifteen years imprisonment.”

    The law is widely feared among dissidents in Thailand, both because it is interpreted broadly to cover any speech or action that is not laudatory and because acquittals are rare.

    Although Article 112 has been law since 1957, it was rarely used until after the 2006 coup.

    Since then, cases have risen steadily and reached record levels following a youth-led movement for democracy in 2020. At least 281 people have been, or are currently being, prosecuted for alleged violation of Article 112, according to Thai Lawyers for Human Rights.

    Challenging the status quo

    The 2020 youth-led movement for democracy was sparked by the Constitutional Court’s dissolution of the progressive Future Forward Party at the beginning of that year, the disappearance of a Thai dissident in exile in Cambodia, and economic problems caused by the COVID-19 pandemic.

    In protests in Bangkok and in provinces across the country, demonstrators called for a new election, a new constitution and an end to state repression of dissent.

    A man next to illuminated building gestures to the crowd
    Pro-democracy activist leader Arnon Nampa speaks to protesters.
    Peerapon Boonyakiat/SOPA Images/LightRocket via Getty Images

    On Aug. 3, 2020, Arnon added another demand: The monarchy must be openly discussed and questioned.

    Without addressing such a key, unquestionable institution in the nation, Arnon argued, the struggle for democracy would inevitably fail.

    This message resonated with many Thai citizens, and despite the fearsome Article 112, protests grew throughout the last months of 2020.

    Students at Thammasat University, the center of student protest since the 1950s, expanded Arnon’s call into a 10-point set of demands for reform of the monarchy.

    The students made clear that they did not aim to abolish the monarchy; rather, their proposal sought to clarify the monarchy’s economic, political and military role and make it truly constitutional.

    As the protests began to seem unstoppable, with tens of thousands joining, the police began cracking down on demonstrations. Many were arrested for violating anti-COVID-19 measures and other minor laws. By late November 2020, however, Article 112 charges began to be brought against Arnon and other protest leaders for their peaceful speech.

    In September 2023, Arnon was convicted in his first case, and he has been behind bars since. He is joined by other political prisoners, whose numbers grow weekly as their cases move through the judicial process.

    Capricious courts

    Unlike Arnon, Paetongtarn Shinawatra is not facing prison.

    But the Constitutional Court’s decision to suspend her from her position as prime minister because of a leaked recording of an indiscreet telephone conversation is, to many legal minds, a capricious response that has the effect of short-circuiting the democratic process.

    So too, I believe, does bringing the weight of the law against Arnon and other political prisoners in Thailand who remain behind bars as the current political turmoil plays out.

    The Conversation

    Tyrell Haberkorn does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Thailand’s judiciary is flexing its muscles, but away from PM’s plight, dozens of activists are at the mercy of capricious courts – https://theconversation.com/thailands-judiciary-is-flexing-its-muscles-but-away-from-pms-plight-dozens-of-activists-are-at-the-mercy-of-capricious-courts-260408

  • Nations are increasingly ‘playing the field’ when it comes to the US and China – a new book explains why ‘active nonalignment’ is on the march

    Source: ForeignAffairs4

    Source: The Conversation – Global Perspectives – By Jorge Heine, Outgoing Interim Director of the Frederick S. Pardee Center for the Study of the Longer-Range Future, Boston University

    Brazil President Luiz Inacio Lula da Silva, center, flanked by India Prime Minister Narendra Modi, left, and South Africa President Cyril Ramaphosa, speaks at the summit of Group of 20 leading economies in Rio de Janeiro on Nov. 19, 2024. Mauro Pimentel/AFP via Getty Images

    In 2020, as Latin American countries were contending with the triple challenges of the COVID-19 pandemic, a global economic shock and U.S. policy under the first Trump administration, Jorge Heine, research professor at Boston University and a former Chilean ambassador, in association with two colleagues, Carlos Fortin and Carlos Ominami, put forward the notion of “active nonalignment.”

    A book cover with the title 'The Non-Aligned World.'

    Polity Books

    Five years on, the foreign policy approach is more relevant than ever, with trends including the rise of the Global South and the fragmentation of the global order, encouraging countries around the world to reassess their relationships with both the United States and China.

    It led Heine, along with Fortin and Ominami, to follow up on their original arguments in a new book, “The Non-Aligned World,” published in June 2025.

    The Conversation spoke with Heine on what is behind the push toward active nonalignment, and where it may lead.

    For those not familiar, what is active nonalignment?

    Active nonalignment is a foreign policy approach in which countries put their own interests front and center and refuse to take sides in the great power rivalry between the U.S. and China.

    It takes its cue from the Non-Aligned Movement of the 1950s and 1960s but updates it to the realities of the 21st century. Today’s rising Global South is very different from the “Third World” that made up the Non-Aligned Movement. Countries like India, Turkey, Brazil and Indonesia have greater economic heft and wherewithal. They thus have more options than in the past.

    They can pick and choose policies in accordance with what is in their national interests. And because there is competition between Washington and Beijing to win over such countries’ hearts and minds, those looking to promote a nonaligned agenda have greater leverage.

    Traditional international relations literature suggests that in relations between nations, you can either “balance,” meaning take a strong position against another power, or “bandwagon” – that is, go along with the wishes of that power. The notion was that weaker states couldn’t balance against the great powers because they lacked the military means to do so, and thus had to bandwagon.

    What we are saying is that there is an intermediate approach: hedging. Countries can hedge their bets or equivocate by playing one power off the other. So, on some issues you side with the U.S., and others you side with China.

    Thus, the grand strategy of active nonalignment is “playing the field,” or in other words, searching for opportunities among what is available in the international environment. This means being constantly on the lookout for potential advantages and available resources – in short, being active, rather than passive or reactive.

    So active nonalignment is not so much a movement as it is a doctrine.

    Two men in suits sit behind a desk chatting.
    Tunisian President Habib Bourguiba, right, and Egyptian President Gamal Abdel Nasser attend the first Conference of Non-Aligned Countries in Belgrade, Yugoslavia, in September 1961.
    Keystone/Hulton Archive/Getty Images

    It’s been five years since you first came up with the idea of active nonalignment. Why did you think it was time to revisit it now?

    The notion of active nonalignment came up during the first Trump administration and in the context of a Latin America hit by the triple-whammy of U.S. pressure, a pandemic and the ensuing recession – which in Latin America translated into the biggest economic downturn in 120 years, a 6.6% drop of regional gross domestic product in 2020.

    ANA was intended as a guide for Latin American countries to navigate those difficult moments, and it led us to the publication of a symposium volume with contributions by six former Latin American foreign ministers in November 2021, in which we elaborated on the concept.

    Three months later, with the Russian invasion of Ukraine and the reaction to it by many countries in Asia and Africa, nonalignment was back with a vengeance.

    Countries like India, Pakistan, South Africa and Indonesia, among others, took positions that were at odds with the West on Ukraine. Many of them, though not all, condemned Russian aggression but also wanted no part in the West’s sanctions on Moscow. These sanctions were seen as unwarranted and as an expression of Western double standards – no sanctions were applied on the U.S. for invading Iraq, of course.

    And then there were the Hamas attacks on Israel on Oct. 7, 2023, and the resulting war in the Gaza Strip. Countries across the Global South strongly condemned the Hamas attacks, but the West’s response to the subsequent deaths of tens of thousands of Palestinians brought home the notion of double standards when it came to international human rights.

    Why weren’t Palestinians deserving of the same compassion as Ukrainians? For many in the Global South, that question hit very hard: the idea that human rights are limited to Europeans and people who look like them did not go down well.

    Thus, South Africa brought a case against Israel in the International Court of Justice alleging genocide, and Brazil spearheaded ceasefire efforts at the United Nations.

    A third development is the expansion of the BRICS bloc of economies from its original five members – Brazil, Russia, India, China and South Africa – to 10 members. Although China and Russia are not members of the Global South, those other founding members are, and the BRICS group has promoted key issues on the Global South’s agenda. The addition of countries such as Egypt and Ethiopia has meant that BRICS has increasingly taken on the guise of the Global South forum. Brazil President Luiz Inácio Lula da Silva, a leading proponent of BRICS, is keen on advancing this Global South agenda.

    All three of these developments have made active nonalignment more relevant than ever before.

    How are China and the US responding to active nonalignment – or are they?

    I’ll give you two examples: Angola and Argentina.

    In Angola, the African country that has received the most Chinese financing – some US$45 billion – you now have the U.S. financing what is known as the Lobito Corridor, a railway line that stretches from the eastern border of the Democratic Republic of the Congo to Angola’s Atlantic coast.

    Ten years ago, the notion that the U.S. would be financing railway projects in southern Africa would have been considered unfathomable. Yet it has happened. Why? Because China has built significant railway lines in countries such as Kenya and Ethiopia, and the U.S. realized that it was being left behind.

    For the longest time, the U.S. would condemn such Chinese-financed infrastructure projects via the “Belt and Road Initiative” as nothing but “debt-trap diplomacy” designed to saddle developing nations with “white elephants” nobody needed. But a couple of years ago, that tune changed: The U.S. and Europe realized that there is a big infrastructure deficit in Asia, Africa and Latin America that China was stepping in to reduce – and the West was nowhere to be seen in this critical area.

    In short, the West changed its approach – and countries like Angola are now able to play the U.S. off against China for their own national interests.

    Then take Argentina. In 2023, Javier Milei was elected president on a strong anti-China platform. He said his government would have nothing to do with Beijing. But just two years later, Milei announced in an Economist interview that he was a great admirer of Beijing.

    Why? Because Argentina has a very significant foreign debt, and Milei knew that a continued anti-China stance would mean a credit line from Beijing would likely not be renewed. The Argentinian president was under pressure from the International Monetary Fund and Washington to let the credit line with China lapse, but Milei refused and managed to hold his own, playing both ends against the middle.

    Milei is a populist conservative; Brazil’s Lula is a leftist. So is active nonalignment immune to ideological differences?

    Absolutely. When people ask me what the difference is between traditional nonalignment and active nonalignment, one of the most obvious things is that the latter is nonideological – it can be used by people of the right, left and center. It is a guide to action, a compass to navigate the waters of a highly troubled world, and can be used by governments of very different ideological hues.

    Two men in suits turn away from each other.
    Brazil President Luiz Inacio Lula da Silva and Argentina President Javier Milei at the 66th Summit of leaders of the Mercosur trading bloc in Buenos Aires on July 3, 2025.
    Luis Robayo/AFP via Getty Images

    The book talks a lot about the fragmentation of the rules-based order. Where do you see this heading?

    There is little doubt that the liberal international order that framed world politics from 1945 to 2016 has come to an end. Some of its bedrock principles, like multilateralism, free trade and respect for international law and existing international treaties, have been severely undermined.

    We are now in a transitional stage. The notion of the West as a geopolitical entity, as we knew it, has ceased to exist. We now have the extraordinary situation where illiberal forces in Hungary, Germany and Poland, among other places, are being supported by those in power in both Washington and Moscow.

    And this decline of the West has not come about because of any economic issue – the U.S. still represents around 25% of global GDP, much as it did in 1970 – but because of the breakdown of the trans-Atlantic alliance.

    So we are moving toward a very different type of world order – and one in which the Global South has the opportunity to have much more of a role, especially if it deploys active nonalignment.

    How have events since Trump’s inauguration played into your argument?

    The notion of active nonalignment was triggered by the first Trump administration’s pressure on Latin American countries. I would argue that the measures undertaken in Trump’s second administration – the tariffs imposed on 90 countries around the world; the U.S. leaving the Paris climate agreement, the World Health Organization and the U.N. Human Rights Council; and other “America First” policies – have only underscored the validity of active nonalignment as a foreign policy approach.

    The pressures on countries across the Global South are very strong, and there is a temptation to give in to Trump and align with the U.S. Yet all indications are that simply giving in to Trump’s demands isn’t a recipe for success: Countries that have gone down that route only face more demands afterward. They need a different approach – and that can be found in active nonalignment.

    The Conversation

    Jorge Heine does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Nations are increasingly ‘playing the field’ when it comes to US and China – a new book explains why ‘active nonalignment’ is on the march – https://theconversation.com/nations-are-increasingly-playing-the-field-when-it-comes-to-us-and-china-a-new-book-explains-explains-why-active-nonalignment-is-on-the-march-260234

  • Here’s a way to save lives, curb traffic jams and make commutes faster and easier − ban left turns at intersections

    Source: ForeignAffairs4

    Source: The Conversation – USA – By Vikash V. Gayah, Associate Professor of Civil Engineering, Penn State

    Research shows left turns at intersections are dangerous and slow traffic. Benjamin Rondel/The Image Bank via Getty Images

    More than 60% of traffic collisions at intersections involve left turns. Some U.S. cities – including San Francisco, Salt Lake City and Birmingham, Alabama – are restricting left turns.

    Dr. Vikash Gayah, a professor of civil engineering at Penn State University and the interim director of the Larson Transportation Institute, discusses how left turns at intersections cause accidents, make traffic worse and use more gas.

    Dr. Vikash Gayah discusses why left turns should be banned at some intersections.

    The Conversation has collaborated with SciLine to bring you highlights from the discussion, edited for brevity and clarity.

    How dangerous are left turns at intersections?

    Vikash Gayah: When you make a left turn, you have to cross oncoming traffic. When you have a green light, you need to wait for a gap in the oncoming traffic before turning left. If you misjudge when you decide to turn, you could hit the oncoming traffic, or be hit by it. That’s an angle crash, one of the most dangerous types of crashes.

    Also, the driver of the left-turning vehicle is typically looking at oncoming traffic. But pedestrians may be crossing the street the driver is turning onto. Often the driver doesn’t see the pedestrians, and that too can cause a serious accident.

    On the other hand, right turns require merging into traffic, but they don’t conflict directly with oncoming traffic. So right turns are much, much safer than left turns.

    What are the statistics on the unique dangers of left turns?

    Gayah: Approximately 40% of all crashes occur at intersections; 50% of those crashes involve a serious injury, and 20% involve a fatality.

    About 61% of the crashes at intersections involve a left turn. Left-hand turns are generally the least frequent movement at an intersection, so that 61% is a lot.

    Why are left turns inefficient for traffic flow?

    Gayah: When left-turning vehicles are waiting for the gap, they can block other lanes from moving, particularly when several vehicles are waiting to turn left.

    Instead of the solid green light, many intersections use the green arrow to let left-turning vehicles move. But to do that, all other movements at the intersection have to stop. Stopping all other traffic just to serve a few left turns makes the intersection less efficient.

    Also, every time you move to another “phase” of traffic – like the green arrow – the intersection has a brief period when all the lights are red. Traffic engineers call that an all-red time, and that’s when the intersection is not serving any vehicles. All-red time is two to three seconds per phase change, and that wasted time adds up quickly, making the intersection even less efficient.
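    A quick back-of-the-envelope sketch shows how that dead time accumulates. It uses the two-to-three-seconds-per-phase-change figure quoted above; the cycle length and phase counts are illustrative assumptions, not figures from the interview.

    ```python
    # Rough estimate of all-red "dead time" at a signalized intersection.
    # The 2.5 s all-red per phase change follows the 2-3 s quoted above;
    # the 90 s cycle and phase counts are assumed for illustration.

    def all_red_per_hour(cycle_s: float, phases: int, all_red_s: float) -> float:
        """Seconds per hour during which the intersection serves no one."""
        cycles_per_hour = 3600 / cycle_s
        return cycles_per_hour * phases * all_red_s

    # Two-phase signal (no left-turn arrow) vs. three phases (arrow added),
    # assuming a 90-second cycle and 2.5 s of all-red per phase change.
    without_arrow = all_red_per_hour(90, 2, 2.5)  # 200 s of dead time per hour
    with_arrow = all_red_per_hour(90, 3, 2.5)     # 300 s of dead time per hour
    print(without_arrow, with_arrow)
    ```

    Under these assumptions, adding a left-turn arrow phase costs the intersection roughly an extra 100 seconds of dead time every hour, before counting the green time the arrow itself consumes.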

    An aerial view of cars traveling around a roundabout.
    Roundabouts reduce the need for left turns, but they don’t work everywhere.
    Pete Ark/Moment via Getty Images

    What restrictions have been tried in different cities?

    Gayah: When a downtown is not very busy – in the off-peak periods – allowing left turns is fine because you don’t need that additional ability to move vehicles at each intersection.

    Some cities are implementing signs that prohibit left turns at intersections from 7 to 9 a.m., the morning peak period, or 4 to 6 p.m., the afternoon peak period. In San Francisco, for example, Van Ness Avenue restricts left turns during peak periods.

    But cities aren’t implementing these restrictions on a larger scale. Restrictions tend to apply along individual corridors or at isolated intersections rather than, where possible, across essentially the entire downtown. Downtown-wide restrictions would make the street network more efficient.

    Roundabouts are one approach to avoiding left turns.

    Gayah: Roundabouts are safe because there’s no longer a need to cross opposing traffic. Everyone circulates in the same direction. You find where you need to go and then exit.

    But restricting left turns, in general, is more efficient. Roundabouts aren’t as efficient when it’s busier: The roundabout can fill up, causing gridlock in which no vehicle can move. Traditional intersections are less prone to gridlock.

    Roundabouts also take up more space. Installing a roundabout might mean expanding the intersection. In some downtowns, that means tearing down buildings or removing sidewalks. Restricting left turns only requires a sign that says “no left turns” or “no left turns during peak periods.” That’s it.

    What are the benefits to banning left turns in urban areas?

    Gayah: Any way you cut it, eliminating left turns will result in longer travel distances. I’ll have to travel a longer distance to get to where I need to go. The worst case is having to circle the block, which means traveling four extra block lengths.

    But not all trips require circling the block. In a typical downtown, each trip will be about one block length longer on average. That’s not a lot of extra distance, and it is more than offset by the fact that each intersection with banned left turns now moves more vehicles – which means you wait less time at each intersection, on average. So you travel a slightly longer distance but get to where you’re going more quickly.
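    The trade-off above can be put in rough numbers. The one-extra-block average and four-block worst case come from the interview; the block length, driving speed and signal-delay savings below are assumptions for illustration only.

    ```python
    # Illustrative trade-off: extra distance from banned left turns vs.
    # reduced waiting at signals. Block length and speed are assumed
    # values, not figures from the interview.

    BLOCK_LENGTH_M = 100.0   # assumed downtown block length, meters
    CITY_SPEED_MPS = 8.3     # assumed driving speed, ~30 km/h

    def extra_travel_time_s(extra_blocks: float) -> float:
        """Added driving time for a given number of extra block lengths."""
        return extra_blocks * BLOCK_LENGTH_M / CITY_SPEED_MPS

    avg_extra = extra_travel_time_s(1)    # average case: ~12 s per trip
    worst_case = extra_travel_time_s(4)   # circling the block: ~48 s

    # If smoother signal operation saves, say, 15 s of delay at each
    # intersection crossed (an assumed figure), a trip through even one
    # intersection already recovers the average extra block of driving.
    print(round(avg_extra, 1), round(worst_case, 1))
    ```

    The point of the sketch is that the added distance is seconds per trip, while signal delay savings accrue at every intersection along the route.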

    Does avoiding left turns improve fuel efficiency?

    Gayah: Our research found that even though vehicles travel longer distances on average with left turns restricted, they use less fuel – about 10% to 15% less per trip – because they don’t stop as much at intersections.

    This is why UPS and other fleets route their vehicles to avoid left turns. There’s less idling and fewer stops.

    Do you think banning left turns could become widely accepted?

    Gayah: It’s a new strategy, so it’s uncomfortable for some people. But when they get to their destination faster, I think people will latch onto it.

    Watch the full interview to hear more.

    SciLine is a free service based at the American Association for the Advancement of Science, a nonprofit that helps journalists include scientific evidence and experts in their news stories.

    The Conversation

    Vikash V. Gayah’s research has been funded by various state departments of transportation (including Pennsylvania, Wisconsin, Washington State, Montana, South Dakota and North Carolina), the U.S. Department of Transportation (via the Mineta National Transit Research Consortium, the Mid-Atlantic Universities Transportation Center, and the Center for Integrated Asset Management for Multimodal Transportation Infrastructure Systems), the Federal Highway Administration, the National Cooperative Highway Research Program, and the National Science Foundation.

    ref. Here’s a way to save lives, curb traffic jams and make commutes faster and easier − ban left turns at intersections – https://theconversation.com/heres-a-way-to-save-lives-curb-traffic-jams-and-make-commutes-faster-and-easier-ban-left-turns-at-intersections-257877