Category: The Conversation

  • MIL-OSI Analysis: Canada’s proposed Strong Borders Act further threatens the legal rights of migrants

    Source: The Conversation – Canada – By Shiva S. Mohan, Research Fellow, Canada Excellence Research Chair in Migration & Integration program, Toronto Metropolitan University

    Canada’s federal government recently introduced the Strong Borders Act, also known as Bill C-2, which proposes tightening migration controls and modernizing border enforcement between Canada and the United States.

    Critics have warned the bill “could pave the way for mass deportations” as well as increase precarity for legal migrants.




    Read more: Why Canada’s Strong Borders Act is as troublesome as Donald Trump’s travel bans


    Even now, under existing laws, a migrant could be “legal” and still be denied health care, lose their job or effectively be unable to leave Canada for fear of being denied re-entry.

    Bill C-2’s expanded enforcement powers and increased risk of status revocation could make these precarities much worse.

    This is already the quiet reality for thousands of migrants in Canada living under “maintained status,” formerly “implied status.” Maintained status is a legal provision designed to protect continuity for temporary residents who apply to extend their permits.

    Maintained status itself is not the problem. On paper, it offers legal protection.

    But in practice, it often collapses because of the ecosystem in which it operates: fragmented institutions, absent co-ordination and lack of transparency.

    Maintained status has been narrowed

    In May 2025, Immigration, Refugees and Citizenship Canada (IRCC) quietly narrowed the scope of maintained status.

    Under the new rules, if a person’s first application is refused while they are on maintained status, any second application submitted during that period is now automatically refused.

    This effectively strips applicants of their legal right to remain in Canada, including the protections of maintained status. The change shows how even compliant migrants can lose status abruptly, further heightening the insecurity built into the system.

    This is a clear expression of complex precarity: a condition in which migrants face legal, economic and social insecurity, even when they follow all the rules.

    Maintained status is just one example of this larger phenomenon of Canadian policy generating hidden forms of exclusion.

    Legal, but not recognized?

    Migrants on maintained status are legally allowed to stay in Canada and continue working or studying under the same conditions as their expired permit. Yet no new permit is issued to confirm this status.

    Proof of this legal standing varies depending on how a person applies. Those who apply online may receive a WP-EXT letter confirming their right to continue working. However, this letter isn’t issued to post-graduation work-permit holders, and it expires after 365 days.

    Paper-based applicants are advised that no such letter will be provided. Instead, they must rely on a copy of their application, a fee payment receipt or courier tracking information to demonstrate continued legal status.

    If no letter is available, or once it expires, IRCC advises applicants to direct employers to the Help Centre web page as proof of their right to remain and work.

    These workarounds are legally valid but fall short of what many employers, landlords and service providers consider adequate proof of status.




    Read more: Canada’s new immigration policy favours construction workers but leaves the rest behind


    The limits of informal proof

    My ongoing research shows how employers following rigid HR protocols often reject informal documentation. Some migrants even obtain letters from immigration lawyers to explain their legal right to remain and work.

    IRCC does not publish public data on the number of people on maintained status or how long they remain in that condition. Some front-line organizations have adjusted their services in response to this gap.

    MOSAIC, for example, a major settlement agency in British Columbia, explicitly lists “migrant workers on maintained status” as eligible for support. This signals institutional recognition of the category.

    The broader situation, however, reflects a disconnect between legal recognition by the state and practical verifiability in everyday life.

    The risk of travel

    Travel while on maintained status is legally permitted only under narrow conditions, such as holding a valid Temporary Resident Visa, being visa-exempt or returning from the U.S. under specific circumstances.

    But even in these cases, leaving Canada terminates maintained status.

    Migrants may be allowed to re-enter as visitors, but they cannot resume work or study until a new permit is issued. This introduces major uncertainties for people who may need to travel for family, emergencies or professional obligations.

    Disparities in provincial health access

    Access to public health insurance during maintained status varies widely across provinces.

    In Ontario, OHIP (Ontario Health Insurance Plan) cards are directly tied to the expiration of work permits. Unless migrants know to proactively request extended coverage and can meet specific document requirements, they risk losing health insurance entirely. Even when eligible, coverage is not automatic and may require out-of-pocket payment pending reimbursement.

    In Québec, RAMQ (Régie de l’assurance maladie du Québec) treats migrants on maintained status like new arrivals. They must reregister for coverage and face a three-month waiting period from the time of renewal, regardless of continuous legal presence.

    In British Columbia, by contrast, the MSP (Medical Services Plan) offers temporary coverage for up to six months (extendable) to individuals on maintained status, provided they previously held MSP and submit IRCC receipt proof.

    The contrast highlights how uneven provincial co-ordination amplifies the precarity created by federal policy.

    Infrastructure is needed immediately

    Migrants face great risks on maintained status.

    Despite investments in automation and digital infrastructure, IRCC continues to experience chronic processing delays, leaving migrants in prolonged uncertainty: legally present, but practically unrecognized.

    To address this, Canada needs systems and resources designed to uphold legal recognition in daily life. It needs to:

    • Create a secure centralized portal that allows migrants to control who can verify their legal status in real time. The U.K.’s share code platform and the American myE-Verify system provide clear examples of how this can work, reducing confusion for employers, landlords, and service providers. (A simplified sketch of how such a portal might work follows this list.)

    • Issue co-ordinated provincial guidance, particularly regarding access to essential services such as health care, so that front-line staff have clarity on migrants’ rights under maintained status.

    • Protect continuity of status after international travel, ensuring that those who leave Canada while on maintained status do not lose the ability to return and resume work or study.
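
    The first recommendation above describes a verification portal. Below is a minimal sketch of how a share-code-style status check could work, loosely modelled on the U.K. pattern cited earlier. Everything here (the record fields, the code format, the 30-day expiry) is an illustrative assumption, not a description of any existing IRCC, U.K. or U.S. system.

    ```python
    # Illustrative sketch only: a share-code-style status check. Field names,
    # code format and expiry are assumptions, not any real system's API.
    import secrets
    from datetime import datetime, timedelta, timezone

    _CODES = {}  # share code -> (status record, expiry time)

    def issue_share_code(status_record: dict, valid_days: int = 30) -> str:
        """Migrant generates a short-lived code to hand to a verifier."""
        code = secrets.token_urlsafe(6)  # short random code, e.g. 'W9rB3xQz'
        expiry = datetime.now(timezone.utc) + timedelta(days=valid_days)
        _CODES[code] = (status_record, expiry)
        return code

    def verify_share_code(code: str):
        """Employer or landlord redeems the code; returns status while valid."""
        entry = _CODES.get(code)
        if entry is None:
            return None
        record, expiry = entry
        if datetime.now(timezone.utc) > expiry:
            del _CODES[code]  # expired codes stop working automatically
            return None
        return record

    # Example: a worker on maintained status shares proof with an employer.
    code = issue_share_code({"status": "maintained", "may_work": True})
    print(verify_share_code(code))  # an authoritative answer, not a judgment call
    ```

    The design point is that the migrant controls issuance and the verifier receives an authoritative, time-limited answer, rather than having to judge application copies, fee receipts or courier tracking numbers.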

    As Canada advances legislation like Bill C-2, we must not ignore the country’s quiet erosion of its existing legal architecture for migrants.

    Migrants on maintained status have followed the rules.

    If we are serious about building trust in immigration systems, we must commit to infrastructure that is workable, visible and fair.

    Shiva S. Mohan receives funding from the Canada Excellence Research Chair in Migration and Integration Program at Toronto Metropolitan University. He has no other affiliations or financial interests that would benefit from this article.

    ref. Canada’s proposed Strong Borders Act further threatens the legal rights of migrants – https://theconversation.com/canadas-proposed-strong-borders-act-further-threatens-the-legal-rights-of-migrants-259349

    MIL OSI Analysis

  • MIL-OSI Analysis: Colonization devastated biodiversity, habitats and human life in the Pacific Northwest

    Source: The Conversation – Canada – By Meaghan Efford, Postdoctoral Research Fellow, Institute for the Oceans and Fisheries, University of British Columbia

    Burrard Inlet, known traditionally as səl̓ilwəɬ (Tsleil-Wat) in the hən̓q̓əmin̓əm̓ language, has been the heart of the traditional, ancestral and unceded territory of the səl̓ilwətaɬ (Tsleil-Waututh Nation) since time immemorial.

    An image of part of Burrard Inlet and the City of Vancouver taken from the International Space Station in April 2022.
    (NASA)

    The inlet is a water system that wraps through and around what we know today as the city of Vancouver on the coast of British Columbia. The ecosystem provides essential habitat for species like Pacific herring, Pacific salmon and harbour seals.

    Burrard Inlet is also host to many commercial, industrial and urban developments and interests. This includes the Port of Vancouver, one of the largest marine ports in Canada and the terminal end of the Trans Mountain Pipeline. Today, more than 2.5 million people call the area home and it’s a popular tourism spot.

    This is relatively new, however. Colonization and urbanization have caused intense change and damage since Europeans first settled in the area around 1792, with most changes occurring since the 1880s.

    Through a collaborative research project between the Tsleil-Waututh Nation, the University of British Columbia, engineering consultant firm Kerr Wood Leidal and Mitacs Canada, we assessed the impact of colonization on the Burrard Inlet ecosystem since Europeans first settled in the area.

    When we look at the cumulative effects of specific events, we are adding the individual impacts of each event together to get a fuller picture of how colonialism impacted the ecosystem.

    How we tracked change over time

    We chose four sources of stress to the ecosystem to assess for this research:

    1) The impact of smallpox on the ancestral Tsleil-Waututh population and the resulting health of the inlet.

    2) The impact of settler fisheries, including Pacific salmon and Pacific herring.

    3) The impact of settler hunting on land animals, including deer.

    4) The impact of urbanization on the health of the ecosystem.

    We used ecosystem modelling software called Ecopath with Ecosim to model how these events impacted the inlet ecosystem between 1750 and 1980. We found there was a significant decrease in biomass (how much of a given organism is in an ecosystem) and available habitat.
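
    For orientation, Ecopath models are built around a mass-balance constraint: each group’s production must be accounted for by predation, harvests and other fates. A textbook form of the master equation, with symbols as commonly defined in the general Ecopath with Ecosim literature (shown here for context, not reproduced from our model), is:

    ```latex
    % Textbook Ecopath mass balance for each modelled group i:
    %   B = biomass, P/B = production rate, EE = ecotrophic efficiency,
    %   Y = fishery/harvest removals, Q/B = consumption rate,
    %   DC_ji = fraction of prey i in the diet of predator j,
    %   E = net emigration, BA = biomass accumulation
    B_i \left(\frac{P}{B}\right)_i EE_i =
        Y_i + \sum_j B_j \left(\frac{Q}{B}\right)_j DC_{ji} + E_i + BA_i
    ```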

    We focused on 12 animal groups based on another collaborative project that focused on traditional Tsleil-Waututh diets.

    To do this, we drew on multiple sources of data, including Tsleil-Waututh traditional ecological knowledge, archeological data, historical and archival work and ecological resources.

    By combining these different sources of information, we can address gaps in each data source and weave together information to paint a fuller picture of ecological change over time.

    An aerial photo of the Burrard Inlet’s North Shore and the Maplewood Mudflats taken by a Tsleil-Waututh field survey team by drone during a kelp survey in August 2020.
    (Tsleil-Waututh Nation)

    What we found

    Our research highlights how shoreline change from events like the construction of the Port of Vancouver resulted in the loss of more than half of the intertidal habitat that clams, crabs, birds and fish rely on.

    Along with over-harvesting, this has resulted in a dramatic population decline for these species. Clams and other bivalves have also become unsafe to eat due to pollution.

    Over-fishing has been a huge problem. Forage fish, including Pacific herring, eulachon, surf smelt and Northern anchovy, collectively experienced a 99 per cent decline in biomass.

    Pacific herring was completely wiped out by dynamite fishing, and only recently returned.

    Pink salmon and chum salmon both experienced more than 40 per cent losses in biomass due to over-fishing. White sturgeon were almost wiped out.

    Mammals didn’t fare any better: three-quarters of the deer and elk populations and over one-quarter of the harbour seal population in the area around the inlet were lost to hunting.

    Smallpox had a devastating effect on Salish communities throughout the region. The loss of lives caused dramatic change in the ecosystem because it significantly reduced how much food was taken out of the ecosystem.

    The smallpox epidemics only scratch the surface of how colonization impacted Indigenous lives. Other events that we didn’t include in the model — like the Residential School system and the Reserve System, for example — severely limited or criminalized stewardship activities that Tsleil-Waututh and other Nations have been using to take care of their territory for millennia.

    Tsleil-Waututh stewardship and sovereignty

    Tsleil-Waututh people are specialists in managing and stewarding the marine, tidal and terrestrial resources of the inlet’s ecosystem. Tsleil-Waututh salmon stewardship sustainably maintained a chum salmon fishery for almost 3,000 years.

    The research questions, priorities and direction of our project were established through frequent collaborative meetings. This approach ensured Tsleil-Waututh co-authors and colleagues were involved in every step of the research.

    This kind of community-driven work is complex. It is also incredibly valuable for understanding ecosystem change over time. Without the leadership and knowledge of Tsleil-Waututh knowledge-holders, this research would have had massive data and knowledge gaps and the work would have much less significance.

    This is an example of transdisciplinary research: research that is interdisciplinary, drawing on multiple disciplines for data and methods, and grounded in community from the beginning.

    Our research shows that colonialism has had a devastating impact on habitats and biodiversity in and around Burrard Inlet. This is not just an ecological story, but a human story that speaks to the wide-reaching impacts of colonization. It is an intertwined story that shows how harmful colonization and rapid urbanization can be, both to humans and to the ecosystems we call home.

    Meaghan Efford received funding from Mitacs Canada through a collaborative project with Tsleil-Waututh Nation.

    ref. Colonization devastated biodiversity, habitats and human life in the Pacific Northwest – https://theconversation.com/colonization-devastated-biodiversity-habitats-and-human-life-in-the-pacific-northwest-260791

    MIL OSI Analysis

  • Starmer’s suspension of ‘rebel’ MPs risks alienating his party in a way he can’t afford

    Source: The Conversation – UK – By Tony McNulty, Lecturer/Teaching Fellow, British Politics and Public Policy, Queen Mary University of London

    Starmer has removed the whip from four ‘persistent rebel’ MPs. Flickr/UK Parliament, CC BY-NC-ND

    Political parties with commanding parliamentary majorities are often tempted by the promise of assertive leadership and decisive action. Yet, as the events of the last few weeks reveal, a large majority is no substitute for the subtler arts of political management, party cohesion and narrative discipline.

    Missteps like suspending four MPs and sacking three trade envoys are not isolated misjudgements but symptomatic of deeper issues within Labour’s approach to internal governance. These are issues that need to be addressed if this government is to make the difference needed.

    At the centre of the week’s controversies sits the leader’s decision to discipline members of his own parliamentary party. On the surface, such acts might be interpreted as “factional authoritarianism” – a heavy-handed display to quell rebellion. But it is more probably rooted in clumsy party management and weakness.


    This is especially true given Labour’s comfortable majority, which is currently around 160. It is reasonable to expect a majority party to exude a certain confidence and to practise tolerance for internal debate. It knows, after all, that a handful of dissenters pose no existential threat to the government’s legislative agenda. Instead, the government appears brittle, hyper-sensitive to criticism, and more interested in enforcing unity than fostering meaningful dialogue.

    The consequences are not trivial. Rather than projecting an image of strength and competence, the government gives the impression of insecurity and control for its own sake. The sacking of trade envoys – posts which previously were barely known or understood by the public – appears to many as petty and vindictive. The broader public takeaway is not about Labour’s policy on trade or any other issue, but about its willingness to punish internal dissent.

    Lost narrative and missed opportunities

    A parallel failure lies in the government’s continuing inability to control or shape the public narrative. Just days before the prime minister decided to suspend his rebels, the government announced £500m for a “better futures fund” to support vulnerable children and families. This could have been a bold declaration of intent for the new government. It could have been a huge win. Yet, it was disconnected from any overarching narrative and proved yet another missed opportunity to champion a new direction for the party and the country.

    Instead, media and public attention shifted immediately to the suspensions and sackings, drowning out any potential positive coverage of the government’s messaging. The chancellor’s Mansion House speech – an annual opportunity to set the agenda – fell similarly flat. Rachel Reeves received only insipid headlines before being entirely overshadowed.

    Neil Duncan-Jordan, one of the suspended MPs.
    Flickr/UK Parliament, CC BY-NC-ND

    The government’s inability to sequence and frame its positive announcements, and to anticipate how punitive actions would dominate the news cycle, requires urgent attention. It is not enough to make policy announcements; there must be a coherent story that MPs and the public alike can follow.

    Rebellion, dissent and party discipline

    The rebellion that sparked this drama was not led by perennial troublemakers, but by a group of select committee chairs who are experienced, respected parliamentarians and not easily dismissed as the “usual suspects.” When the government gutted its own benefits bill to quell the backlash, a majority of rebels indeed relented. Only Rachael Maskell (one of the four MPs now suspended) and 46 others persisted in voting against the bill at third reading.

    Rachael Maskell, now suspended, speaking in parliament in March.
    Flickr/UK Parliament, CC BY-NC-ND

    Was this really worthy of suspension, especially so early in a new parliamentary session? The government’s justification rests on the need for discipline – that rebels should “play ball” after exacting concessions. But this only works when both government and rebels understand and respect the same rules.

    The claim is that the four rebels and three MPs who lost envoy status are persistent rebels, but this is an overreaction. In either case, it is clear the backbenchers felt ignored and undervalued, and that the government failed to take their concerns seriously in the first place.

    There is a sense that Labour’s leadership is more interested in enforcing conformity than in building consensus. A true show of strength would be to sit down and discuss with colleagues how differing views can be accommodated, and to have some confidence in your argument and build a narrative around it.

    Several warnings about internal unrest were ignored. The Whips Office flagged issues around poverty, pensions, and benefit reform, but these concerns were sidelined by Number 10. Ministers called for a broader anti-poverty strategy but again found themselves ignored. Select committee chairs, who tried for months to initiate constructive dialogue, were only heard in the final days before the bill’s debate.

    External threats

    Labour’s majority, while impressive, is based on fragile foundations. It won with only a 34% share of the vote. Many of the newly elected MPs are inexperienced and hold wafer-thin majorities. A 5% swing against Labour would see more than 100 MPs lose their seats. External threats – an ascendant Reform UK, a possible Corbynista party, and the consolidation of the Liberal Democrats and Greens – compound the sense of fragility.

    In this context, disciplining a handful of MPs as some sort of show of strength to keep putative rebels in line is not going to work. The government cannot afford to alienate its own MPs.

    Labour’s early weeks in government provide a cautionary tale about the risks of prioritising discipline over dialogue, and of losing sight of the narrative that should bind the party and its supporters together. Most Labour MPs want the government to succeed, but early heavy-handedness breeds resentment and undermines unity just when it is most needed.

    True political strength lies not in the ability to punish dissent, but in the confidence to accommodate it – building a compelling story that inspires loyalty rather than demands it.

    If the government wants its MPs to sing from the same song sheet, it must first establish the melody. The significant achievements of this government – £40 billion more on public services, international trade deals, infrastructure investment, renters’ and workers’ rights, energy initiatives, advances in the living wage, and free school meals – can only resonate if they are woven into a story that MPs and the public can share.

    The lesson is clear: discipline without narrative and command without consensus are recipes for internal discord and political decline.


    Tony McNulty is a member of the Labour Party.

    ref. Starmer’s suspension of ‘rebel’ MPs risks alienating his party in a way he can’t afford – https://theconversation.com/starmers-suspension-of-rebel-mps-risks-alienating-his-party-in-a-way-he-cant-afford-261339


  • MIL-OSI Analysis: When grief involves trauma − a social worker explains how to support survivors of the recent floods and other devastating losses

    Source: The Conversation – USA (3) – By Liza Barros-Lane, Assistant Professor of Social Work, University of Houston-Downtown

    Rain falls over a makeshift memorial for flood victims along the Guadalupe River on July 13, 2025, in Kerrville, Texas. AP Photo/Eric Gay

    The July 4, 2025, floods in Kerr County, Texas, swept away children and entire families, leaving horror in their wake. Days later, flash floods struck Ruidoso, New Mexico, killing three people, including two young children.

    These are not just devastating losses. When a death is sudden or violent, or when a body is never recovered, grief gets tangled up with trauma.

    In these situations, people don’t only grieve the death. They struggle with the terror of how it happened, the unanswered questions and the shock etched into their bodies.

    I’m a social work professor, grief researcher and the founder of The Young Widowhood Project, a research initiative aimed at expanding scholarship and public understanding of premature spousal loss.

    I was widowed when I was 36. In July 2020, my husband, Brent, went missing after testing a small, flat-bottomed fishing boat called a Jon boat. His body was recovered two days later, but I never saw his remains.

    Both my personal loss and professional work have shown me how trauma changes the grieving process and what kind of support actually helps.

    To understand how trauma can complicate grief, it’s important to first understand how people typically respond to loss.

    Grief isn’t a set of stages

    Many people still think of grief through the lens of psychiatrist Elisabeth Kübler-Ross’ five stages of grief, popularized in the early 1970s: denial, anger, bargaining, depression and acceptance.

    But in fact, this model was originally designed for people facing their own deaths, not for mourners. In the absence of accessible grief research in the 1960s, it became a leading framework for understanding the grieving process – even though it wasn’t meant for that.

    Despite this misapplication, the stages model has shaped cultural expectations: namely, that grief ends once people reach the “acceptance” stage. But research doesn’t support this idea. Trying to force grief into this model can cause real harm, leaving mourners feeling they’re grieving “wrong.”

    In reality, mourning is often lifelong. Most people go through an acute period of overwhelming pain right after the loss. This is usually followed by integrated grief, where the pain softens but the loss is still part of everyday life, returning in waves.

    Although grief is unique to each person and relationship, researchers have found that mourners often strive to a) make sense of the death; b) adjust to a world without their loved one; c) form an ongoing connection with their deceased loved one in new ways; and d) figure out who they are without their loved one.

    It’s difficult and at times disorienting work, but most people find ways to carry their grief and keep living.

    Julia Mora embraces her granddaughter, Isla Meyer, during a vigil for Texas flood victims on July 11, 2025.
    AP Photo/Gerald Herbert

    When grief and trauma collide

    However, some losses carry an extra layer of pain, confusion and trauma.

    Sudden, unexpected, accidental, violent or deeply tragic deaths – like those experienced during the recent floods – can lead to what researchers call traumatic bereavement: grief that is disrupted by the traumatic nature of the death.

    People experiencing traumatic bereavement often endure a longer and more intense acute grief period. They may be haunted by disturbing images, nightmares or relentless thoughts about how their loved one died or suffered. Many wrestle with dread, spiritual disorientation and a shattered sense of safety in the world.

    Some of these deaths are also considered “ambiguous” – unclear or unconfirmed loss – such as when a body is never recovered or is too damaged to view. Without physical confirmation, mourners often feel stuck in disbelief and helplessness.

    This was true in my case. Not seeing my husband’s body left a part of me suspended between knowing and not knowing. I knew he had died but couldn’t fully believe it, no matter how much I lived with the reality of his absence. For a long time, I caught myself repeating these words every morning: “Brent is dead. Brent is dead.”

    In many cases, these reactions aren’t short term. Many people affected by traumatic loss remain overwhelmed and sometimes physically and emotionally impaired for years. Symptoms may taper over time, but they rarely disappear entirely.

    Supporting mourners

    Traumatic bereavement can feel unbearable. Many mourners struggle with intense, long-lasting reactions that can leave them feeling helpless, altered or even unrecognizable to themselves. They may appear withdrawn, forgetful or emotionally drained because their systems are overwhelmed. Coping can look messy or self-destructive, but these are often survival strategies, not conscious choices.

    I’ve also seen how those same struggles become more survivable when mourners don’t have to carry them alone. If you’re supporting someone through traumatic loss, here are three ways to help.

    • Make space for the horror. Listen without flinching. Acknowledge the full weight of what happened and how terrifying and unjust the loss was. This means saying things like, “This should never have happened,” or “What you went through is beyond words.” It means staying present when the mourner speaks about what haunts them. Let them know they don’t have to carry this alone. You may feel the urge to say something hopeful such as, “At least the body was recovered,” but there is no silver lining in these cases. Instead, say: “There’s nothing I can say to fix this, but I’m not going anywhere.”

    • Help them find others who can understand. Trauma can be isolating. Mourners often feel uniquely overwhelmed or confused. Support groups, peer companions and therapists trained in treating grief and trauma can offer the kind of recognition and validation that even the most devoted friend may not be able to provide.

    • Take care of yourself, too. Being present for someone in deep grief takes energy, especially if you were personally affected by the loss. Stay connected to replenishing people, practices and routines. If you don’t, you may begin to experience trauma, too. Taking care of yourself will help you remain grounded so that you can show up.

    I believe supporting someone through traumatic bereavement is one of the most meaningful things you can do. You don’t need perfect words or a plan. What sustains them won’t be advice or solutions, but your simple, powerful act of staying.

    Liza Barros-Lane does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. When grief involves trauma − a social worker explains how to support survivors of the recent floods and other devastating losses – https://theconversation.com/when-grief-involves-trauma-a-social-worker-explains-how-to-support-survivors-of-the-recent-floods-and-other-devastating-losses-260908

    MIL OSI Analysis

  • MIL-OSI Analysis: Children living near oil and gas wells face higher risk of rare leukemia, studies show

    Source: The Conversation – USA (3) – By Lisa McKenzie, Associate Professor of Health, Department of Environmental & Occupational Health, University of Colorado Anschutz Medical Campus

    The U.S. has nearly 1 million oil and natural gas wells. Some, like the one here in Commerce City, Colo., are within a few thousand feet of schools and neighborhoods. RJ Sangosti/Getty Images

    Acute lymphocytic leukemia, though rare, is one of the most commonly diagnosed cancers in children. It begins in the bone marrow and progresses rapidly.

    Long-term survival rates exceed 90%, but many survivors face lifelong health challenges. Those include heart conditions, mental health struggles and a greater chance of developing a second cancer.

    Overall cancer rates in the U.S. have declined since 2002, but childhood acute lymphocytic leukemia rates continue to rise. This trend underscores the need for prevention rather than focusing only on treatment for this disease.

    A growing body of literature suggests exposure to the types of chemicals emitted from oil and natural gas wells increases the risk of developing childhood acute lymphocytic leukemia.

    Heavy machinery injects water under the surface of the earth to push oil and natural gas out.
    NurPhoto/Getty Images

    We are environmental epidemiologists focused on understanding the health implications of living near oil and natural gas development operations in Colorado and Pennsylvania. Both states experienced a rapid increase in oil and natural gas development in residential areas beginning in the early 21st century. We’ve studied this issue in these states, using different datasets and some different approaches.

    2 studies, similar findings

    Both of our studies used a case-control design. This design compares children with cancer, known as cases, with children without cancer, known as controls. We used data from statewide birth and cancer registries.
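
    In a case-control design, the strength of an association is usually summarized as an odds ratio: the odds of exposure among cases divided by the odds of exposure among controls. The toy calculation below uses invented counts (illustrative only, not data from either study) to show the arithmetic.

    ```python
    # Toy odds-ratio calculation for a case-control comparison.
    # The counts are invented for illustration; they are NOT study data.
    exposed_cases, unexposed_cases = 60, 345          # children with leukemia
    exposed_controls, unexposed_controls = 180, 1900  # children without cancer

    odds_cases = exposed_cases / unexposed_cases           # odds of exposure, cases
    odds_controls = exposed_controls / unexposed_controls  # odds of exposure, controls

    odds_ratio = odds_cases / odds_controls
    print(f"odds ratio = {odds_ratio:.2f}")  # ~1.84: exposure more common among cases
    ```

    An odds ratio near 2 corresponds to the roughly doubled risks reported in the findings below.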

    We also used specialized mapping techniques to estimate exposure to oil and natural gas development during sensitive time windows, such as pregnancy or early childhood.

    The Colorado study looked at children born between 1992 and 2019. The study included 451 children diagnosed with leukemia and 2,706 children with no cancer diagnosis. It considered how many oil and natural gas wells were near a child’s home and how intense the activity was at each well. Intensity of activity included the volume of oil and gas production and the phase of well production.

    The Colorado study found that children ages 2-9 living in areas with the highest density and intensity of wells within eight miles (13 kilometers) of their home were at least two times more likely to be diagnosed with acute lymphocytic leukemia. Children with wells within three miles (five kilometers) of their home bore the greatest risk.
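
    Exposure metrics of this kind are often constructed as inverse-distance-weighted well counts, so that close, high-activity wells contribute more than distant ones. The sketch below is a minimal version under that assumption; the studies’ exact metrics may differ, and the well data are invented.

    ```python
    # Sketch of an inverse-distance-weighted (IDW) well-activity index around
    # a home. A common style of exposure metric in this literature; the
    # studies' exact formulas may differ, and these wells are invented.
    import math

    def idw_activity_index(home, wells, max_km=13.0):
        """Sum each well's activity level, down-weighted by distance."""
        total = 0.0
        for x, y, intensity in wells:   # well coordinates (km) and activity
            d = math.dist(home, (x, y))
            if 0 < d <= max_km:         # only wells within the study radius
                total += intensity / d  # closer wells weigh more
        return total

    home = (0.0, 0.0)
    wells = [(1.0, 0.5, 3.0), (4.0, 4.0, 5.0), (12.0, 2.0, 8.0)]  # invented
    print(round(idw_activity_index(home, wells), 2))  # 4.22
    ```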

    The Pennsylvania study looked at 405 children diagnosed with leukemia between 2009 and 2017 and 2,080 children without any cancer diagnosis. This study found that children living within 1.2 miles (two kilometers) of oil and natural gas wells at birth were two to three times more likely to be diagnosed with acute lymphocytic leukemia between ages 2 and 7 than those who lived farther than 1.2 miles away.

    The risk of developing leukemia was more pronounced in children who were exposed during their mother’s pregnancy.

    The results of our two studies are also supported by a previous study in Colorado published in 2017. That study found children diagnosed with acute lymphocytic leukemia were four times more likely to live in areas with a high density of oil and natural gas wells than children diagnosed with other cancers.

    Policy implications

    To extract oil and natural gas from underground reserves, heavy drilling equipment injects water and chemicals into the earth under high pressure. Petroleum and contaminated wastewater are returned to the surface. It is well established that these activities can emit cancer-causing chemicals, such as benzene, and other pollutants into the air and water.

    The U.S. is the world’s largest producer of oil and natural gas. There are almost 1 million producing wells across the country, and many of these are located in or near residential areas. This puts millions of children at increased risk of exposure to cancer-causing chemicals.

    In the U.S., oil and natural gas development is generally regulated at the state level. Policies aimed at protecting public health include establishing minimum distances between a new well and existing homes, known as a setback distance. These policies also include requirements for emission control technologies on new and existing wells and restrictions on the construction of new wells.

    Setbacks offer a powerful solution to reduce noise, odors and other hazards experienced by communities near oil and gas wells. However, it is challenging to establish a universal setback that optimally addresses all hazards. That’s because noise, air pollutants and water contaminants dissipate at different rates depending on location and other factors.

    In addition, setbacks focus exclusively on where to place oil and natural gas wells but do not impose any restrictions on releases of air pollutants or greenhouse gases. Therefore, they do not address regional air quality issues or mitigate climate change.

    In many U.S. cities, rules set how far oil and gas wells must be from places such as schools and neighborhoods. In this Frederick, Colo., neighborhood, the oil rig is very near houses.
    UGC/Getty Images

    Furthermore, current U.S. setback distances range from just 200 feet to 3,200 feet. Our results indicate that even the largest setback of 3,200 feet (one kilometer) is not sufficient to protect children from an increased leukemia risk.

    Our results support a more comprehensive policy approach that considers both larger setback distances and mandatory monitoring and control of hazardous emissions on both new and existing wells.

    Future research

    More research is needed in other states, such as Texas and California, that have oil and natural gas development in residential areas, as well as on other pediatric cancers.

    One such cancer is acute myeloid leukemia. This is another type of leukemia that starts in bone marrow and rapidly progresses. This cancer has exhibited a strong link to benzene exposure in adult workers in several industries, including the petroleum industry. Researchers have also documented a moderate cancer link for children exposed to benzene from vehicle traffic.

    It remains unclear whether benzene is the culprit or if another agent or combination of hazards is an underlying cause of acute myeloid leukemia.

    Even though questions remain, we believe the existing evidence coupled with the seriousness of childhood acute lymphocytic leukemia supports enacting further protective measures. We also believe policymakers should consider the cumulative effects from wells, other pollution sources and socioeconomic stressors on children and communities.


    Lisa McKenzie receives funding from the American Cancer Society and the University of Colorado Cancer Center.

    Nicole Deziel receives funding from the American Cancer Society, the National Institutes of Health, and the Yale School of Public Health.

    ref. Children living near oil and gas wells face higher risk of rare leukemia, studies show – https://theconversation.com/children-living-near-oil-and-gas-wells-face-higher-risk-of-rare-leukemia-studies-show-252994

    MIL OSI Analysis

  • MIL-OSI Analysis: Data can show if government programs work or not, but the Trump administration is suppressing the necessary information

    Source: The Conversation – USA (2) – By Sarah James, Assistant Professor of Political Science, Gonzaga University

    Do government programs work? It’s impossible to find out with no data. Andranik Hakobyan/iStock via Getty Images Plus

    The U.S. has the highest rate of maternal mortality among developed nations. Since 1987, the Centers for Disease Control and Prevention has administered the Pregnancy Risk Assessment Monitoring System to better understand when, where and why maternal deaths occur.

    In April 2025, the Trump administration put the staff in charge of collecting and tracking this data on leave.

    It’s just one example of how the administration is deleting and disrupting American data of all kinds.

    The White House is also collecting less information about everything from how many Americans have health insurance to the number of students enrolled in public schools, and making government-curated data of all kinds off-limits to the public. President Donald Trump is also trying to get rid of entire agencies, like the Department of Education, that are responsible for collecting important data tied to poverty and inequality.

    His administration has also begun deleting websites and repositories that share government data with the public.

    Why data is essential for the safety net

    I study the role that data plays in political decision-making, including when and how government officials decide to collect it. Through years of research, I’ve found that good data is essential – not just for politicians, but for journalists, advocates and voters. Without it, it’s much harder to figure out when a policy is failing, and even more difficult to help people who aren’t politically well connected.

    Since Trump was sworn in for a second time, I have been keeping an eye on the disruption, removal and defunding of data on safety net programs such as food assistance and services for people with disabilities.

    I believe that disrupting data collection will make it harder to figure out who qualifies for these programs, or what happens when people lose their benefits. I also think that all this missing data will make it harder for supporters of safety net programs to rebuild them in the future.

    Why the government collects this data

    There’s no way to find out whether policies and programs are working without credible data collected over a long period of time.

    For example, without a system to accurately measure how many people need help putting food on their tables, it’s hard to figure out how much the country should spend on the Supplemental Nutrition Assistance Program, formerly known as food stamps, the Special Supplemental Nutrition Program for Women, Infants, and Children, known as WIC, and related programs. Data on Medicaid eligibility and enrollment before and after the passage of the Affordable Care Act in 2010 offers another example. National data showed that millions of Americans gained health insurance coverage after the ACA was rolled out.

    Many institutions and organizations, such as universities, news organizations, think tanks, and nonprofits focused on particular issues like poverty and inequality or housing, collect data on the impact of safety net policies on low-income Americans.

    No doubt these nongovernmental data collection efforts will continue, and maybe even increase. However, it’s highly unlikely that these independent efforts can replace any of the government’s data collection programs – let alone all of them.

    The government, because it takes the lead in implementing official policies, is in a unique position to collect and store sensitive data collected over long periods of time. That’s why the disappearance of thousands of official websites can have very long-term consequences.

    What makes Trump’s approach stand out

    The Trump administration’s pausing, defunding and suppressing of government data marks a big departure from his predecessors.

    As early as the 1930s, U.S. social scientists and local policymakers realized the potential for data to show which policies were working and which were a waste of money. Since then, policymakers across the political spectrum have grown increasingly interested in using data to make government work better.

    This focus on data grew starting in 2001, when President George W. Bush made holding government accountable to measurable outcomes a top priority.

    He saw data as a powerful tool for reducing waste and assessing policy outcomes. His signature education reform, the No Child Left Behind Act, radically expanded the collection and reporting of student achievement data at K-12 public schools.

    President George W. Bush speaks about education in 2005 at a high school in Falls Church, Va., outlining his plans for the No Child Left Behind Act.
    Alex Wong/Getty Images

    How this contrasts with the Obama and Biden administrations

    Presidents Barack Obama and Joe Biden emphasized the importance of data for evaluating the impact of their policies on low-income people, who have historically had little political clout.

    Obama initiated a working group to identify ways to collect, analyze and incorporate more useful data into safety net policies. Biden implemented several of the group’s suggestions.

    For example, he insisted on the collection of demographic data and its analysis when assessing the impacts of new safety net policies. This approach shaped how his administration handled changes in home loan practices, the expansion of broadband access and the establishment of outreach programs for enrolling people in Medicaid and Medicare.

    Why rebuilding will be hard

    It’s harder to make a case for safety net programs when you don’t have relevant data. For example, programs that help low-income people see a doctor, get fresh food and find housing can be more cost-effective than simply having them continue to live in poverty.

    Blocking data collection may also make restoring government funding after a program gets cut or shut down even more challenging. That’s because it will be harder for people who benefited from these programs in the past to persuade their fellow taxpayers that there is a need to invest in expanding a program or creating a new one.

    Without enough data, even well-intended policies in the future may worsen the very problems they’re meant to fix, long after the Trump administration has ended.

    Sarah James does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Data can show if government programs work or not, but the Trump administration is suppressing the necessary information – https://theconversation.com/data-can-show-if-government-programs-work-or-not-but-the-trump-administration-is-suppressing-the-necessary-information-259760

    MIL OSI Analysis

  • MIL-OSI Analysis: College ‘general education’ requirements help prepare students for citizenship − but critics say it’s learning time taken away from useful studies

    Source: The Conversation – USA (2) – By Kelly Ritter, Professor of Writing and Communication, Georgia Institute of Technology

    Students learn about the arts and humanities, social sciences, and science and mathematics in general education. Olga Pankova/Moment via Getty Images

    What do Americans think of when they hear the words “general education”?

    By definition, general education covers introductory college courses in arts and humanities, social sciences, and science and mathematics. It has different names, including core curriculum or distribution requirements, depending on the college or university.

    It is also sometimes called liberal education, including by the American Association of Colleges and Universities, which describes it as providing “a sense of social responsibility, as well as strong and transferable intellectual and practical skills.”

    The liberal label can be fodder for conservative groups who argue that today’s general education is part of an indoctrination into higher education’s purported left-leaning belief systems. Some other conservatives support general education as a concept but want more emphasis on so-called traditional values and less on cross-cultural understanding. These initiatives position general education and college as a space for ideological battles.

    As a scholar of historical connections between literacy and social class, I know that general education was designed to provide opportunity for all students without regard for their political preferences.

    The value of a college education can be shaped by political affiliation.
    bernarddobo/iStock via Getty Images

    An education for all

    Eighty years ago, a group of Harvard University faculty created what many colleges and universities still follow as a template for general education. This plan was outlined in the book “General Education in a Free Society.”

    Harvard’s plan was meant for all students, including veterans studying under the GI Bill, and others we today refer to as first generation, where neither parent had a college degree.

    General education made college more accessible to students who were not becoming doctors or lawyers but who also wanted careers outside the vocational trades. It helped make college a place for educating all citizens, not just students of socioeconomic privilege.

    Expanding access to higher education was central to the 1947 special report Higher Education for American Democracy, commissioned by President Harry Truman. The goal was to provide a foundational education for all, especially in math and science. But the report, commonly known as the Truman Commission Report, also included disciplines that help students understand the world – such as writing and communication, literature, psychology and history.

    The purposes of general education are central to two competing views of college today, views that I also hear expressed by students and parents I’ve met in my 28 years as a professor.

    One view of college is of an on-campus experience steeped in the liberal arts that holistically prepares students to live in a functioning democracy. These benefits are seen as worth the time and costs.

    The other view is of college as a sum of career-focused credentials that can begin and end anywhere, not specific to one college campus. These benefits are completely financial, to be gained via the cheapest, quickest means.

    Both of these views are informed by national perspectives that further divide citizens on higher education as a whole, such as Vice President JD Vance’s 2021 statement that “there was a wisdom in what Richard Nixon said approximately 40, 50 years ago. He said, and I quote, ‘The professors are the enemy.’”

    Both these groups of Americans, however, hope that obtaining a college degree will pay off for graduates who find employment and reach a standard of living better than their parents’ generation.

    For the first group, general education is critical to developing the whole student for jobs and life. For the second, it is an expensive obstacle to that payoff.

    Not surprisingly, these views on education and college often correspond to political party identification and whether a person attended college themselves.

    A July 2023 Lumina Foundation and Gallup Poll showed that only 36% of Americans have a “great deal” of confidence in higher education, with significant partisan differences: 20% of Republicans express that level of confidence, compared with 56% of Democrats and 35% of independents. There are also measurable differences between those who have earned a postgraduate degree and those who have not.

    To cut costs, more students are searching for ways to complete general education requirements before they begin college.
    PeopleImages/E+ via Getty Images

    Questioning value

    As college costs continue to rise in 2025, families are struggling – even taking on payment plans for everyday purchases, also known as phantom debt – to make ends meet.

    General education represents about a third of the requirements of a bachelor’s degree and most of an associate degree.

    For those who see college as a waste of money, general education courses are a calculable loss on future income. In the past two decades, this – and the increasingly competitive admissions process for college – has contributed to a tenfold increase in low-income students who take Advanced Placement courses and a 50% increase since 2021 in the number of students in dual-credit coursework. Both programs allow students to complete general education-equivalent courses for free while still in high school.

    Complete College America, a nonprofit advocacy group that works with states to increase college completion rates, supports these moves by students and parents, classifying general education under “gateway courses” to be completed “as soon as possible.”

    Other groups promote stackable units of credit toward college degrees. This push to complete general education requirements before entering college is gaining momentum, despite studies showing that Advanced Placement classes and exams mostly favor white, middle- to upper-class students, who tend to have more time and resources to devote to AP coursework and to take multiple exams in order to earn college credit.

    For college students, general education can offer benefits beyond career attainment.
    ferrantraite/E+ via Getty Images

    Understanding the world

    While arguments for streamlining college and its costs are evergreen, foundational lessons taught across fields of study are as relevant in 2025 as they were in 1945. The U.S. faces threats to its democracy, is navigating rapid advances in technology, and is adapting to population shifts that will change how its residents live and work.

    General education gives students broad foundational knowledge that can be used in a variety of careers. By design, it teaches an understanding of the world outside one’s own and how to live in it – a core requirement for a functioning democracy.

    Kelly Ritter does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. College ‘general education’ requirements help prepare students for citizenship − but critics say it’s learning time taken away from useful studies – https://theconversation.com/college-general-education-requirements-help-prepare-students-for-citizenship-but-critics-say-its-learning-time-taken-away-from-useful-studies-257083

    MIL OSI Analysis

  • MIL-OSI Analysis: Poll finds bipartisan agreement on a key issue: Regulating AI

    Source: The Conversation – USA – By Adam Eichen, Ph.D. Candidate in Political Science, UMass Amherst

    Are concerns about AI a bridge across the polarization divide? ZargonDesign/iStock via Getty Images

    In the run-up to the vote in the U.S. Senate on President Donald Trump’s spending and tax bill, Republicans scrambled to revise the bill to win support of wavering GOP senators. A provision included in the original bill was a 10-year moratorium on any state law that sought to regulate artificial intelligence. The provision denied access to US$500 million in federal funding for broadband internet and AI infrastructure projects for any state that passed any such law.

    The inclusion of the AI regulation moratorium was widely viewed as a win for AI firms that had expressed fears that states passing regulations on AI would hamper the development of the technology. However, many federal and state officials from both parties, including state attorneys general, state legislators and 17 Republican governors, publicly opposed the measure.

    In the last hours before passage, the Senate stripped the provision from the bill by a resounding 99-1 vote. In an era defined by partisan divides on issues such as immigration, health care, social welfare, gender equality, race relations and gun control, why are so many Republican and Democratic political leaders on the same page on the issue of AI regulation?

    Whatever motivated lawmakers to preserve states’ ability to regulate AI, our recent poll shows that they are aligned with the majority of Americans, who view AI with trepidation, skepticism and fear, and who want the emerging technology regulated.

    Bipartisan sentiments

    We are political scientists who use polls to study partisan polarization in the United States, as well as the areas of agreement that bridge the divide that has come to define U.S. politics. In April 2025, we fielded a nationally representative poll that sought to capture what Americans think about AI, including what they think AI will mean for the economy and society going forward.

    The public is generally pessimistic. We found that 65% of Americans said they believe AI will increase the spread of false information. Fifty-six percent of Americans worry AI will threaten the future of humanity. Fewer than 3 in 10 Americans told us AI will make them more productive (29%), make people less lonely (21%) or improve the economy (22%).

    While Americans tend to be deeply divided along partisan lines on most issues, the apprehension regarding AI’s impact on the future appears to be relatively consistent across Republicans and Democrats. For example, only 19% of Republicans and 22% of Democrats said they believe that artificial intelligence will make people less lonely. Respondents across the parties are in lockstep when it comes to their views on whether AI will make them personally more productive, with only 29% of each party agreeing. And 60% of Democrats and 53% of Republicans said they believe AI will threaten the future of humanity.

    On the question of whether artificial intelligence should be strictly regulated by the government, we found that close to 6 in 10 Americans (58%) agree with this sentiment. Given the partisan differences in support for governmental regulation of business, we expected to find evidence of a partisan divide on this question. However, our data finds that Democrats and Republicans are of one mind on AI regulation, with majorities of both Democrats (66%) and Republicans (54%) supporting strict AI regulation.

    When we take into account demographic and political characteristics such as race, educational attainment, gender identity, income, ideology and age, we again find that partisan identity has no significant impact on opinion regarding the regulation of AI.
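
    “Taking into account” here means estimating a model that controls for all of those characteristics at once. As a rough illustration only – the data and variable names below are invented for this sketch, not the authors’ actual survey – such an analysis might look like this in Python:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical survey data; every column name here is illustrative.
        rng = np.random.default_rng(1)
        n = 1000
        df = pd.DataFrame({
            "support": rng.integers(0, 2, n),      # 1 = supports strict AI regulation
            "party": rng.choice(["Dem", "Rep", "Ind"], n),
            "age": rng.integers(18, 90, n),
            "college": rng.integers(0, 2, n),      # 1 = holds a college degree
            "ideology": rng.normal(0, 1, n),       # liberal-conservative scale
        })

        # If party identity has no independent effect, its coefficients should be
        # small and statistically insignificant once the other traits are held constant.
        model = smf.logit("support ~ C(party) + age + college + ideology", data=df).fit()
        print(model.summary())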

    State of anxiety

    In the years ahead, the debate over AI and the government’s role in regulating it is likely to intensify, on both the state and federal levels. As each day seems to bring new advances in AI’s capability and reach, the future is shaping up to be one in which human beings coexist – and hopefully flourish – alongside AI. This new reality has made the American public, both Democrats and Republicans, justifiably nervous, and our polling captures this widespread trepidation.

    Lawmakers and technology leaders alike could address this anxiety by better communicating the pitfalls and potential of AI, and by taking seriously the concerns of the public. After all, the public is not alone in its trepidation. Many experts in the field also have substantial worries about the future of AI.

    One of the fundamental political questions moving forward, then, will be to what degree regulators put guardrails on this emerging and transformative technology in order to protect Americans from AI’s negative consequences.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Poll finds bipartisan agreement on a key issue: Regulating AI – https://theconversation.com/poll-finds-bipartisan-agreement-on-a-key-issue-regulating-ai-259780

    MIL OSI Analysis

  • MIL-OSI Analysis: Supreme Court justices’ political leanings got a lot more newspaper coverage after the 2016 death of Scalia – and reporters have been mentioning them ever since

    Source: The Conversation – USA – By Joshua Boston, Associate Professor of Political Science, Bowling Green State University

    Reporters used to treat the Supreme Court as a nonpolitical institution, but not anymore. Tetra Images/Getty

    The U.S. Supreme Court has always ruled on politically controversial issues. From elections to civil rights, from abortion to free speech, the justices frequently weigh in on the country’s most debated problems.

    And because of the court’s influence over national policy, political parties and interest groups battle fiercely over who gets appointed to the high court.

    The public typically finds out about the court – including its significant decisions and the politics surrounding appointments – from the news media. While elected officeholders and candidates make direct appeals to their voters, the justices and Supreme Court nominees are different – they largely rely on the news to disseminate information about the court, giving the public at least a cursory understanding.

    Recently, something has changed in newspaper coverage of the Supreme Court. As scholars of judicial politics, political institutions and political behavior, we set out to understand precisely how media coverage of the court has changed over the past 40 years. Specifically, we analyzed the content of every article referencing the Supreme Court in five major newspapers from 1980 to 2023.

    Of course, people get their news from a variety of sources, but we have no reason to believe the trends we uncovered in our research on traditional newspapers do not apply more broadly. Research indicates that alternative media sources largely follow the lead of traditional beat reporters.

    What we found: Politics has a much stronger presence in articles today than in years past, with a notable increase beginning in 2016.

    When public goodwill prevailed

    Not many cases have been more important in the past quarter-century or, from a partisan perspective, more contentious than Bush v. Gore – the December 2000 ruling that stopped a ballot recount, resulting in then-Texas Governor George W. Bush defeating Democratic candidate Al Gore and winning the presidential election.

    Bush v. Gore is particularly interesting to us because nine unelected, life-tenured justices functionally decided an election.

    The New York Times story about the Supreme Court’s decision in Bush v. Gore indicated the justices’ names and votes but neither the party of the president who appointed them nor their ideological leanings.
    Screenshot, The New York Times

    Surprisingly, the court’s public support didn’t suffer, ostensibly because the court had built up a sufficient store of public goodwill.

    One reason public support remained steady following Bush v. Gore might be newspaper coverage. Although the court’s decision reflected the justices’ ideologies, with the more conservative members effectively voting to end the recount and its more liberal members voting in favor of the recount, newspapers largely ignored the role of politics in the decision.

    For example, the New York Times case coverage indicated the justices’ names and their votes but mentioned neither the party of the president who appointed them nor their ideological leanings. The words “Democrat,” “Republican,” “liberal” and “conservative” – what we call political frames – do not appear in the Dec. 13, 2000, story about the decision.

    This epitomizes court-related newspaper articles from the 1980s to the early 2000s, when reporters treated the court as a nonpolitical institution. According to our research, court-related news articles in The New York Times, The Washington Post, Chicago Tribune, Los Angeles Times and The Wall Street Journal hardly used political frames during that time.

    Instead, newspapers perpetuated a dominant belief among the public that Supreme Court decisions were based almost completely on legal principles rather than political preferences. This belief, in turn, bolstered support for the court.

    Recent newspaper coverage reveals a starkly different pattern.

    A contemporary political court

    It would be nearly impossible to read contemporary articles about the Supreme Court without getting the impression that it is just as political as Congress and the presidency.

    Our data show that from 1980 to 2023, the average number of political frames per article tripled. To be sure, politics has always played a role in the court’s decisions. Now, newspapers are making that clear. The question is when this change occurred.

    Across the five major newspapers, reporting about the court has gradually become more political over time. That isn’t surprising: America has been gradually polarizing since the 1980s as well, and the changes in news media coverage reflect that polarization.

    Take February of 2016, when Justice Antonin Scalia unexpectedly died. Of course, justices have died while serving on the court before. But Scalia was a conservative icon, and his death could have swung the court to the center or the left.

    How the politics of naming his successor played out after Scalia’s death was unprecedented.

    President Barack Obama’s effort to put Merrick Garland on the court was stonewalled. The Senate majority leader, Republican Mitch McConnell of Kentucky, said the Senate would not consider any nomination until after the presidential election, nine months after Scalia’s death.

    Republican candidate Donald Trump, seeing an opening, promised to fill the vacancy with a conservative justice who would overturn Roe v. Wade. The court and the 2016 election became inseparable.

    President Barack Obama and first lady Michelle Obama pay respects to Justice Antonin Scalia, whose 2016 death brought lasting change in newspaper coverage of the court.
    Tom Williams/CQ Roll Call via Getty Images

    Scalia vacancy changed everything

    February 2016 brought about an abrupt and lasting change in newspaper coverage. The day before Scalia’s death, a typical article referencing the court used 3.22 political frames.

    The day after, 10.48.

    We see an uptick in political frames if we consider annual changes as well. In 2015, newspapers averaged 3.50 political frames per article about the Supreme Court. Then, in 2016, 5.30.

    Using a variety of statistical methods to identify enduring framing shifts, we consistently find February 2016 as the moment newspapers shifted to higher levels of political framing of the court. We find the number of political frames in newspapers remained elevated through 2023.
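
    The article does not spell out which statistical methods were used, but the general idea is change-point detection: find the date at which the average level of a series shifts. As a minimal sketch, assuming nothing about the authors’ actual approach and using made-up data, a single-breakpoint search can be written like this:

        import numpy as np

        def find_breakpoint(series):
            """Index where splitting the series into two means minimizes
            the total squared error, i.e. a single change point."""
            series = np.asarray(series, dtype=float)
            best_idx, best_sse = None, np.inf
            for i in range(1, len(series)):
                left, right = series[:i], series[i:]
                sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
                if sse < best_sse:
                    best_idx, best_sse = i, sse
            return best_idx

        # Toy monthly data: frames per article hover near 3.5, then jump to about 5.3.
        rng = np.random.default_rng(0)
        frames = np.concatenate([rng.normal(3.5, 0.4, 36), rng.normal(5.3, 0.4, 36)])
        print(find_breakpoint(frames))  # expect a value near 36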

    How stories frame something shapes how people think about it.

    If an article frames a court decision as “originalist” – an analytical approach that says constitutional texts should be interpreted as they were understood at the time they became law – then readers might think of the court as legalistic.

    But if the newspaper were to frame the decision as “conservative,” then readers might think of the court as ideological.

    We found in our study that when people read an article about a court decision using political frames, court approval declines. That’s because most people desire a legal court rather than a political one. No wonder polls today find the court with precariously low public support.

    We do not necessarily hold journalists responsible for the court’s dramatic decline in public support. The bigger issue may be the court rather than reporters. If the court acts politically, and the justices behave ideologically, then reporters are doing their job: writing accurate stories.

    That poses yet another problem. Before Trump’s three court appointments, the bench was known for its relative balance. Sometimes decisions were liberal; other times, conservative.

    In June 2013, the court provided protections to same-sex marriages. Two days earlier, the court struck down part of the Voting Rights Act. A liberal win, a conservative win – that’s what we might expect from a legal institution.

    Today the court is different. For most salient issues, the court supports conservative policies.

    Given, first, the media’s willingness to emphasize the court’s politics, and second, the justices’ ideologically consistent decisions across critical issues, it is unlikely that the news media will retreat from political framing anytime soon.

    If that’s the case, the court may need to adjust to its low public approval.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Supreme Court justices’ political leanings got a lot more newspaper coverage after the 2016 death of Scalia – and reporters have been mentioning them ever since – https://theconversation.com/supreme-court-justices-political-leanings-got-a-lot-more-newspaper-coverage-after-the-2016-death-of-scalia-and-reporters-have-been-mentioning-them-ever-since-259120

    MIL OSI Analysis

  • MIL-OSI Analysis: Philly’s City Council turned down a new rental inspection program − studies show that might harm tenants’ health

    Source: The Conversation – USA – By Gabriel L. Schwartz, Assistant Professor of Health Management and Policy, Drexel University

    Tenants who complain to landlords about housing conditions can risk eviction. Photo Jeff Fusco/The Conversation U.S., CC BY-NC-ND

    As Philadelphia Mayor Cherelle Parker’s US$2 billion housing plan moves forward, heated debates continue about another set of municipal housing proposals that could transform Philadelphia tenants’ rights.

    In June 2025, Philadelphia’s City Council considered three housing bills, collectively known as the Safe Healthy Homes Act. The package was introduced by Nicolas O’Rourke, an at-large council member who belongs to the Working Families Party.

    One of the bills authorized the city to create a fund for tenants to relocate if their buildings are condemned by city inspectors. It was signed into law, though it remains unclear how the fund will be financed.

    The other two bills stalled. One was an ordinance that would broadly strengthen tenants’ rights, and the other – known as the Right to Repairs – would shift how Philadelphia ensures housing is safe for tenants, empowering the city to proactively inspect rentals for housing code violations.

    These bills deal with housing policy, but they’re also matters of public health.

    I know this because I am a researcher in Philadelphia who studies how housing affects health outcomes. In particular, recent research by me and others suggests the fate of the Right to Repairs legislation could have major implications for Philadelphians’ well-being.

    Housing protections today

    To understand this new evidence, it’s important to first understand the system of housing regulations Philadelphia has now, in the absence of the proposed Right to Repairs legislation.

    When a landlord rents an apartment, Pennsylvania law mandates that the apartment be habitable and free of hazards such as mold, cockroaches and dangerous dilapidation.

    This legal principle is known as the “implied warranty of habitability.”

    All 50 states except Arkansas have some kind of policy like this, though they vary in how much they hold landlords responsible for tenants’ safety.

    Under Pennsylvania’s warranty and related municipal law, if conditions deteriorate in a rental property, Philadelphia tenants are first supposed to alert their landlord, who has 30 days to fix the given violation – such as rodents or lead exposure.

    If landlords refuse, however, tenants are in a bind. They could file a complaint with the Department of Licenses and Inspections, which might come and issue a citation. Tenants could also file a lawsuit against their landlord, and they are entitled to withhold rent. But all of these options risk provoking the landlord – at potentially high cost.

    Invoking your warranty rights as a tenant can therefore be tricky. You have to know your rights, document repair requests in writing, and be willing to take your landlord to task legally.

    That’s challenging in a city like Philadelphia, where most renters – outside of a pilot program in some ZIP codes – aren’t guaranteed lawyers in housing court.

    Indeed, nationally, 9 in 10 landlords have lawyers in housing cases, while 9 in 10 tenants do not.

    The stakes are high for tenants. If they complain, they risk eviction – and that’s amid a shortage of affordable housing in Philadelphia and across the country.

    In 2018 alone, according to a local news investigation, Philadelphia landlords filed over 2,000 eviction cases soon after tenants raised habitability issues, despite such retaliatory evictions being illegal. More up-to-date estimates are hard to come by, as these illegal evictions are not systematically tracked.

    Tenants have little choice. Philadelphia does not require that an apartment pass an inspection before the city issues rental licenses or certificates of rental suitability. If housing violations arise, it’s on tenants to assert and defend their rights.

    Philadelphia City Council member Nicolas O’Rourke introduced a housing legislation package guided by three rights – the right to safety, the right to repairs and the right to relocation. Only the right to relocation bill was passed.
    Lisa Lake for MoveOn via Getty Images

    Do habitability laws work?

    Housing quality protections for tenants, in other words, largely boil down to implied warranties of habitability, plus associated fines the city can issue. But this works only if tenants are able to properly document violations, submit complaints and defend themselves from the blowback.

    Despite warranties forming the backbone of Philadelphia’s housing quality governance system – and concerns that these laws saddle tenants with unreasonable enforcement responsibilities – little is known about whether warranties are even effective. Do they keep tenants from getting sick due to poor housing conditions?

    To find out, fellow researchers and I examined what happened when nine states enacted implied warranty of habitability laws like the one in place in Pennsylvania today. We wanted to know whether renters’ health improved after warranty policies were enacted, compared with other states where such laws didn’t go into effect over the same period.

    We also used homeowners as a control group, comparing whether renters’ health uniquely improved when these laws were enacted. Homeowners are useful here because we wouldn’t expect homeowners’ health to be affected by these laws.

    Our findings were stark: We found no improvements for renters at all, across a slew of housing-related health outcomes, even 10 years after enactment.

    There were no effects on renters’ asthma, respiratory allergies, bronchitis, mental health, hospitalizations, or even less clinical outcomes such as self-rated health.

    To be clear, implied warranties of habitability are important laws and are surely helpful for individual tenants. Broadly speaking, however, our findings suggest that these policies simply don’t work.

    That is likely especially true in Pennsylvania, a state whose implied warranty of habitability was given an F- by researchers who evaluated the comprehensiveness of states’ policies for protecting tenants’ well-being.

    A 2014 study in neighboring New Jersey helps shed light on why these policies fall short.

    Researchers there examined 40,000 eviction cases, looking for whether tenants successfully raised implied warranty of habitability violations as a defense. Given how often landlords retaliate after violation complaints are made, one might expect thousands of tenants party to these lawsuits to have invoked their warranty rights.

    The result? Only 80 tenants did so – 80 out of 40,000.

    In practice, then, existing data paints a bleak picture: The vast majority of tenants lack the financial resources, legal knowledge, alternative housing options or freedom from fear necessary to protect themselves from unsafe conditions at home.

    Proactive rental inspections show more success

    What policies might work instead? Cities such as Rochester, New York, may provide an answer.

    In 2005, Rochester implemented a more proactive rental inspection program to combat its child lead-poisoning crisis – a problem Philadelphia shares.

    This meant that Rochester’s municipal inspectors began proactively inspecting rental units on a regular basis and issuing fines for any violations they found. Tenants did not have to file a complaint and therefore weren’t forced into adversarial disputes with their landlords.

    The results were dramatic. By 2012, childhood lead poisoning in Rochester had dropped by 85%. This decline was nearly 2.5 times faster than in the rest of New York state.

    Further, scientists found that units inspected every three years had one-third the rate of housing code violations of units inspected every six years.

    Whether the Right to Repairs is good policy for Philadelphia is a question for city legislators. But the research is increasingly clear: The city’s current housing policies do not protect tenants from unsafe housing, while proactive rental inspections show real promise for fighting persistent housing-related health problems.

    Read more of our stories about Philadelphia.

    Gabriel L. Schwartz’s research described in this article was funded through a pilot grant from the UCSF Benioff Homelessness and Housing Initiative. UCSF had no role in the design, completion, or reporting of that study. The views expressed in this article solely represent the scientific opinion of the author, and do not necessarily represent the opinion of either UCSF or his employer.

    ref. Philly’s City Council turned down a new rental inspection program − studies show that might harm tenants’ health – https://theconversation.com/phillys-city-council-turned-down-a-new-rental-inspection-program-studies-show-that-might-harm-tenants-health-260266

    MIL OSI Analysis

  • MIL-OSI Analysis: Research replication can determine how well science is working – but how do scientists replicate studies?

    Source: The Conversation – USA – By Amanda Kay Montoya, Associate Professor of Psychology, University of California, Los Angeles

    Some research teams work on replicating prior studies to assess the value of a body of work. AzmanL/E+ via Getty Images

    Back in high school chemistry, I remember waiting with my bench partner for crystals to form on our stick in the cup of blue solution. Other groups around us jumped with joy when their crystals formed, but my group just waited. When the bell rang, everyone left but me. My teacher came over, picked up an unopened bag on the counter and told me, “Crystals can’t grow if the salt is not in the solution.”

    To me, this was how science worked: What you expect to happen is clear and concrete. And if it doesn’t happen, you’ve done something wrong.

    If only it were that simple.

    It took me many years to realize that science is not just some series of activities where you know what will happen at the end. Instead, science is about discovering and generating new knowledge.

    Now, I’m a psychologist studying how scientists do science. How do new methods and tools get adopted? How do changes happen in scientific fields, and what hinders changes in the way we do science?

    One practice that has fascinated me for many years is replication research, where a research group tries to redo a previous study. As with the crystals, getting the same result from different teams doesn’t always happen. And when you’re on the team whose crystals don’t grow, you don’t know whether the study failed because the theory is wrong or because you forgot to put the salt in the solution.

    The replication crisis

    A May 2025 executive order by President Donald Trump emphasized the “reproducibility crisis” in science. While replicability and reproducibility may sound similar, they’re distinct.

    Reproducibility is the ability to use the same data and methods from a study and reproduce the result. In my editorial role at the journal Psychological Science, I conduct computational reproducibility checks where we take the reported data and check that all the results in the paper can be reproduced independently.

    But we’re not running the study over again, or collecting new data. While reproducibility is important, research that is incorrect, fallible and sometimes harmful can still be reproducible.
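
    To make the distinction concrete, a computational reproducibility check can be as simple as recomputing a published number from the shared data and confirming it matches. This is a minimal sketch with invented data and an invented “reported” value, not the journal’s actual workflow:

        import pandas as pd

        # Stand-in for the data shared alongside a paper (values are hypothetical).
        df = pd.DataFrame({"reaction_time": [401.2, 415.8, 419.9]})

        # Recompute a statistic exactly as the paper describes it ...
        recomputed = df["reaction_time"].mean()

        # ... and check it against the value printed in the paper.
        reported = 412.3  # hypothetical published value
        assert abs(recomputed - reported) < 0.05, "reported result did not reproduce"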

    By contrast, replication is when an independent team repeats the same process, including collecting new data, to see if they get the same results. When research replicates, the team can be more confident that the results are not a fluke or an error.

    Reproducibility and replicability are both important, but have key differences.
    Open Economics Guide, CC BY

    The “replication crisis,” a term coined in psychology in the early 2010s, has spread to many fields, including biology, economics, medicine and computer science. Failures to replicate high-profile studies concern many scientists in these fields.

    Why replicate?

    Replicability is a core scientific value: Researchers want to be able to find the same result again and again. Many important findings are not published until they are independently replicated.

    In research, chance findings can occur. Imagine if one person flipped a coin 10 times and got two heads, then told the world that “coins have a 20% chance of coming up heads.” Even though this is an unlikely outcome – about 4% – it’s possible.
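
    That “about 4%” is simply the binomial probability of getting exactly two heads in 10 fair flips, which is easy to verify:

        from math import comb

        # P(exactly 2 heads in 10 fair flips) = C(10, 2) * 0.5**10
        p = comb(10, 2) * 0.5 ** 10
        print(f"{p:.4f}")  # 0.0439, i.e. about 4%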

    Replications can correct these chance outcomes, as well as scientific errors, to ensure science is self-correcting.

    For example, in the search for the Higgs boson, two experimental collaborations at CERN, the European Organization for Nuclear Research – ATLAS and CMS – independently replicated the detection of a particle with a distinctively large mass, leading to the 2013 Nobel Prize in physics.

    The ATLAS experiment at the Large Hadron Collider at CERN is one of two that led to the discovery of the Higgs boson.
    CERN, CC BY

    The initial measurements from the two centers actually estimated the mass of the particle as slightly different. So while the two centers didn’t find identical results, the teams evaluated them and determined they were close enough. This variability is a natural part of the scientific process. Just because results are not identical does not mean they are not reliable.

    Research centers like CERN have replication built into their process, but this is not feasible for all research. For projects that are relatively low cost, the original team will often replicate their work prior to publication – but doing so does not guarantee that an independent team could get the same results.

    Because the results on vaccine efficacy were so clear, replication wasn’t necessary and would have slowed the process of getting the vaccine to people.
    XKCD, CC BY-NC

    When projects are costly, urgent or time-specific, independently replicating them prior to disseminating results is often not feasible. Remember when people across the country were waiting for a COVID-19 vaccine?

    The initial Pfizer-BioNTech COVID-19 vaccine took 13 months from the start of the trial to authorization from the Food and Drug Administration. The results of the initial study were so clear and convincing that a replication would have unnecessarily delayed getting the vaccine out to the public and slowing the spread of disease.

    Since not every study can be replicated prior to publication, it’s important to conduct replications after studies are published. Replications help scientists understand how well research processes are working, identify errors and self-correct. So what’s the process of conducting a replication?

    The replication process

    Researchers could independently replicate the work of other teams, as at CERN, and that does happen. But when there are only two studies – the original and the replication – it’s hard to know what to do when they disagree. For that reason, replications are often conducted by large multigroup teams in which every group runs the same study.

    Alternatively, if the purpose is to estimate the replicability of a body of research – for example, cancer biology – each team might replicate a different study, and the focus is on the percentage of studies that replicate across many studies.

    These large-scale replication projects have arisen around the world and include ManyLabs, ManyBabies, the Psychological Science Accelerator and others.

    Replicators start by learning as much as possible about how the original study was conducted. They can collect details about the study from reading the published paper, discussing the work with its original authors and consulting online materials.

    The replicators want to know how the participants were recruited, how the data was collected and using what tools, and how the data was analyzed.

    But sometimes, studies may leave out important details, like the questions participants were asked or the brand of equipment used. Replicators have to make these difficult decisions themselves, which can affect the outcome.

    Replicators also often explicitly change details of the study. For example, many replication studies are conducted with larger samples – more participants – than the original study, to ensure the results are reliable.

    Registration and publication

    Sadly, replication research is hard to publish: Only 3% of papers in psychology, less than 1% in education and 1.2% in marketing are replications.

    If the original study replicates, journals may reject the paper because there is no “new insight.” If it doesn’t replicate, journals may reject the paper because they assume the replicators made a mistake – remember the salt crystals.

    Because of these issues, replicators often use registration to strengthen their claims. A preregistration is a public document describing the plan for the study. It is time-stamped to before the study is conducted.

    This type of document improves transparency by making changes in the plan detectable to reviewers. Registered reports take this a step further: the research plan itself is peer reviewed before the study is conducted.

    If the journal approves the registration, it commits to publishing the paper regardless of the results. Registered reports are ideal for replication research because the reviewers don’t know the results when the journal commits to publishing, so whether the study replicates or not won’t affect whether it gets published.

    About 58% of registered reports in psychology are replication studies.

    Replication research often uses the highest standards of research practice: large samples and registration. While not all replication research is required to use these practices, those that do contribute greatly to our confidence in scientific results.

    Replication research is a useful thermometer to understand if scientific processes are working as intended. Active discussion of the replicability crisis, in both scientific and political spaces, suggests to many researchers that there is room for growth. While no field would expect a replication rate of 100%, new processes among scientists aim to improve the rates from those in the past.

    Amanda Kay Montoya is an Associate Professor at the University of California, Los Angeles. She serves on the Board of Directors for the Center for Open Science. She receives funding from the U.S. National Science Foundation.

    ref. Research replication can determine how well science is working – but how do scientists replicate studies? – https://theconversation.com/research-replication-can-determine-how-well-science-is-working-but-how-do-scientists-replicate-studies-260771

    MIL OSI Analysis

  • MIL-OSI Analysis: Babies born with DNA from three people hailed as breakthrough – but questions remain

    Source: The Conversation – UK – By Cathy Herbrand, Professor of Medical and Family Sociology, De Montfort University

    Ten years after the UK became the first country to legalise mitochondrial donation, the first results from the use of these high-profile reproductive technologies – designed to prevent passing on genetic disorders – have finally been published.

    So far, eight children have been born, all reportedly healthy, thanks to the long-term efforts of scientists and doctors in Newcastle, England. Should this be a cause for excitement, disappointment or concern? Perhaps, I would suggest, it could be a bit of all three.

    The New England Journal of Medicine has published two papers on a groundbreaking fertility treatment that could prevent devastating inherited diseases. The technique, called mitochondrial donation, was used to help 22 women who carry faulty genes that would otherwise pass serious genetic disorders – such as Leigh syndrome – to their children. These disorders affect the body’s ability to produce energy at the cellular level and can cause severe disability or death in babies.

    The technique, developed by the Newcastle team, involves creating an embryo using DNA from three people: nuclear DNA from the intended mother and father, and healthy mitochondrial DNA from a donor egg. During the parliamentary debates leading up to The Human Fertilisation and Embryology (Mitochondrial Donation) Regulations in 2015, there were concerns about the effectiveness of the procedure and its potential side effects.

    The announcement that this technology has led to the birth of eight apparently healthy children therefore marks a major scientific achievement for the UK, which has been widely praised by numerous scientists and patient support groups. However, these results should not detract from some important questions they also raise.

    First, why has it taken so long for any updates on the application of this technology, including its outcomes and its limitations, to be made public? Especially given the significant public financial investment made into its development.

    In a country positioning itself as a leader in the governance and practice of reproductive and genomic medicine, transparency should be a central principle. Transparency not only supports the progress of other research teams but also keeps the public and patients well informed.

    Second, what is the significance of these results? While eight babies were born using this technology, this figure contrasts starkly with the predicted number of 150 babies per year likely to be born using the technique.

    The Human Fertilisation and Embryology Authority, the UK regulator in this area, has approved 32 applications since 2017 when the Newcastle team obtained its licence, but the technique was used with only 22 of them, resulting in eight babies. Does this constitute sufficiently robust data to prove the effectiveness of the technology and was it worth the considerable efforts and investments over almost two decades of campaigning, debate and research?

    As I wrote when this law was passed, officials should have been more realistic about how many people this treatment could actually help. By overestimating the number of patients who might benefit, they risked giving false hope to families who wouldn’t be eligible for the procedure.

    The safety question

    Third, is it safe enough? In two of the eight cases, the babies showed higher-than-expected levels of maternal mitochondrial DNA, meaning the risk of developing a mitochondrial disorder cannot be ruled out. This potential for a “reversal” – where the faulty mitochondria reassert themselves – was also highlighted in a recent study conducted in Greece involving patients who used the technique to treat infertility problems.

    As a result, the technology is no longer framed by the Newcastle team as a way to prevent the transmission of mitochondrial disorders, but rather to reduce the risk. But is the risk reduction enough to justify offering the technique to more patients? And what will the risk of reassertion mean for the children born through it and their parents, who may live with the continuing uncertainty that the condition could emerge later in life?

    As some experts have suggested, it may be worth testing this technology on women who have fertility problems but don’t carry mitochondrial diseases. This would help doctors better understand the risks of the faulty mitochondria coming back, before using the technique only on women who could pass these serious genetic conditions to their children.

    This leads to a fourth question. What has been the patient experience with this technology? It would be valuable to know how many people applied for mitochondrial donation, why some were not approved, and, among those 32 approved cases, why only 22 proceeded with treatment.

    It also raises important questions about how patients feel if they were unable to access the technology, or if it ultimately proved unsuccessful for them, particularly after investing significant time, effort and hope in the process. How do they come to terms with not having the healthy biological child they had been offered?

    This is not to say we shouldn’t celebrate these births and what they represent for the UK in terms of scientific achievement. The birth of eight healthy children represents a genuine scientific breakthrough that families affected by mitochondrial diseases have waited decades to see. However, some important questions remain unanswered, and more evidence is needed – and should be communicated in a timely manner – before conclusions can be drawn about the long-term use of the technology.

    Breakthroughs come with responsibilities. If the UK wants to maintain its position as a leader in reproductive medicine, it must be more transparent about both the successes and limitations of this technology. The families still waiting to have the procedure – and those who may never receive it – deserve nothing less than complete honesty about what this treatment can and cannot deliver.

    Cathy Herbrand receives funding from the Economic and Social Research Council.

    ref. Babies born with DNA from three people hailed as breakthrough – but questions remain – https://theconversation.com/babies-born-with-dna-from-three-people-hailed-as-breakthrough-but-questions-remain-261385

    MIL OSI Analysis

  • MIL-OSI Submissions: East African countries and open borders: great strides, but still a long way to go

    Source: The Conversation – Africa – By Alan Hirsch, Senior Research Fellow New South Institute, Emeritus Professor at The Nelson Mandela School of Public Governance, University of Cape Town

    It’s not uncommon to find a Ugandan taxi driver in Rwanda’s capital, Kigali, just as one regularly meets Zimbabwean Uber drivers in South Africa. But there is a big difference. A Ugandan working in Rwanda most likely has a secure legal right to be there, whereas Zimbabweans working in South Africa are often uncertain of their current or future legality.

    East Africa has made greater strides towards the free flow of people crossing borders and seeking work than most of Africa. Only the Economic Community of West African States (Ecowas) is in the same league.

    While the African Union’s Free Movement of Persons protocol has faltered at a continental level, some of the regional economic communities have made progress. The Southern African Development Community (SADC) allows visa-free travel across almost all its borders.

    Ecowas and the East African Community (EAC) have driven ambitiously towards regional common markets including the freeing up of job-seeking, residential settlement and business development across the borders of member states.

    The New South Institute, a think-tank focused on governance reforms in the global south, is nearing the end of a research programme on migration governance reform in Africa. Our new report is on East Africa.

    We have found that, unlike much of the global north, the African continent is moving towards more open borders for people. In some of the global south, the promise of economic growth outweighs political fears. Yet progress is slow and uncoordinated: mostly, migration reform happens within regions and between neighbours.

    The progress in the East African Community is particularly notable compared with other African regional communities. We identify a number of reasons for this, including strong leadership and co-operation between state and non-state actors.

    The commitment to free movement

    The East African Community adopted its Common Market Protocol in 2010. The bloc is made up of Tanzania, Uganda, Kenya, Rwanda, Burundi, South Sudan, the DRC and Somalia.

    The regional body’s common market pact includes the movement of goods, services, capital and people. It gives people the right – on paper at least – to find employment across borders, the right to reside and the right to establish a business. There is also a commitment to the harmonisation and mutual recognition of academic and professional qualifications and labour policies to ease mobility.

    Even before the common market protocol, the regional bloc began to establish one-stop border posts on many of its internal borders to facilitate the flow of goods and people. Though they don’t all operate the same way or equally well, they have been successful at easing movement.

    Uneven outcomes

    The common market’s impact on the movement of people has been uneven within the region. Most integrated are Uganda, Kenya and Rwanda, which allow the cross-border movement of citizens with standardised identity documents – they do not need passports.

    It is also relatively easy to get jobs across these borders.

    Tanzania and Burundi are close to the inner circle but still require passports, though no visas. The three states which joined more recently, South Sudan, the DRC and Somalia, are all fragile states with governance systems that do not always meet the standards needed for acceptance into all the privileges of the regional bloc.

    In practice there is differential treatment. Generally, it is more difficult for citizens of the three latecomers to get regular access and jobs in their regional partners.

    Another limitation when it comes to the mobility of people is that little progress has been made in the formal harmonisation of education, health and social welfare systems between member states. This inhibits job seeking across borders.

    In addition, national labour laws, which tend to require permits for foreigners, still apply to varying degrees in the region. Some countries are more permissive. For example, Kenya, Uganda and Rwanda have a reciprocal no-fee work permit agreement.

    Another shortcoming is that court enforcement of the freedom of movement has produced disappointing outcomes, even though the regional bloc has an active East African Court of Justice whose legal mandate includes enforcing the bloc’s treaty and its protocols.

    In some cases the court has found that national actions inhibiting the movement of persons were trumped by the regional protocol. It has instructed the errant governments to comply. But its ability to enforce the decisions is minimal.

    Reasons for success

    Leadership has been important. The fact that the strongest economy in the region, Kenya, has been part of the leading echelon is significant.

    Rwanda and Uganda have led by example too. Rwanda was one of the first countries on the continent to offer visa-free entry to all other African countries. For its part, Uganda is widely admired for its refugee inclusion programmes.

    Another factor outlined in our report has been the opportunity for collaboration fostered by relationships between formal institutions, such as governments, and non-state actors such as the International Organisation for Migration. Interactions between these various players have created opportunities for officials and policymakers from states of the region to meet, discuss issues of concern, and develop relationships of trust and understanding.

    Another non-state, donor-funded actor, TradeMark Africa, which was established in 2010 to support the implementation of the common market in east Africa, provided considerable assistance. For example, it backed the rollout of the regional One-Stop Border Post programme.

    Way forward

    Based on our report, we identified changes that could make a positive difference.

    Firstly, the development of reliable, harmonised systems in the region to collect and manage data on population mobility and employment. This would build confidence that policy was being made on the basis of reliable information.

    Secondly, reducing friction in cross-border monetary transactions, including migrants’ remittances. This would make it easier for migrants to send some of their income to their countries of origin.

    Thirdly, improvements to population registers, identity documents, passports and cross-border migration management systems. Improvements would build mutual trust in the integrity of systems and pave the way for further commitments to lowering migration barriers.

    Fourth, cooperation on cross-border access to social services such as health and education. This is one of the most important intermediate steps towards freeing up mobility for the citizens of the region.

    Fifth, reconsidering some of the amendments made to weaken the East African Court of Justice in 2007. This would strengthen the de jure powers of the court, adding considerably to the entrenchment of cross-border rights in the region.

    Ultimately, the key constraint in the region is political and security instability, which holds back social and economic development. Nevertheless, incremental progress on mobility is possible despite issues in the fragile states, even though it may result in asymmetric progress within the East African Community.

    Alan Hirsch’s work on migration governance is part of his responsibilities while employed as a Senior Research Fellow at the New South Institute.

    ref. East African countries and open borders: great strides, but still a long way to go – https://theconversation.com/east-african-countries-and-open-borders-great-strides-but-still-a-long-way-to-go-261021

    MIL OSI

  • What will batteries of the future be made of? Four scientists discuss the options – podcast

    Source: ForeignAffairs4

    Source: The Conversation – UK – By Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

    The majority of the world’s rechargeable batteries are now made using lithium-ion. Most rely on a combination of scarce metals such as cobalt or nickel for their electrodes. But around the world, teams of researchers are looking for alternative – and more sustainable – materials to build the batteries of the future.

    In this episode of The Conversation Weekly podcast, we speak to four scientists who are testing a variety of potential battery materials about the promise they may offer.

    When lithium-ion batteries emerged in the 1990s, they were a huge breakthrough, says Laurence Hardwick, a professor of electrochemistry at the University of Liverpool in the UK. He explains that lithium-ion batteries “became commercialised at the same time as the mobile electronics industry really took off”. But their subsequent use in electric cars now presents “a challenge of scale”, given the scarce minerals within their components.

    Hardwick is director of the Stephenson Institute for Renewable Energy, named after the 19th-century engineer George Stephenson – builder of the world’s first inter-city rail link between Liverpool and Manchester, which passed close to the University of Liverpool’s campus.

    Hardwick’s work focuses on what other materials could be used, either in conjunction with lithium or on their own, to diversify battery manufacturing away from scarce metals. Part of this includes research on solid-state batteries, which use ceramic plates rather than solvents to conduct the ions that provide the charge. “Solid-state batteries offer a lot of potential energy-gaining benefits and safety benefits,” he says.

    Sodium-ion is also being touted as a potential alternative to lithium-ion batteries. Robert Armstrong, principal research fellow in chemistry at the University of St Andrews in Scotland, is part of a consortium of UK-based researchers working on questions around sodium-ion batteries, including what type of electrodes and electrolytes work best.

    Like potassium-ion, which is also a potential battery candidate, sodium-ion is heavier than lithium-ion, but Armstrong says sodium is fairly evenly abundant: “So you don’t have the supply issues that might affect lithium-ion, and you’re not likely to see the same price volatility.”
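
    As a rough illustration of why that extra weight matters (our back-of-the-envelope example, not a figure from the podcast): lithium, sodium and potassium each give up one electron per ion, so the theoretical charge stored per gram of the bare metal scales inversely with its molar mass.

        # Theoretical specific capacity Q = F / (3.6 * M) in mAh per gram,
        # where F is the Faraday constant and M the molar mass in g/mol.
        # One electron is transferred per Li+, Na+ or K+ ion.
        F = 96485.0  # Faraday constant, C/mol
        for metal, molar_mass in [("lithium", 6.94), ("sodium", 22.99), ("potassium", 39.10)]:
            q = F / (3.6 * molar_mass)
            print(f"{metal}: {q:.0f} mAh/g")  # lithium ~3861, sodium ~1166, potassium ~686

    Real cells differ by much less than this factor, since the metal is only one component, but the gap explains why sodium-ion packs come out heavier for the same stored energy.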

    Chinese manufacturers such as BYD and CATL are pushing ahead with sodium-ion batteries for cars, despite the fact they’re heavier than lithium-ion batteries. There’s also interest in sodium-based technology in countries in the Arabian Gulf that use desalination plants. “They’ve got all this sodium kicking around. Why not make use of it?” says Armstrong.

    Batteries which biodegrade

    The soil-fuelled Terracell battery on display at the Prototypes for Humanity 2024 showcase in Dubai.
    Gemma Ware, CC BY-SA

    Other researchers are looking at how to make batteries out of plant-based materials that are biodegradable. Bill Yen, a PhD candidate in electrical engineering at Stanford University, is part of a team who are developing Terracell, a type of battery that generates power using microbes in the soil.

    Their inspiration was how to power environmental sensors in damp environments without leaving lots of electronic waste behind at the end of the battery’s life. Terracell won the energy category of the Prototypes for Humanity 2024 event in Dubai, a showcase for sustainable solutions to the world’s problems.

    Also in Dubai was Ulugbek Asimov, a professor of mechanical and construction engineering at Northumbria University in the UK, who is developing BioPower Cells, a type of rechargeable battery made from waste products, such as coffee, that contains no scarce metals. “And at the end of its lifespan, we drop it into boiling water and it will be turned into liquid ionic fertilizer,” Asimov said.

    Listen to The Conversation Weekly to hear the conversations with these four scientists about their work and the batteries of the future.


    Applications are now open for early career researchers to submit their projects for the Prototypes for Humanity 2025 awards and showcase in Dubai.

    This episode of The Conversation Weekly was written and produced by Gemma Ware with assistance from Mend Mariwany and Katie Flood. Mixing and sound design by Eloise Stevens and theme music by Neeta Sarl.

    Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here. A transcript of this episode is available on Apple Podcasts or Spotify.

    The Conversation

    Bill Yen has received funding for his work on Terracell from the National Science Foundation and the Agricultural and Food Research Initiative, and support from the Alfred P. Sloan Foundation, VMware Research and 3M. Laurence Hardwick has received funding from the Faraday Institution and is a member of the Royal Society of Chemistry. Ulugbek Asimov has received funding from the Northern Accelerator Proof of Concept to develop certain stages of the BioPower Cells project, which will be a spinout company from Northumbria University in the future. Robert Armstrong has received funding from the Faraday Institution, the EPSRC and the Leverhulme Trust.

    ref. What will batteries of the future be made of? Four scientists discuss the options – podcast – https://theconversation.com/what-will-batteries-of-the-future-be-made-of-four-scientists-discuss-the-options-podcast-261294

  • New discovery at Cern could hint at why our universe is made up of matter and not antimatter

    Source: ForeignAffairs4

    Source: The Conversation – UK – By William Barter, UKRI Future Leaders Fellow, University of Edinburgh

    Why didn’t the universe annihilate itself moments after the big bang? A new finding at Cern on the French-Swiss border brings us closer to answering this fundamental question about why matter dominates over its opposite – antimatter.

    Much of what we see in everyday life is made up of matter. But antimatter exists in much smaller quantities. Matter and antimatter are almost direct opposites. Matter particles have an antimatter counterpart that has the same mass, but the opposite electric charge. For example, the matter proton particle is partnered by the antimatter antiproton, while the matter electron is partnered by the antimatter positron.

    However, the symmetry in behaviour between matter and antimatter is not perfect. In a paper published this week in Nature, the team working on an experiment at Cern, called LHCb, has reported that it has discovered differences in the rate at which matter particles called baryons decay relative to the rate of their antimatter counterparts. In particle physics, decay refers to the process where unstable subatomic particles transform into two or more lighter, more stable particles.

    According to cosmological models, equal amounts of matter and antimatter were made in the big bang. If matter and antimatter particles come in contact, they annihilate one another, leaving behind pure energy. With this in mind, it’s a wonder that the universe doesn’t consist only of leftover energy from this annihilation process.

    However, astronomical observations show that there is now a negligible amount of antimatter in the universe compared to the amount of matter. We therefore know that matter and antimatter must behave differently, such that the antimatter has disappeared while the matter has not.

    Understanding what causes this difference in behaviour between matter and antimatter is a key unanswered question. While there are differences between matter and antimatter in our best theory of fundamental quantum physics, the standard model, these differences are far too small to explain where all the antimatter has gone.

    So we know there must be additional fundamental particles that we haven’t found yet, or effects beyond those described in the standard model. These would give rise to large enough differences in the behaviour of matter and antimatter for our universe to exist in its current form.

    Revealing new particles

    Highly precise measurements of the differences between matter and antimatter are a key topic of research because they have the potential to be influenced by and reveal these new fundamental particles, helping us discover the physics that led to the universe we live in today.

    Differences between matter and antimatter have previously been observed in the behaviour of another type of particle, mesons, which are made of a quark and an antiquark. There are also hints of differences in how the matter and antimatter versions of a further type of particle, the neutrino, behave as they travel.

    Equivalent amounts of matter and antimatter were generated by the Big Bang.
    Triff / Shutterstock

    The new measurement from LHCb has found differences between baryons and antibaryons, which are made of three quarks and three antiquarks respectively. Significantly, baryons make up most of the known matter in our universe, and this is the first time that we have observed differences between matter and antimatter in this group of particles.

    The LHCb experiment at the Large Hadron Collider is designed to make highly precise measurements of differences in the behaviour of matter and antimatter. The experiment is operated by an international collaboration of scientists, made up of over 1,800 people based in 24 countries. In order to achieve the new result, the LHCb team studied over 80,000 baryons (“lambda-b” baryons, which are made up of a beauty quark, an up quark and a down quark) and their antimatter counterparts.

    Crucially, we found that these baryons decay to specific subatomic particles (a proton, a kaon and two pions) slightly more frequently – 5% more often – than the rate at which the same process happens with antiparticles. While small, this difference is statistically significant enough to be the first observation of differences in behaviour between baryon and antibaryon decays.
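
    To see how the quoted figures hang together, here is a statistics-only sketch (our illustration in Python, using just the numbers above; the real LHCb analysis handles backgrounds, detector asymmetries and systematic uncertainties, so its quoted significance is lower than this naive estimate):

        import math

        n_total = 80_000   # lambda-b baryons plus antibaryons studied (from the article)
        rel_excess = 0.05  # baryons decay ~5% more often than antibaryons (from the article)

        # Split the sample according to the quoted rate difference.
        n_antibaryon = n_total / (2 + rel_excess)
        n_baryon = n_antibaryon * (1 + rel_excess)

        # CP asymmetry: (N - Nbar) / (N + Nbar); a 5% rate excess gives ~2.4%.
        a_cp = (n_baryon - n_antibaryon) / (n_baryon + n_antibaryon)

        # Purely statistical (binomial) uncertainty on the asymmetry.
        sigma = math.sqrt((1 - a_cp**2) / n_total)

        print(f"A_CP ~ {a_cp:.3f} +/- {sigma:.3f} ({a_cp/sigma:.1f} sigma, statistics only)")

    The point of the exercise: with a sample this large, even a few-percent asymmetry stands well clear of the statistical noise, which is why the result counts as an observation rather than a hint.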

    To date, all measurements of matter-antimatter differences have been consistent with the small level present in the standard model. While the new measurement from LHCb is also in line with this theory, it is a major step forward. We have now seen differences in the behaviour of matter and antimatter in the group of particles that dominate the known matter of the universe. It’s a potential step in the direction of understanding why that situation came to be after the big bang.

    With the current and forthcoming data runs of LHCb we will be able to study these differences forensically, and, we hope, tease out any sign of new fundamental particles that might be present.

    The Conversation

    William Barter works for the University of Edinburgh. He receives funding from UKRI. He is a member of the LHCb collaboration at Cern.

    ref. New discovery at Cern could hint at why our universe is made up of matter and not antimatter – https://theconversation.com/new-discovery-at-cern-could-hint-at-why-our-universe-is-made-up-of-matter-and-not-antimatter-261274

  • MIL-OSI Analysis: From coal to crops: Dayak women lead a just transition through backyard farming

    Source: The Conversation – Indonesia – By Aidy Halimanjaya, Associate lecturer, Universitas Katolik Parahyangan

    The global shift toward renewable energy is no longer a choice but a necessity as the climate crisis intensifies, with 2024 confirmed as the warmest year on record.

    Yet in Indonesia, coal remains an economic lifeline for several regions. In East Kutai, East Kalimantan, coal mining accounts for nearly 75% of the district’s gross regional domestic product (GRDP).

    The end of the coal mining era will come at a cost to local residents, many of whom risk losing their current jobs — especially after their traditional forest-based livelihoods have already been eroded by environmental degradation tied to fossil fuel extraction.

    Aulia, 31, a Dayak woman from East Kutai, admitted:

    We’re heavily dependent on mining—it’s the only thing that gives us a substantial income.

    Yet, amid this dilemma, indigenous Dayak women are leading a quiet revolution.

    By growing food crops in their backyards, these women not only generate income but also demonstrate that sustainable agriculture can align with local traditions. Their initiative is an inspiration, especially for communities near mining sites seeking alternative sources of income.

    Mining’s hidden toll on women and indigenous communities

    While coal fuels East Kalimantan’s economy, its benefits are unevenly distributed. In 2024, Kutai Kartanegara and East Kutai regencies were ranked first and third among the province’s poorest regions.

    Instead of prosperity, many residents face environmental degradation and the loss of traditional, land-based livelihoods. This is especially true for women, who are often marginalised in decision-making and excluded from the mining sector.

    Since the forest was converted into a mining pit, the indigenous Dayak Basap community, which once relied on the forest for its livelihood, has lost its traditional living space and been forced to adapt to survive.

    Many men have turned to mining, while women have sought other ways to support their families: some teach, others run small businesses, and many now grow chillies, spinach, and watercress in their backyards.

    From backyards to resistance: A community’s fight for survival

    With the changing economic landscape, Dayak Basap women are turning to their yards as a source of alternative income. There they grow food crops – such as chillies – that yield quick harvests, are in high demand and can even influence local inflation. Spinach and watercress are also popular choices.

    This shift is driven by a 2024 pilot project from Just Transition Indonesia and Parahyangan University, supported by Energi Muda, a local NGO focused on energy transition issues.

    On a 700-square-metre plot, local residents have learned to blend traditional farming with modern permaculture techniques, including composting and crop rotation. Permaculture is a holistic approach to agriculture and land management that mimics patterns found in surrounding natural ecosystems. Local youth are also engaged as community mobilisers to support the post-coal transition.

    The results are promising. With agricultural science and technological support from the startup HARA, Dayak Basap women have overcome challenges such as acidic soil and water pollution caused by mining. Through seed cultivation, their crop yields have even outperformed those of conventional farming methods previously tested.

    They’ve also learned to sell their harvests directly to consumers — such as restaurants and cracker producers — cutting out middlemen and increasing their bargaining power. This combination of traditional knowledge and modern innovation is not only enhancing community capacity but also delivering tangible economic benefits.

    When innovation meets tradition: Overcoming barriers

    However, the journey is far from easy. Formerly mined land takes a long time to recover. Acidic soil and water contaminated with heavy metals pose serious challenges, while limited access to tools and fertilisers remains a significant barrier. In some cases, communities must purchase pre-grown seedlings to speed up the planting process.

    This chilli planting program has been very good. It’s just that the condition of the land was inadequate and hard to improve. If there’s a chance, maybe we can try farming that lasts more than just one season. — Indigenous Dayak women

    Furthermore, the transition from shifting cultivation to a long-term land management system requires ongoing training. This kind of adaptation certainly cannot be achieved overnight and requires intensive mentoring.

    A just transition must be grassroots-led

    Initiatives like these offer valuable lessons.

    First, the energy transition must involve local communities—especially women—from the outset.

    Second, collective, community-based approaches have proven more sustainable than top-down programmes, which often fail to address real needs on the ground.

    Third, policy support must be directed toward grassroots initiatives like this. The focus should not only be on meeting transition targets, but also on ensuring social and ecological justice.

    In the global context, Indonesia has expressed its commitment through the Paris Agreement and the Just Energy Transition Partnership (JETP). However, this commitment must be grounded in the lived experiences of communities, particularly indigenous women and those directly impacted by extractive industries.

    A just energy transition requires gradual steps, targeted programme support, inclusive partnerships, and genuine commitment from all stakeholders.

    The story of the Dayak Basap women is more than one of resilience—it is a roadmap for a just energy transition. Their success proves that economic diversification is possible, even in coal-dependent regions. But that success hinges on the quality of support: whether it truly meets community needs and is led by strong local leadership.

    Aidy Halimanjaya is affiliated with Yayasan Transisi Berkeadilan Indonesia (Just Transition Indonesia) as its founder and director, and receives funding from Bank Indonesia through Universitas Parahyangan.

    ref. From coal to crops: Dayak women lead a just transition through backyard farming – https://theconversation.com/from-coal-to-crops-dayak-women-lead-a-just-transition-through-backyard-farming-260827

    MIL OSI Analysis

  • MIL-OSI Analysis: Do women really need more sleep than men? A sleep psychologist explains

    Source: The Conversation – Global Perspectives – By Amelia Scott, Honorary Affiliate and Clinical Psychologist at the Woolcock Institute of Medical Research, and Macquarie University Research Fellow, Macquarie University

    klebercordeiro/Getty

    If you spend any time in the wellness corners of TikTok or Instagram, you’ll see claims women need one to two hours more sleep than men.

    But what does the research actually say? And how does this relate to what’s going on in real life?

    As we’ll see, who gets to sleep, and for how long, is a complex mix of biology, psychology and societal expectations. It also depends on how you measure sleep.

    What does the evidence say?

    Researchers usually measure sleep in two ways:

    • by asking people how much they sleep (known as self-reporting). But people are surprisingly inaccurate at estimating how much sleep they get

    • using objective tools, such as research-grade, wearable sleep trackers or the gold-standard polysomnography, which records brain waves, breathing and movement while you sleep during a sleep study in a lab or clinic.

    Looking at the objective data, well-conducted studies usually show women sleep about 20 minutes more than men.

    One global study of nearly 70,000 people who wore wearable sleep trackers found a consistent, small difference between men and women across age groups. For example, the sleep difference between men and women aged 40–44 was about 23–29 minutes.

    Another large study using polysomnography found women slept about 19 minutes longer than men. In this study, women also spent more time in deep sleep: about 23% of the night compared to about 14% for men. The study also found only men’s quality of sleep declined with age.

    The key caveat to these findings is that our individual sleep needs vary considerably. Women may sleep slightly more on average, just as they are slightly shorter on average. But there is no one-size-fits-all sleep duration, just as there is no universal height.

    Suggesting every woman needs 20 extra minutes (let alone two hours) misses the point. It’s the same as insisting all women should be shorter than all men.

    Even though women tend to sleep a little longer and deeper, they consistently report poorer sleep quality. They’re also about 40% more likely to be diagnosed with insomnia.

    This mismatch between lab findings and the real world is a well-known puzzle in sleep research, and there are many reasons for it.

    For instance, many research studies don’t consider mental health problems, medications, alcohol use and hormonal fluctuations. This filters out the very factors that shape sleep in the real world.

    This mismatch between the lab and the bedroom also reminds us sleep doesn’t happen in a vacuum. Women’s sleep is shaped by a complex mix of biological, psychological and social factors, and this complexity is hard to capture in individual studies.

    Let’s start with biology

    Sleep problems begin to diverge between the sexes around puberty. They spike again during pregnancy, after birth and during perimenopause.

    Fluctuating levels of ovarian hormones, particularly oestrogen and progesterone, seem to explain some of these sex differences in sleep.

    For example, many girls and women report poorer sleep during the premenstrual phase just before their periods, when oestrogen and progesterone begin to fall.

    Perhaps the most well-documented hormonal influence on our sleep is the decline in oestrogen during perimenopause. This is linked to increased sleep disturbances, particularly waking at 3am and struggling to get back to sleep.

    Some health conditions also play a part in women’s sleep health. Thyroid disorders and iron deficiency, for instance, are more common in women and are closely linked to fatigue and disrupted sleep.

    How about psychology?

    Women are at much higher risk of depression, anxiety and trauma-related disorders. These very often accompany sleep problems and fatigue. Cognitive patterns, such as worry and rumination, are also more common in women and known to affect sleep.

    Women are also prescribed antidepressants more often than men, and these medications tend to affect sleep.

    Society also plays a role

    Caregiving and emotional labour still fall disproportionately on women. Government data released this year suggests Australian women perform an average of nine more hours of unpaid care and work each week than men.

    While many women manage to put enough time aside for sleep, their opportunities for daytime rest are often scarce. This puts a lot of pressure on sleep to deliver all the restoration women need.

    In my work with patients, we often untangle the threads woven into their experience of fatigue. While poor sleep is the obvious culprit, fatigue can also signal something deeper, such as underlying health issues, emotional strain, or too-high expectations of themselves. Sleep is certainly part of the picture, but it’s rarely the whole story.

    For instance, rates of iron deficiency (which we know is more common in women and linked to sleep problems) are also higher in the reproductive years. This is just as many women are raising children and grappling with the “juggle” and the “mental load”.

    Women in perimenopause are often navigating full-time work, teenagers, ageing parents and 3am hot flashes. These women may have adequate or even high-quality sleep (according to objective measures), but that doesn’t mean they wake feeling restored.

    Most existing research also ignores gender-diverse populations. This limits our understanding of how sleep is shaped not just by biology, but by things such as identity and social context.

    So where does this leave us?

    While women sleep longer and better in the lab, they face more barriers to feeling rested in everyday life.

    So, do women need more sleep than men? On average, yes, a little. But more importantly, women need more support and opportunity to recharge and recover across the day, and at night.

    Amelia Scott is a member of the psychology education subcommittee of the Australasian Sleep Association. She receives funding from Macquarie University.

    ref. Do women really need more sleep than men? A sleep psychologist explains – https://theconversation.com/do-women-really-need-more-sleep-than-men-a-sleep-psychologist-explains-259985

    MIL OSI Analysis

  • MIL-OSI Submissions: Catholic clergy are speaking out on immigration − more than any other political issue except abortion

    Source: The Conversation – USA (3) – By Evan Stewart, Assistant Professor of Sociology, UMass Boston

    Catholic bishops invited by Mark Seitz, center, the bishop of El Paso, Texas, lead a march in solidarity with migrants on March 24, 2025, in downtown El Paso. AP Photo/Andres Leighton

    Catholic priests across the U.S. discuss immigration with their congregations more than leaders in many other faith traditions, according to our new research published in the journal Sociological Focus.

    Catholic priests also said they discussed immigration more than nearly all other political issues, including hunger in their communities, capital punishment, health care and the environment. Abortion was the only one priests discussed slightly more often.

    Our study, which uses data from the 2022 National Survey of Religious Leaders, found that 71% of Catholic priests surveyed said they spoke about any political issue with their congregations. Among them, just over half talked about immigration.

    In white conservative Protestant congregations, Black Protestant congregations and non-Christian congregations, only about a quarter of leaders who discussed political issues said they talked about immigration. Leaders of white liberal Protestant congregations, however, talked about the topic almost as much as Catholic leaders did.

    Why it matters

    The United States has a long history of religious leaders addressing political matters, on both the left and the right – and today is no different.

    With immigration raids on the rise across the country and an unprecedented level of funding approved for deportations, Catholic bishops in the U.S. are speaking out. Many of them have called for compassion and care for migrants and the need to uphold human dignity and due process, regardless of someone’s immigration status – in line with Catholic social teaching.

    As sociologists who study politics and religion, we wanted to know what is happening on the ground in congregations. Given the church’s teachings about caring for the vulnerable, we expected that Catholic clergy might be particularly likely to speak out.

    However, the percentage of people affiliated with a religious congregation is decreasing, and those who do attend are increasingly politically conservative. Rank-and-file Catholics are deeply divided in their support for immigrants, according to a 2024 national survey by the Center for Applied Research in the Apostolate.

    In this context, we were curious about whether clergy would discuss a political issue such as immigration with their congregations or say they avoid it altogether.

    What still isn’t known

    The survey we used is from 2022, before some of today’s immigration enforcement policies took effect. That said, these findings demonstrate that immigration was on the radar for Catholic leaders before the recent changes under the current administration.

    Because we focused on survey data, we got a good picture of trends among Catholic leaders nationwide. However, we could look only at whether religious leaders reported discussing immigration; we could not know exactly what they said, or how. There is much more to learn about what kinds of political messages come from the pulpit today and what messages tend to stick with congregants.

    We did find that Catholic leaders of congregations where the majority of worshipers are Hispanic were much more likely to talk about immigration, compared with leaders of non-Catholic Hispanic congregations and Catholic leaders of mostly white congregations. Because Hispanic communities in the U.S. are facing the brunt of the immigration crackdown, this finding shows that Catholic leaders have been addressing the needs of their communities.

    What’s next

    Catholic parishioners may be exposed to different opinions about immigration from religious and political leaders. Diane, one of the authors, is furthering this research by conducting interviews with Catholics in Greater Boston. By asking church members to talk through their attitudes toward immigrants, we can learn more about how people make sense of complicated ethical questions.

    The Research Brief is a short take on interesting academic work.

    Diane Beckman received funding from Duke University to conduct research using data from the National Survey of Religious Leaders.

    Evan Stewart does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Catholic clergy are speaking out on immigration − more than any other political issue except abortion – https://theconversation.com/catholic-clergy-are-speaking-out-on-immigration-more-than-any-other-political-issue-except-abortion-260485

    MIL OSI

  • The government wants local authorities to embrace AI – here’s one way it could work in practice

    Source: ForeignAffairs4

    Source: The Conversation – UK – By Alex Lord, Professor, Lever Chair of Urban Planning, University of Liverpool

    Francesco Scatena/Shutterstock

    Few issues ignite communities more fiercely than what to do with land. The prospect of releasing small portions of green belt land for housing developments, a windfarm proposal or plans for a new road can transform mild-mannered citizens into passionate advocates overnight.

    This visceral connection between people and place perfectly illustrates the famous observation that “all politics is local”. In England, the principle that every citizen should be given the opportunity to “have their say” on planning matters is enshrined in law. Before any planning document is adopted, local authorities must give the public the chance to provide feedback.

    The logic for this is based on a common-sense morality: before binding decisions are made about how an area might change, the local people who have to live with those decisions should be given the opportunity to endorse or reject that plan.

    In practice this is a hugely cumbersome process. Local authorities have to make sense of thousands of comments. This prompted my colleagues and me at the University of Liverpool to begin thinking about how AI could be used to make this process more efficient.

    Once a local authority publishes the relevant local planning document, every citizen, company, public, private or third sector organisation has the right to submit a written response. These may address the entire document or focus on a specific issue.

    In all cases, the local authority is obliged to collate, comprehend and concisely summarise all public submissions. They will then decide whether the document requires amendments or if further evidence is needed to justify the proposals.

    This creates an overwhelming burden for planning departments up and down the country. In high-development areas, submissions often number in the tens of thousands. And individual submissions range from a few sentences to over 100 pages.

    Planners must read, absorb and synthesise all this information into a final report which will be used to make a decision. This report must fairly represent the aggregate views across all submissions.

    Beyond the sheer volume of responses, human cognitive limitations and biases further complicate the process. Some submissions may be given greater emphasis than others. Recently read submissions are likely to have a greater influence on the reader than those reviewed earlier.

    A digital solution

    These challenges prompted us to explore alternatives. We partnered with Greater Cambridge Shared Planning – the planning authority for Cambridge City and South Cambridgeshire District Councils – to develop an AI-powered solution. Our tool, Plan AI, would read and summarise public submissions to the planning process.

    In 2025, my colleagues and I conducted a real-world experiment. Three live public consultation exercises were processed in parallel – once by planners and once by Plan AI.

    It took a planning officer just over 60 hours in total to download and process 320 submissions. Eighteen of those hours were spent summarising the submissions – a task that took Plan AI only 16 minutes. In that time, the AI tool was also able to create comprehensive reports identifying key themes, referenced sources and a geographic analysis of the submissions.
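
    The article does not describe Plan AI’s internals, but the workflow it reports – summarise every submission, then aggregate the summaries into themes and a geographic breakdown – maps onto a simple two-stage pipeline. Here is a minimal sketch in Python, with llm_summarise standing in as a placeholder for whichever language model the team actually used:

        # Illustrative two-stage consultation-summary pipeline; not the actual
        # Plan AI code, whose implementation has not been published.
        from collections import Counter

        def llm_summarise(prompt: str) -> str:
            """Placeholder for a call to a large language model."""
            raise NotImplementedError("plug a model API in here")

        def summarise_submissions(submissions: list[str]) -> list[str]:
            # Stage 1: one concise, auditable summary per submission.
            return [
                llm_summarise(f"Summarise this consultation response in 3 sentences:\n{text}")
                for text in submissions
            ]

        def overview_report(summaries: list[str], postcode_areas: list[str]) -> str:
            # Stage 2: aggregate the per-submission summaries into key themes,
            # plus a simple tally of where the responses came from.
            themes = llm_summarise(
                "List the key themes, with counts, across these summaries:\n" + "\n".join(summaries)
            )
            geography = Counter(postcode_areas)
            return f"{themes}\n\nSubmissions by postcode area: {dict(geography)}"

    Stage 1 is trivially parallelisable, which is what makes a 16-minute turnaround for 320 submissions plausible, and the per-submission summaries leave an audit trail back to the original text.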

    A subsequent qualitative assessment found there to be no discernible difference in the quality of the summaries produced by the human planning officer and those by Plan AI. In fact, the general overview document produced by Plan AI is a significant addition to what would normally be produced. It included a geographic analysis of the origins of submissions – crucial information for planners to understand which communities and demographic groups were participating in the consultation.

    Controversial planning proposals can attract tens of thousands of public comments.
    pjhpix/Shutterstock

    The future of planning

    The UK government has set out a vision for local authorities to embrace AI for reducing administrative burden and improving the efficiency of government. For example, it recently rolled out an AI tool, developed with Google DeepMind, to digitise planning records.

    The implications of experiments like these are far reaching. Planners can focus on their core expertise – assessing applications and supporting government priorities for housing, new towns and infrastructure renewal – rather than spending countless hours processing public comments.

    AI can process vast amounts of text more consistently and comprehensively than humans. It can also identify connections between submissions that might otherwise be missed.

    With the administrative burden drastically reduced, local authorities could potentially consult citizens more frequently across a wider range of planning issues, making planning even more democratic. Planners freed from paperwork could also dedicate more time to meaningful public engagement.

    Of course, one danger with AI is that it could be used on the other side of the consultation, to generate a large volume of submissions in an attempt to over-amplify a particular point of view. However, AI tools could be used to defend against this.

    Plan AI and similar programmes can generate an immediate summary of each submission, which is an ideal opportunity to insert a verification check that the submitter is indeed human. Putting the human back in the loop in this way reduces the potential for AI to be used to skew consultations.
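
    In the pipeline sketched earlier, such a check is a small addition (again our illustration, with llm_summarise the same hypothetical placeholder): hold each summary back until the submitter confirms it, which both verifies a human is behind the submission and lets them check the summary is faithful.

        def confirmed_summaries(submissions, confirm):
            # Only submissions whose auto-generated summary a human submitter has
            # confirmed (e.g. via an emailed link) enter the aggregate report.
            kept = []
            for text in submissions:
                summary = llm_summarise(f"Summarise this consultation response in 3 sentences:\n{text}")
                if confirm(text, summary):
                    kept.append(summary)
            return kept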

    By building the right tools and systems, we can create planning processes that are both more efficient and more responsive to citizen input – a win for democracy and effective governance alike.

    The Conversation

    Plan AI was developed under a paid contract with Greater Cambridge Shared Planning. At the time of publication, it is not sold or marketed to other governments or authorities, but may be in the future. Alex Lord and the other researchers involved received funding from the UK government’s PropTech initiative and Greater Cambridge Shared Planning.

    ref. The government wants local authorities to embrace AI – here’s one way it could work in practice – https://theconversation.com/the-government-wants-local-authorities-to-embrace-ai-heres-one-way-it-could-work-in-practice-258449

  • MIL-OSI Submissions: Why drones and AI can’t quickly find missing flood victims, yet

    Source: The Conversation – USA – By Robin R. Murphy, Professor of Computer Science and Engineering, Texas A&M University

    The landscape in the aftermath of a flood makes it challenging to spot victims.
    AP Photo/Gerald Herbert

    For search and rescue, AI is not more accurate than humans, but it is far faster.

    Recent successes in applying computer vision and machine learning to drone imagery for rapidly determining building and road damage after hurricanes or shifting wildfire lines suggest that artificial intelligence could be valuable in searching for missing persons after a flood.

    Machine learning systems typically take less than one second to scan a high-resolution image from a drone versus one to three minutes for a person. Plus, drones often produce more imagery to view than is humanly possible in the critical first hours of a search when survivors may still be alive.

    Unfortunately, today’s AI systems are not up to the task.

    We are robotics researchers who study the use of drones in disasters. Our experiences searching for victims of flooding and numerous other events show that current implementations of AI fall short.

    However, the technology can play a role in searching for flood victims. The key is AI-human collaboration.

    Drones have become standard equipment for first responders, but floods pose unique challenges.
    Eric Smalley, CC BY-ND

    AI’s potential

    Searching for flood victims is a type of wilderness search and rescue that presents unique challenges. The goal for machine learning scientists is to rank which images have signs of victims and indicate where in those images search-and-rescue personnel should focus. If the responder sees signs of a victim, they pass the GPS location in the image to search teams in the field to check.

    The ranking is done by a classifier, which is an algorithm that learns to identify similar instances of objects – cats, cars, trees – from training data in order to recognize those objects in new images. For example, in a search-and-rescue context, a classifier would spot instances of human activity such as garbage or backpacks to pass to wilderness search-and-rescue teams, or even identify the missing person themselves.
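
    As a rough sketch of this ranking step (not the systems used in the field), an off-the-shelf detector can score each image by its strongest "person" detection – with the important caveat, discussed below, that generic models miss obscured or entangled victims. The snippet assumes torchvision's pretrained Faster R-CNN, in which COCO label 1 is "person".

        # Rank drone images by the strongest "person" detection they contain.
        # Illustrative sketch using a generic pretrained detector.
        from pathlib import Path

        import torch
        from PIL import Image
        from torchvision.models.detection import fasterrcnn_resnet50_fpn
        from torchvision.transforms.functional import to_tensor

        model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

        def victim_sign_score(path: Path) -> float:
            """Confidence of the strongest person detection in one image."""
            img = to_tensor(Image.open(path).convert("RGB"))
            with torch.no_grad():
                det = model([img])[0]
            person = det["scores"][det["labels"] == 1]  # COCO class 1 = person
            return float(person.max()) if len(person) else 0.0

        def rank_images(image_dir: str, top_k: int = 50):
            """Return the most promising images for responders to inspect first."""
            scored = [(p, victim_sign_score(p)) for p in Path(image_dir).glob("*.jpg")]
            return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]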

    A classifier is needed because of the sheer volume of imagery that drones can produce. For example, a single 20-minute flight can produce over 800 high-resolution images. If there are 10 flights – a small number – there would be over 8,000 images. If a responder spends only 10 seconds looking at each image, it would take over 22 hours of effort. Even if the task is divided among a group of “squinters,” humans tend to miss areas of images and show cognitive fatigue.
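
    The arithmetic behind those figures is easy to check:

        # Workload estimate from the paragraph above.
        flights = 10
        images_per_flight = 800      # one 20-minute flight
        seconds_per_image = 10       # a fast human glance

        total_images = flights * images_per_flight              # 8,000 images
        review_hours = total_images * seconds_per_image / 3600
        print(f"{total_images} images -> {review_hours:.1f} hours")  # about 22.2 hours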

    The ideal solution is an AI system that scans the entire image, prioritizes images that have the strongest signs of victims, and highlights the area of the image for a responder to inspect. It could also decide whether the location should be flagged for special attention by search-and-rescue crews.

    Where AI falls short

    While this seems to be a perfect opportunity for computer vision and machine learning, modern systems have a high error rate. If the system is programmed to overestimate the number of candidate locations in hopes of not missing any victims, it will likely produce too many false candidates. That would mean overloading squinters or, worse, the search-and-rescue teams, which would have to navigate through debris and muck to check the candidate locations.

    Developing computer vision and machine learning systems for finding flood victims is difficult for three reasons.

    One is that while existing computer vision systems are certainly capable of identifying people visible in aerial imagery, the visual indicators of a flood victim are often very different from those of a lost hiker or fugitive. Flood victims are often obscured, camouflaged, entangled in debris or submerged in water. These visual challenges increase the possibility that existing classifiers will miss victims.

    Second, machine learning requires training data, but there are no datasets of aerial imagery where humans are tangled in debris, covered in mud and not in normal postures. This lack also increases the possibility of errors in classification.

    Third, many of the drone images captured by searchers are oblique views, rather than looking straight down. This means the GPS location of a candidate area is not the same as the GPS location of the drone. It is possible to compute the GPS location if the drone’s altitude and camera angle are known, but unfortunately those attributes rarely are. The imprecise GPS location means teams have to spend extra time searching.
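
    When those attributes are recorded, the projection is straightforward. The sketch below estimates the ground point at the centre of an oblique view from the drone's position, altitude, camera tilt and heading, using a flat-earth approximation that is adequate over a few hundred metres; the numbers in the example are invented.

        # Estimate the GPS location an oblique camera view centres on,
        # given drone position, altitude, camera tilt and heading.
        import math

        def estimate_target_gps(lat, lon, altitude_m, tilt_deg, heading_deg):
            ground_dist = altitude_m * math.tan(math.radians(tilt_deg))
            d_north = ground_dist * math.cos(math.radians(heading_deg))
            d_east = ground_dist * math.sin(math.radians(heading_deg))
            dlat = d_north / 111_320.0                       # metres per degree of latitude
            dlon = d_east / (111_320.0 * math.cos(math.radians(lat)))
            return lat + dlat, lon + dlon

        # e.g. 60 m up, camera tilted 40 degrees from straight down, facing due east:
        print(estimate_target_gps(29.95, -90.07, 60, 40, 90))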

    How AI can help

    Fortunately, with humans and AI working together, search-and-rescue teams can successfully use existing systems to help narrow down and prioritize imagery for further inspection.

    In the case of flooding, human remains may be tangled among vegetation and debris. Therefore, a system could identify clumps of debris big enough to contain remains. A common search strategy is to identify the GPS locations of where flotsam has gathered, because victims may be part of these same deposits.

    A machine learning algorithm identified piles of debris large enough to contain bodies in an aerial image of a flood aftermath.
    Center for Robot-Assisted Search and Rescue and University of Maryland

    An AI classifier could find debris commonly associated with remains, such as artificial colors and construction debris with straight lines or 90-degree corners. Responders find these signs as they systematically walk the riverbanks and flood plains, but a classifier could help prioritize areas in the first few hours and days, when there may be survivors, and later could confirm that teams didn’t miss any areas of interest as they navigated the difficult landscape on foot.

    Robin R. Murphy receives funding from the National Science Foundation. She is affiliated with the Center for Robot-Assisted Search and Rescue.

    Thomas Manzini is affiliated with the Center for Robot Assisted Search & Rescue (CRASAR), and his work is funded by the National Science Foundation’s AI Institute for Societal Decision Making (AI-SDM).

    ref. Why drones and AI can’t quickly find missing flood victims, yet – https://theconversation.com/why-drones-and-ai-cant-quickly-find-missing-flood-victims-yet-261035

  • From tea towels to TV remotes: eight everyday bacterial hotspots – and how to clean them

    Source: ForeignAffairs4

    Source: The Conversation – UK – By Manal Mohammed, Senior Lecturer, Medical Microbiology, University of Westminster

    Parkin Srihawong/Shutterstock

    From your phone to your sponge, your toothbrush to your trolley handle, invisible armies of bacteria are lurking on the everyday objects you touch the most. Most of these microbes are harmless – some even helpful – but under the right conditions, a few can make you seriously ill.

    But here’s the catch: some of the dirtiest items in your life are the ones you might least expect.

    Here are some of the hidden bacteria magnets in your daily routine, and how simple hygiene tweaks can protect you from infection.


    Shopping trolley handles

    Shopping trolleys are handled by dozens of people each day, yet they’re rarely sanitised. That makes the handles a prime spot for germs, particularly the kind that spread illness.

    One study in the US found that over 70% of shopping carts were contaminated with coliform bacteria, a group that includes strains like E. coli, often linked to faecal contamination. Another study found Klebsiella pneumoniae, Citrobacter freundii and Pseudomonas species on trolleys.

    Protect yourself: Always sanitise trolley handles before use, especially since you’ll probably be handling food, your phone or touching your face.

    Kitchen sponges

    That sponge by your sink? It could be one of the dirtiest items in your home. Sponges are porous, damp and often come into contact with food: ideal conditions for bacteria to thrive.

    After just two weeks, a sponge can harbour millions of bacteria, including coliforms linked to faecal contamination, according to the NSF Household Germ Study and research on faecal coliforms.

    Protect yourself: Disinfect your sponge weekly by microwaving it, soaking it in vinegar, or running it through the dishwasher. Replace it if it smells – even after cleaning. Use different sponges for different tasks (for example, one for dishes, another for cleaning up after raw meat).

    Chopping boards

    Chopping boards can trap bacteria in grooves left by knife cuts. Salmonella and E. coli can survive for hours on dry surfaces and pose a risk if boards aren’t cleaned properly.

    Protect yourself: Use separate boards for raw meat and vegetables. Wash thoroughly with hot, soapy water, rinse well and dry completely. Replace boards that develop deep grooves.

    Tea towels

    Reusable kitchen towels quickly become germ magnets. You use them to dry hands, wipe surfaces and clean up spills – often without washing them frequently enough.

    Research shows that E. coli and salmonella can live on cloth towels for hours.

    Protect yourself: Use paper towels when possible, or separate cloth towels for different jobs. Wash towels regularly in hot water with bleach or disinfectant.

    Mobile phones

    Phones go everywhere with us – including bathrooms – and we touch them constantly. Their warmth and frequent handling make them ideal for bacterial contamination.

    Research shows phones can carry harmful bacteria, including Staphylococcus aureus.

    Protect yourself: Avoid using your phone in bathrooms and wash your hands often. Clean it with a slightly damp microfibre cloth and mild soap. Avoid harsh chemicals or direct sprays.

    Toothbrushes near toilets

    Flushing a toilet releases a plume of microscopic droplets, which can land on nearby toothbrushes. A study found that toothbrushes stored in bathrooms can harbour E. coli, Staphylococcus aureus and other microbes.




    Read more:
    Toothbrushes and showerheads covered in viruses ‘unlike anything we’ve seen before’ – new study


    Protect yourself: Store your toothbrush as far from the toilet as possible. Rinse it after each use, let it air-dry upright and replace it every three months – or sooner if worn.

    Bathmats

    Cloth bathmats absorb water after every shower, creating a warm, damp environment where bacteria and fungi can thrive.

    Protect yourself: Hang your bathmat to dry after each use and wash it weekly in hot water. For a more hygienic option, consider switching to a wooden mat or a bath stone: a mat made from diatomaceous earth, which dries quickly and reduces microbial growth by eliminating lingering moisture.

    Pet towels and toys

    Pet towels and toys stay damp and come into contact with saliva, fur, urine and outdoor bacteria. According to the US national public health agency, the Centers for Disease Control and Prevention, pet toys can harbour E. coli, Staphylococcus aureus and Pseudomonas aeruginosa.

    Protect your pet (and yourself): Wash pet towels weekly with hot water and pet-safe detergent. Let toys air dry or use a dryer. Replace worn or damaged toys regularly.

    Shared nail and beauty tools

    Nail clippers, cuticle pushers and other grooming tools can spread harmful bacteria if they’re not properly cleaned. Contaminants may include Staphylococcus aureus – including MRSA, a strain resistant to antibiotics – Pseudomonas aeruginosa, the bacteria behind green nail syndrome, and Mycobacterium fortuitum, linked to skin infections from pedicures and footbaths.

    Protect yourself: Bring your own tools to salons or ask how theirs are sterilised. Reputable salons will gladly explain their hygiene practices.

    Airport security trays

    Airport trays are handled by hundreds of people daily – and rarely cleaned. Research has found high levels of bacteria, including E. coli.

    Protect yourself: After security, wash your hands or use sanitiser, especially before eating or touching your face.

    Hotel TV remotes

    Studies show hotel remote controls can be dirtier than toilet seats. They’re touched by many hands and rarely sanitised.

    Common bacteria include E. coli, enterococcus and Staphylococcus aureus, including MRSA, according to research.

    Protect yourself: Wipe the remote with antibacterial wipes when you arrive. Some travellers even put it in a plastic bag. Always wash your hands after using shared items.

    Bacteria are everywhere, including on the items you use every day. You can’t avoid all germs, and most won’t make you sick. But with a few good habits, such as regular hand washing, cleaning and smart storage, you can help protect yourself and others.

    It’s all in your hands.

    Manal Mohammed does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. From tea towels to TV remotes: eight everyday bacterial hotspots – and how to clean them – https://theconversation.com/from-tea-towels-to-tv-remotes-eight-everyday-bacterial-hotspots-and-how-to-clean-them-260784

  • Worries about the UK economy are justified, but can the government afford to gamble on raising taxes?

    Source: ForeignAffairs4

    Source: The Conversation – UK – By Alan Shipman, Senior Lecturer in Economics, The Open University

    Gloomy economic figures have heaped more pressure on the British government and its promise to improve growth. And if that wasn’t enough, there have also been some stark warnings about public finances and the country’s ability to service its debts.

    All of this has led to a growing expectation that the UK chancellor Rachel Reeves will have to bring in some significant tax hikes later this year, or reduce government spending.

    But both of these options could worsen the long-term economic outlook, by further constraining GDP growth. That was precisely the fate of governments that pursued an agenda of “austerity” – cuts in spending and higher taxes – to tackle the expanded public debt after the financial crisis of 2008.

    It was a strategy that ultimately led to higher public debt. Put simply, when governments spend less, GDP tends to fall. And when GDP falls and a country is less productive, tax revenues go down too.


    To make things even more complicated for the chancellor, the UK government has also widened its debt risk by changing its fiscal rules to acknowledge extra financial responsibilities.

    This adjustment gave the government more financial assets, including student loans and public pension holdings. But it also meant taking on more liabilities, including the pension schemes it would have to bail out if necessary.

    In July 2025, the Office for Budget Responsibility (OBR) identified several other sectors – including universities, housing associations and water companies – whose large debts could become government liabilities in the future.

    A bigger balance sheet automatically means more public financial risk. And climate change further raises these risks, the OBR says, by forcing the government to spend more on dealing with environmental damage and eroding fossil-fuel taxes, which still raise around £24 billion for the Treasury.

    The OBR is also concerned about the rising cost of pensions for an ageing population. In fact, the UK’s system is not particularly expensive, partly due to its reliance on private pensions (funded by employers and employees).

    Yet this reliance brings a different kind of cost for the government, because these private sector schemes have attempted to insulate themselves against the strains of an ageing population, as more employees retire than join the workforce (and as retirees live longer).

    Often this has involved shifting from “defined benefit” plans, which guarantee retirement income, to “defined contribution” plans, where payouts depend on how much members pay in and how well funds are invested.

    But that shift has also made it harder for the government to borrow the money it needs for public spending.

    Defined benefit funds, seeking a steady long-term return, used to be big buyers of UK government bonds (gilts) – the financial assets that the government sells to raise money. In contrast, defined contribution funds invest mainly in equities (company shares), which promise a higher return on investment that can grow pension pots faster.

    UK industrial policy supports this shift from gilts to other assets. The government wants pension funds to invest in innovation and infrastructure as a way of advancing its oft-stated mission of economic growth.

    The growth gamble

    Yet the move by pensions towards equities is steadily deflating demand for new government bonds. This then forces the government to pay higher interest rates to attract enough buyers, often from overseas.

    There is also pressure on the government to relax the “triple lock” on state pensions. This pledge to raise the basic state pension by at least 2.5% every year – maintained by all parties since 2011 – is costing around three times as much as was projected at launch, despite fewer pensioners escaping poverty since it was introduced.
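
    The uprating rule itself is simple – under the widely reported formula, the state pension rises each year by the highest of earnings growth, CPI inflation or 2.5% – which is why persistent inflation makes the pledge expensive. The figures in this sketch are illustrative, not forecasts.

        # The "triple lock" uprating rule, with made-up example figures.
        def triple_lock_rate(earnings_growth: float, cpi_inflation: float) -> float:
            return max(earnings_growth, cpi_inflation, 0.025)

        weekly_pension = 221.20   # illustrative starting value, GBP
        rate = triple_lock_rate(earnings_growth=0.04, cpi_inflation=0.032)
        print(f"Uprated by {rate:.1%} -> £{weekly_pension * (1 + rate):.2f} a week")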

    Overall, inflation and an ageing population have lifted state spending on pensions to around 5% of GDP.

    These pressures all strengthen the view that the government will need another tax-raising budget this year. How else will it pay for its plans for spending on healthcare, housing, infrastructure and defence?

    Reeves sought to assure voters that the £40 billion of tax rises in her October 2024 budget were enough to plug an inherited “black hole”. But she is already struggling to preserve those projections, after a politically painful retreat from welfare changes designed to save £5 billion.

    Hopes that a faster-growing economy would narrow the deficit, by boosting tax receipts and reducing spending requirements, have not been fulfilled.

    Yet calls for significant tax increases – which could dampen growth – may still be resisted.

    Under pressure, she may well consider a compromise such as a “wealth tax” targeting the richest, which would also satisfy the Labour left. Yet the only way to raise really significant extra funds is to increase income tax, VAT or national insurance, which would be extremely risky politically.

    But all economic policy comes with risk. And she may end up sticking with her position and putting her (taxpayers’) money on the hope that today’s deficit will eventually be narrowed by faster growth. Relying on more investment to solve economic problems depends on investors trusting the economic stability of the UK, which is a gamble. But it is a gamble the government may still be willing to take.

    Alan Shipman has received funding from the British Academy/Leverhulme Trust and the Harry Ransom Center, University of Texas at Austin.

    ref. Worries about the UK economy are justified, but can the government afford to gamble on raising taxes? – https://theconversation.com/worries-about-the-uk-economy-are-justified-but-can-the-government-afford-to-gamble-on-raising-taxes-260880

  • Britons are less likely than Americans to invest in stocks – but they may not have the full picture

    Source: ForeignAffairs4

    Source: The Conversation – UK – By Sam Pybis, Senior Lecturer in Economics, Manchester Metropolitan University

    ymgerman/Shutterstock

    UK chancellor Rachel Reeves would like Britons to invest more in stocks – particularly UK stocks – rather than keep their money in cash. She has even urged the UK finance industry to be less negative about investing and highlight the potential gains as well as the risks.

    Stock ownership is important for governments for a variety of reasons. Boosting capital markets can encourage business expansion, job creation and long-term economic growth. It can also give people another source of income in later life, especially as long-term investing can offer greater returns than saving.

    But in the UK, excluding workplace pensions, only 23% of people have invested in the stock market, compared to nearly two-thirds in the US. Survey results suggest that American consumers are generally more comfortable with financial risks.


    And it appears that this greater exposure to market risk translates into closer political engagement. During market shocks driven by US president Donald Trump’s tariff chaos, many Americans tracked headlines – and their portfolios – closely. This contrasts with the UK, where most people keep their savings in safer assets like cash savings accounts or premium bonds.

    If Britons are more risk-averse, media coverage that tends to be noisier when markets fall than when they recover may be having an impact. While concerns regarding market volatility may be valid, they can overshadow the long-term benefits of investing.

    One key opportunity that many British consumers have missed out on is the rise of low-cost, diversified exchange-traded funds (ETFs), which have made investing more accessible and affordable. An ETF allows investors to buy or sell baskets of shares on an exchange. For example, a FTSE100 ETF gives investors exposure to the UK’s top 100 companies without having to buy each one individually.

    This is exactly the kind of long-term, low-cost investing that Reeves appears to be promoting. But should savers be worried about current market volatility – much of it driven by trade tensions and tariff uncertainty? One view, of course, is that volatility is simply part of investing.

    But it could also be argued that big shifts within the space of a single month often look worse than they are. People are likely to be put off by news headlines, which tend to exaggerate the swings in the market.

    Examining daily excess returns in the US stock market from November 2024 to April 2025, I plotted cumulative returns (which show how an investment grows over time by adding up past returns) within each month. April 2025 stands out. Despite experiencing several sharp daily losses, the market rebounded swiftly in the days that followed.
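
    For readers unfamiliar with the calculation, a cumulative return series is just a running total of the daily figures. The returns below are made-up placeholders, not the data used in this analysis.

        # Cumulative (running-total) returns from daily excess returns.
        daily_excess_returns = [0.004, -0.031, -0.058, 0.095, 0.012]  # placeholders

        cumulative, running = [], 0.0
        for r in daily_excess_returns:
            running += r
            cumulative.append(round(running, 4))

        print(cumulative)  # the path an investor actually experienced that month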

    This pattern isn’t new. Historically, markets have shown a remarkable ability to recover from short-term shocks. Yet many potential investors could be deterred by alarming headlines that, while factually accurate, often highlight single-day declines without broader context.

    The reality is that the stock market is frequently a series of short-lived storms. These are volatile, yes, but often followed by calm and recovery.

    Fear and caution

    During market downturns, it’s common for people to try to understand why this time is worse, or to analyse whether this crash is more serious than previous ones.

    The fear these headlines generate could feed into barriers to long-term investing in the UK. And that’s one of the challenges the chancellor faces in encouraging more Britons to invest.

    For those already invested in the stock market, short-term declines are part of the journey. They are risks that can be borne with the understanding that markets tend to recover over time.

    My analysis of daily US stock market data since 1926 shows that after sharp daily drops, the market often rebounds quickly (see pie chart below). In fact, more than a quarter of recoveries occur within just a few days.

    But this resilience is rarely the focus of media coverage. It’s far more common to see headlines reporting that the market is down than to see follow-ups highlighting how quickly it bounced back.

    Research has shown that negative economic information is likely to have a greater impact on public attitudes. For example, a sharp drop in the stock market might dominate front pages, while a steady recovery over the following weeks barely gets a mention. The imbalance reinforces a sense of crisis, even when the broader picture is less bleak.

    Markets went on to recover in April 2025… but did the headlines reflect this?
    David G40/Shutterstock

    Unbalanced reporting can distort perceptions, discouraging potential investors who might otherwise benefit from long-term participation in the market. It appears that American perceptions of their finances are also affected by news coverage in a similar way.

    Over the long term, stock market returns have exceeded the generally lower returns from government bonds by a margin known as the equity risk premium. Economists have long debated why this gap is so large – the “equity risk premium puzzle”. Some observers argue it may narrow in the future. But many others, including the chancellor, believe that investing in the stock market remains a beneficial long-term strategy.
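
    As a back-of-the-envelope illustration of that premium (with invented numbers, not the historical record):

        # The equity risk premium: the long-run gap between average stock
        # returns and average government bond returns. Numbers are invented.
        avg_equity_return = 0.069
        avg_bond_return = 0.011
        print(f"Equity risk premium: {avg_equity_return - avg_bond_return:.1%}")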

    If more people are to benefit from long-term investing, it’s vital to tell the full story. That means not just highlighting when markets fall, but following up on how they recover afterwards.

    Sam Pybis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Britons are less likely than Americans to invest in stocks – but they may not have the full picture – https://theconversation.com/britons-are-less-likely-than-americans-to-invest-in-stocks-but-they-may-not-have-the-full-picture-259485

  • Design and Disability at the V&A is a rich, thought-provoking exhibition

    Source: ForeignAffairs4

    Source: The Conversation – UK – By Laudan Nooshin, Professor of Music, School of Communication and Creativity, City St George’s, University of London

    One of the first things to greet visitors at the V&A’s new Design and Disability exhibition is a striking blue bench by artist Finnegan Shannon titled Do You Want Us Here Or Not? This exhibit is a response to the often inadequate seating in museums, which not only acts as a barrier to accessibility for many people, but is more widely symptomatic of ableist approaches to museum and exhibition design.

    In this case, the invitation to “Please sit here!” sets the tone for the whole exhibition, which also includes a large sensory map of the layout (located at wheelchair level), a tactile map, and QR codes that link to audio description for blind and partially sighted visitors, and also British Sign Language interpretation.


    Aiming to showcase the radical contributions of disabled, deaf and neurodivergent people to design history and contemporary culture from the 1940s until the present, the exhibition goes well beyond this, addressing an impressively wide range of issues around access, disability and exclusion. It also reveals how ableism operates across a range of exclusions, such as race, gender, class and more.

    As the introductory notes point out: “Disabled people past and present have challenged and confronted the imbalance of design in society. This exhibition highlights disabled individuals at the heart of design history … It is both a celebration and a call to action.”

    While the fight for disability justice goes back many decades – also documented in the exhibition – it’s only relatively recently that questions of access and equality have gone beyond the physical. These include a wide range of issues related to neuro-inclusion and sensory access, including calm spaces and sensory maps that indicate noisy areas.

    My own interest in sound in museums has come partly out of research focusing on the role of acoustics in creating accessible spaces, and partly from my own experience of the noise-sensitivity conditions hyperacusis and misophonia. Inclusive sonic design seeks to address how sound operates as a factor of social inclusion and exclusion in places like museums.

    The V&A exhibition comprises three sections: visibility, tools and living. Visibility focuses on design and art as fundamental tools of activism and includes work created as part of disability justice movements over many decades. This section is a stark reminder of the justice and rights that only come about through extensive struggles.

    Tools highlights the extraordinary contribution to design innovation made by disabled people. Living explores stories of disabled people claiming space and imagining the worlds that they want to live in.

    Sections two and three both advocate for the social model of disability in which people are rendered disabled by their environment, something that calls for design solutions (as opposed to the medical model in which people are required to navigate and find solutions to their “problem”).

    The exhibition draws attention to a wide range of physical and sensory exclusions, both in the displays and the design of the space itself. The in-house design team includes staff with personal experience of disability who also worked closely with external partners living with disability.

    There are plenty of exhibits that can be experienced through touch. For partially sighted visitors, there are strong visual contrasts in the wall colours and the edges of displays are lit up. And there are raised edgings on all exhibits for people using a cane – all of which help with navigation.

    There are also quiet areas and plenty of seating. Some of these features are already being incorporated into gallery and exhibition design, and hopefully will soon become standard.

    I particularly liked the way various issues intersect in the exhibition, in which a range of exclusions are set alongside one another: race, hearing impairment, youth exclusion and stammering, for example.

    Other favourites included the B1 Blue Flame rattling football used for blind football, which visitors can pick up, feel, smell, shake and listen to. The Deaf Rave set and Woojer Vest are designed for deaf clubbers and performers and use vibrating tactile discs that amplify sound vibrations.

    The beautiful blanket and pillow entitled Public S/Pacing by Helen Statford offers an invitation to rest, drawing attention to “crip time”, accepting “a different pace to non-disabled norms, challenging conventions of productivity, and resting in radical ways that would actually benefit society at large”.

    The blanket highlights the failures of the design of public spaces to include disabled people, “challenging ableist assumptions with care and visibility”. The reverse of the blanket has a quotation from Rhiannon Armstrong’s Radical Act of Stopping (2016), embroidered by Poppy Nash.

    The exhibition includes many examples of “disability gain” by which design aimed at a particular group of people unintentionally benefits others, too. An example is the smartphone touchscreen, based on technology developed by engineers Wayne Westerman and John Elias as an alternative to the standard keyboard, which Westerman was unable to use due to severe hand pain.

    Initially marketed to people with hand disabilities, the technology was later sold to Apple where it revolutionised mobile phone technology.

    The final panel of the exhibition is titled Label for Missing Objects, an imaginative and fitting way to mark the continuing story of designing a world that works for “every body and every mind”.

    Design and Disability is a rich, thought-provoking and landmark exhibition. Kudos to the V&A – although, given its obvious importance, I wonder why it took this long to host a show dedicated to disabled artists and designers and the wider social impact of their work.

    I very much hope there are plans for the exhibition to tour the UK and beyond, and to become a permanent gallery at the V&A, so that it can inform curation and design work in other museums.

    Design and Disability at the V&A runs until February 15 2026.

    Laudan Nooshin received funding from the AHRC for the project Place-making Through Sound: Designing for Inclusivity and Wellbeing (2023-24).

    ref. Design and Disability at the V&A is a rich, thought-provoking exhibition – https://theconversation.com/design-and-disability-at-the-vanda-is-a-rich-thought-provoking-exhibition-261135