Category: Reportage

  • MIL-OSI Global: How discussion becomes discord: Three avoidable steps on the path to polarization

    Source: The Conversation – Canada – By Emma Lei Jing, Assistant Professor, People and Organizations, Neoma Business School

    From tariffs and sovereignty to politics and conflict, there’s no shortage of controversial topics for us to grapple with. (Shutterstock)

    Many of us have become immersed in debates with family about a contentious political issue, or found ourselves on the other side of a political divide than our friends. In these contentious times, it can be all too easy for courteous debate to devolve into polarized discord.

    From tariffs and sovereignty to politics and conflict, there’s no shortage of controversial topics for us to grapple with. Canada just emerged from a divisive federal election, while in the United States, President Donald Trump signed a record 143 executive orders in his first 100 days in office, many of which touched on contentious topics.

    We recently conducted a study on the debate around harm reduction. Here in Canada, supervised consumption sites is one issue that has generated support and opposition from community members, healthcare and government agencies, police, addiction services and many others. And it has led to some becoming entrenched in polarized positions.

    Our research traced a path which led participants farther apart. Eventually, opposing camps became deeply divided and unwilling to engage with anyone holding different views, and it didn’t happen at random.

    What went wrong, and what set opposing groups on the path to discord?

    Signposts on the path to polarization

    Through an in-depth qualitative case study of addiction services in Alberta, our analysis showed that when the topic of harm reduction was first introduced, arguments were based mostly on evidence and reason.

    Harm reduction proponents pointed to the life-saving benefits of harm reduction and the inadequacies of traditional approaches, whereas opponents talked about the effectiveness of more traditional approaches.

    We saw genuine, and sometimes successful, efforts to persuade those who disagreed to change their minds.

    However, we identified a systematic progression from civil discourse to the formation of echo chambers. From that, we offer ways to steer conversations from developing into irreconcilable echo chambers.

    When emotions rise, people talk less about the pros and cons of an approach and more about what should be the right approach.
    (Shutterstock)

    Phase 1: Emotion deepens the divide

    In the case of the harm reduction debate, an opioid crisis shook Alberta. A steep increase in overdose deaths heightened urgency and intensity around the debate and ushered in more emotionally charged arguments. Before long, a moral component developed in the debate.

    When emotions rise, people talk less about the pros and cons of an approach and more about what should be the right approach.

    Disagreements escalate as the discussion veers away from logic and arguments become more morally and emotionally charged. This heightened a sense of being right, and the opposite view being wrong, provides fertile ground for polarization.

    This phase is where there is the greatest opportunity to change course. Be aware of the rising emotional energy. If the debate is getting heated, avoid framing arguments in terms of what’s right and wrong and stay focused on evidence and reason.

    Phase 2: Heightened hostility

    This is where things get personal.

    As emotional rhetoric takes hold, participants pull farther apart and animosity grows. They start characterizing people on either side of the debate as morally right or wrong.

    Just as we saw in phase one, a watershed event deepened the divide in Alberta. A newly elected provincial government took a distinctly different approach than the previous government, leaving advocates on one side feeling vindicated and their opponents shocked, dismayed and angry.

    In phase two, the issue itself takes a back seat, and participants started blaming their opponents for making matters worse. There is less dialogue about an approach being right or wrong, and more about the people involved being right or wrong.

    This is possibly the last chance to turn things around. At this point, we should be mindful about the importance of neutral and respectful language. One way to do this is by avoiding making things personal, such as blaming one another for a situation.

    Disagreements escalate as a discussion veers away from logic and arguments become more morally and emotionally charged.
    (Shutterstock)

    Phase 3: Disdain, disgust and self-isolation

    By now, logical arguments have been abandoned, replaced with intense expressions of disgust and disdain for opponents. No longer interested in persuading the other side, the focus shifts to solidifying a position as both sides withdraw from debate and only engage with like-minded people.

    In our study, this phase, like the previous phases, was brought on by a distinct event. A second provincial election ushered in an abrupt reversal in leadership and harm reduction policies. Any attempts to work together were abandoned and participants started entrenching themselves in self-constructed echo chambers.

    In this most devastating and possibly irreparable phase, we noted that the rhetoric wasn’t even about what was right or wrong anymore. It was more about expressing disgust toward one another, leaving no room for facts, evidence or even different opinions, firmly establishing two entrenched sides.

    Moral convictions and emotions play a critical role in escalating disagreements. The damage caused when civil arguments are subtly replaced with moral convictions and moral emotions can impact how we co-operate and interact with one another, even in our day-to-day conversations with families and friends.

    In the context of addiction services in Alberta, there has now been an extended period of “cooling down” where both sides are taking a wait-and-see approach. We suggest that this is creating a climate where an engaged discussion with fact-based arguments can again be possible.

    But even better would be a more proactive approach where participants of a debate recognize the warning signs and take actions early.

    Trish Reay received funding from the Social Sciences and Humanities Research Council that supported this research.

    Elizabeth Goodrick, Emma Lei Jing, and Jo-Louise Huq do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. How discussion becomes discord: Three avoidable steps on the path to polarization – https://theconversation.com/how-discussion-becomes-discord-three-avoidable-steps-on-the-path-to-polarization-257709

    MIL OSI – Global Reports

  • MIL-OSI Global: Alzheimer’s: bacteria that cause stomach ulcers may protect the brain, our new research indicates

    Source: The Conversation – UK – By Gefei Chen, Associate professor, Karolinska Institutet

    _H pylori_ is more commonly known as the culprit of stomach infections. Corona Borealis Studio/ Shutterstock

    Every three seconds, someone in the world develops dementia. Alzheimer’s disease is the most common form of dementia, accounting for between 60% and 70% of all cases.

    Although scientists have made significant progress in understanding the disease, there’s still no cure. That’s partly because Alzheimer’s disease has multiple causes – many of which are still not fully understood.

    Two proteins which are widely believed to play central roles in Alzheimer’s disease are amyloid-beta and tau. Amyloid-beta forms sticky plaques on the outside of brain cells. This disrupts communication between neurons. Tau accumulate inside brain cells, where it twists into tangles. This ultimately leads to cell death. These plaques and tangles are the hallmark features of Alzheimer’s disease.

    This understanding, known as the amyloid hypothesis, has shaped research for decades and led to treatments that aim to clear amyloid from the brain. Monoclonal antibody drugs have been approved in recent years for this purpose.

    But they only work in the early stages of the disease. They do not reverse existing damage and may cause serious side-effects such as brain swelling and bleeding. Most importantly, they only target amyloid-beta, leaving tau untreated.

    But in a surprise twist, recent research published by my colleagues and me has found that a protein from Helicobacter pylori – a bacteria best known for causing stomach ulcers – can block the toxic buildup of both amyloid-beta and tau. This unexpected finding may point to a new strategy for the fight against Alzheimer’s disease.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Our discovery began with a very different question. We were initially studying how H pylori interacts with other microbes. Some bacteria form protective communities called biofilms, which rely on amyloid assemblies (similar in structure to the plaques which form in the brain) as a structural scaffold. This led us to wonder: could H pylori influence bacterial biofilms by also interfering with amyloid assemblies in humans?

    We turned our attention to a well-known H pylori protein called CagA. While half of the protein is known to trigger harmful effects in human cells (referred to as the C-terminal region), the other half (the protein’s N-terminal region) may have protective properties. To our surprise, this N-terminal fragment, called CagAN, dramatically reduced the formation of both bacterial amyloids and biofilms in the bacterial species Escherichia coli and Pseudomonas.

    Encouraged by these results, we tested whether the same protein fragment could block the buildup of human amyloid-beta proteins. To do this, we incubated amyloid-beta molecules in the lab: some were treated with CagAN, while others were left as normal. We then tracked amyloid formation using a fluorescence reader and an electron microscope.

    The protein derived from H pylori blocked amyloid-beta plaques from forming.
    Signal Scientific Visuals/ Shutterstock

    We found that treated samples had far less amyloid clump formation during the testing period. Even at very low concentrations, CagAN almost completely stopped amyloid-beta from forming amyloid aggregates.

    To understand how CagAN worked, we used nuclear magnetic resonance (which allows us to look at how molecules interact with each other) to examine how the protein interacts with amyloid-beta. We also used computer modelling to investigate possible mechanisms. Remarkably, CagAN also blocked tau aggregation – suggesting it acts on multiple toxic proteins involved in Alzheimer’s disease.

    Blocking the disease

    Our study has shown us that a fragment from the Helicobacter pylori protein can effectively block the buildup of the two proteins that are implicated in Alzheimer’s disease. This suggests that bacterial proteins – or drugs modelled after them – could someday block the earliest signs of Alzheimer’s.

    What’s more, the benefits may extend beyond Alzheimer’s disease.

    In additional experiments, the same bacterial fragment blocked the aggregation of IAPP (a protein involved in type 2 diabetes) and alpha-synuclein (linked to Parkinson’s disease). All of these conditions are driven by the accumulation of toxic amyloid aggregates.

    That a single bacterial fragment could interfere with so many proteins suggests exciting therapeutic potential. Though these conditions affect different parts of the body, they may be linked through cross-talk between amyloid proteins – a shared mechanism that CagAN could help disrupt.

    Of course, it’s important to be clear: this research is still at an early stage. All of our experiments were conducted in lab settings, not yet in animals or humans. Still, the findings open a new path.

    Our study also uncovered the underlying mechanisms for how CagAN blocked the amyloid-beta and tau from forming amyloid aggregates. One of the ways in which CagAN did this was by preventing the proteins from coming together to form clumps. They also prevented small, premature amyloid aggregates from forming as well. In the future, we will continue the detailed mechanism study and evaluate the effects in animal models.

    These results also prompt a question: could H pylori, long seen only as harmful, also have a protective side? Some studies have hinted at a connection between H pylori infection and Alzheimer’s disease, though the relationship remains unclear. Our discovery adds a new layer to this discussion, suggesting that part of H pylori may actually interfere with the molecular events that lead to Alzheimer’s disease.

    That means in the future, we may need to take a more precise and personalised approach. Instead of aiming to eliminate H pylori completely with antibiotics, it might be more important to understand, in different biological contexts, which parts of the bacterium are harmful, and which might actually be beneficial.

    As medicine continues to move toward greater precision, the goal may no longer be to wipe out every microbe, but to understand how some of them might work with us rather than against us.

    Gefei Chen is also affiliated with Uppsala University.

    ref. Alzheimer’s: bacteria that cause stomach ulcers may protect the brain, our new research indicates – https://theconversation.com/alzheimers-bacteria-that-cause-stomach-ulcers-may-protect-the-brain-our-new-research-indicates-259018

    MIL OSI – Global Reports

  • MIL-OSI Global: Alzheimer’s: bacteria that cause stomach ulcers may protect the brain, our new research indicates

    Source: The Conversation – UK – By Gefei Chen, Associate professor, Karolinska Institutet

    _H pylori_ is more commonly known as the culprit of stomach infections. Corona Borealis Studio/ Shutterstock

    Every three seconds, someone in the world develops dementia. Alzheimer’s disease is the most common form of dementia, accounting for between 60% and 70% of all cases.

    Although scientists have made significant progress in understanding the disease, there’s still no cure. That’s partly because Alzheimer’s disease has multiple causes – many of which are still not fully understood.

    Two proteins which are widely believed to play central roles in Alzheimer’s disease are amyloid-beta and tau. Amyloid-beta forms sticky plaques on the outside of brain cells. This disrupts communication between neurons. Tau accumulate inside brain cells, where it twists into tangles. This ultimately leads to cell death. These plaques and tangles are the hallmark features of Alzheimer’s disease.

    This understanding, known as the amyloid hypothesis, has shaped research for decades and led to treatments that aim to clear amyloid from the brain. Monoclonal antibody drugs have been approved in recent years for this purpose.

    But they only work in the early stages of the disease. They do not reverse existing damage and may cause serious side-effects such as brain swelling and bleeding. Most importantly, they only target amyloid-beta, leaving tau untreated.

    But in a surprise twist, recent research published by my colleagues and me has found that a protein from Helicobacter pylori – a bacteria best known for causing stomach ulcers – can block the toxic buildup of both amyloid-beta and tau. This unexpected finding may point to a new strategy for the fight against Alzheimer’s disease.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Our discovery began with a very different question. We were initially studying how H pylori interacts with other microbes. Some bacteria form protective communities called biofilms, which rely on amyloid assemblies (similar in structure to the plaques which form in the brain) as a structural scaffold. This led us to wonder: could H pylori influence bacterial biofilms by also interfering with amyloid assemblies in humans?

    We turned our attention to a well-known H pylori protein called CagA. While half of the protein is known to trigger harmful effects in human cells (referred to as the C-terminal region), the other half (the protein’s N-terminal region) may have protective properties. To our surprise, this N-terminal fragment, called CagAN, dramatically reduced the formation of both bacterial amyloids and biofilms in the bacterial species Escherichia coli and Pseudomonas.

    Encouraged by these results, we tested whether the same protein fragment could block the buildup of human amyloid-beta proteins. To do this, we incubated amyloid-beta molecules in the lab: some were treated with CagAN, while others were left as normal. We then tracked amyloid formation using a fluorescence reader and an electron microscope.

    The protein derived from H pylori blocked amyloid-beta plaques from forming.
    Signal Scientific Visuals/ Shutterstock

    We found that treated samples had far less amyloid clump formation during the testing period. Even at very low concentrations, CagAN almost completely stopped amyloid-beta from forming amyloid aggregates.

    To understand how CagAN worked, we used nuclear magnetic resonance (which allows us to look at how molecules interact with each other) to examine how the protein interacts with amyloid-beta. We also used computer modelling to investigate possible mechanisms. Remarkably, CagAN also blocked tau aggregation – suggesting it acts on multiple toxic proteins involved in Alzheimer’s disease.

    Blocking the disease

    Our study has shown us that a fragment from the Helicobacter pylori protein can effectively block the buildup of the two proteins that are implicated in Alzheimer’s disease. This suggests that bacterial proteins – or drugs modelled after them – could someday block the earliest signs of Alzheimer’s.

    What’s more, the benefits may extend beyond Alzheimer’s disease.

    In additional experiments, the same bacterial fragment blocked the aggregation of IAPP (a protein involved in type 2 diabetes) and alpha-synuclein (linked to Parkinson’s disease). All of these conditions are driven by the accumulation of toxic amyloid aggregates.

    That a single bacterial fragment could interfere with so many proteins suggests exciting therapeutic potential. Though these conditions affect different parts of the body, they may be linked through cross-talk between amyloid proteins – a shared mechanism that CagAN could help disrupt.

    Of course, it’s important to be clear: this research is still at an early stage. All of our experiments were conducted in lab settings, not yet in animals or humans. Still, the findings open a new path.

    Our study also uncovered the underlying mechanisms for how CagAN blocked the amyloid-beta and tau from forming amyloid aggregates. One of the ways in which CagAN did this was by preventing the proteins from coming together to form clumps. They also prevented small, premature amyloid aggregates from forming as well. In the future, we will continue the detailed mechanism study and evaluate the effects in animal models.

    These results also prompt a question: could H pylori, long seen only as harmful, also have a protective side? Some studies have hinted at a connection between H pylori infection and Alzheimer’s disease, though the relationship remains unclear. Our discovery adds a new layer to this discussion, suggesting that part of H pylori may actually interfere with the molecular events that lead to Alzheimer’s disease.

    That means in the future, we may need to take a more precise and personalised approach. Instead of aiming to eliminate H pylori completely with antibiotics, it might be more important to understand, in different biological contexts, which parts of the bacterium are harmful, and which might actually be beneficial.

    As medicine continues to move toward greater precision, the goal may no longer be to wipe out every microbe, but to understand how some of them might work with us rather than against us.

    Gefei Chen is also affiliated with Uppsala University.

    ref. Alzheimer’s: bacteria that cause stomach ulcers may protect the brain, our new research indicates – https://theconversation.com/alzheimers-bacteria-that-cause-stomach-ulcers-may-protect-the-brain-our-new-research-indicates-259018

    MIL OSI – Global Reports

  • MIL-OSI Global: Nineteen Eighty-Four might have been inspired by George Orwell’s fear of drowning

    Source: The Conversation – UK – By Nathan Waddell, Associate Professor in Twentieth-Century Literature, University of Birmingham

    George Orwell had a traumatic relationship with the sea. In August 1947, while he was writing Nineteen Eighty-Four (1949) on the island of Jura in the Scottish Hebrides, he went on a fishing trip with his young son, nephew and niece.

    Having misread the tidal schedules, on the way back Orwell mistakenly piloted the boat into rough swells. He was pulled into the fringe of the Corryvreckan whirlpool off the coasts of Jura and Scarba. The boat capsized and Orwell and his relatives were thrown overboard.

    It was a close call – a fact recorded with characteristic detachment by Orwell in his diary that same evening: “On return journey today ran into the whirlpool & were all nearly drowned.” Though he seems to have taken the experience in his stride, this may have been a trauma response: detachment ensures the ability to persist after a near-death experience.

    We don’t know for sure if Nineteen Eighty-Four was influenced by the Corryvreckan incident. But it’s clear that the novel was written by a man fixated on water’s terrifying power.


    This article is part of Rethinking the Classics. The stories in this series offer insightful new ways to think about and interpret classic books and artworks. This is the canon – with a twist.


    Nineteen Eighty-Four isn’t typically associated with fear of death by water. Yet it’s filled with references to sinking ships, drowning people and the dread of oceanic engulfment. Fear of drowning is a torment that social dissidents might face in Room 101, the torture chamber to which all revolutionaries are sent in the appropriately named totalitarian state of Oceania.

    An early sequence in the novel describes a helicopter attack on a ship full of refugees, who are bombed as they fall into the sea. The novel’s protagonist, Winston Smith, has a recurring nightmare in which he dreams of his long-lost mother and sister trapped “in the saloon of a sinking ship, looking up at him through the darkening water”.

    George Orwell in 1943.
    National Union of Journalists

    The sight of them “drowning deeper every minute” takes Winston back to a culminating moment in his childhood when he stole chocolate from his mother’s hand, possibly condemning his sister to starvation. These watery graves imply that Winston is drowning in guilt.

    The “wateriness” of Nineteen Eighty-Four may have another interesting historical source. In his essay My Country Right or Left (1940), Orwell recalls that when he had just become a teenager he read about the “atrocity stories” of the first world war.

    Orwell states in this same essay that “nothing in the whole war moved [him] so deeply as the loss of the Titanic had done a few years earlier”, in 1912. What upset Orwell most about the Titanic disaster was that in its final moments it “suddenly up-ended and sank bow foremost, so that the people clinging to the stern were lifted no less than 300 feet into the air before they plunged into the abyss”.

    Sinking ships and dying civilisations

    Orwell never forgot this image. Something similar to it appears in his novel Keep the Aspidistra Flying (1936) where the idea of a sinking passenger liner evokes the collapse of modern civilisation, just as the Titanic disaster evoked the end of Edwardian industrial confidence two decades beforehand.

    The Titanic disaster had a profound impact on Orwell.
    Wiki Commons

    References to sinking ships and drowning people appear at key moments in many other works by Orwell, too. But did the full impact of the Titanic surface in Nineteen Eighty-Four?

    Sinking ships were part of Orwell’s descriptive toolkit. In Nineteen Eighty-Four, a novel driven by memories of unsympathetic water, they convey nightmares. Filled with references to water and liquidity, it’s one of the most aqueous novels Orwell produced, relying for many of its most shocking episodes on imagery of desperate people drowning or facing imminent death on sinking sea craft.

    The thought of trapped passengers descending into the depths survives in Winston’s traumatic memories of his mother and sister, who, in the logic of his dreams, are alive inside a sinking ship’s saloon.


    Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


    There’s no way to prove that the Nineteen Eighty-Four is “about” the Titanic disaster, but in the novel, and indeed in Orwell’s wider body of work, there are too many tantalising hints to let the matter rest.

    Thinking about fear of death by water takes us into Orwell’s terrors just as it takes us into Winston’s, allowing readers to see the frightened boy inside the adult man and, indeed, inside the author who dreamed up one of the 20th century’s most famous nightmares.

    Beyond the canon

    As part of the Rethinking the Classics series, we’re asking our experts to recommend a book or artwork that tackles similar themes to the canonical work in question, but isn’t (yet) considered a classic itself. Here is Nathan Waddell’s suggestion:

    As soon as the news broke of the Titanic’s sinking, literary works of all shapes and sizes started to appear in tribute to the disaster and its victims. As the century went on, and as research into the tragedy developed (particularly after the ships wreckage was discovered in 1985), more nuanced literary responses to the sinking became possible.

    One such response is Beryl Bainbridge’s Whitbread-prize-winning novel Every Man for Himself (1996). It reimagines the disaster from the first-person perspective of an imaginary character, Morgan, the fictional nephew of the historically real financier J. P. Morgan (who was due to sail on the Titanic but changed plans before it sailed).

    This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

    Nathan Waddell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Nineteen Eighty-Four might have been inspired by George Orwell’s fear of drowning – https://theconversation.com/nineteen-eighty-four-might-have-been-inspired-by-george-orwells-fear-of-drowning-251289

    MIL OSI – Global Reports

  • MIL-OSI Global: Why Israel-Iran tensions might not raise prices at the pump as much as feared (for now)

    Source: The Conversation – UK – By Adi Imsirovic, Lecturer in Energy Systems, University of Oxford

    GreenOak/Shutterstock

    The unexpected attack by Israel on Iran, a major oil-producing nation, may undermine anaemic global economic growth and hinder central banks’ ability to cope in an already uncertain market.

    Iran exports up to 2 million barrels of oil and refined petroleum products per day (million barrels per day – mbd). Due to long-standing sanctions, most of this oil is sold to China at discounted prices.

    Normally, a sudden loss of the Iranian exports (equivalent to around 2% of global oil supply) would trigger panic. But Opec (the Organisation of the Petroleum Exporting Countries) is in the process of reversing the production cuts imposed early in the COVID pandemic (and subsequently). This leaves the organisation with an unusually large spare capacity of at least four million barrels per day, most of which is held by Saudi Arabia (up to 3.5 million) and the UAE (about one million).

    On top of that, the International Energy Agency (IEA) holds more than 1.2 billion barrels of emergency reserves across OECD countries, ready to be deployed if needed. China, too, has significant reserves, though the line between its commercial and strategic stocks is less clear.

    Additionally, some 40 million barrels of Iranian oil are stranded aboard anchored ships near China, unsold due to declining industrial demand and electric vehicles hitting petrol consumption. In May, China’s refinery throughput fell 1.8% year-on-year, with no signs of a swift rebound. What’s more, the IEA is expecting global oil production to exceed 1.8 mbd, compared to its earlier projection of only 0.72 mbd, leaving a massive surplus of supply over demand.

    China has proven to be an opportunistic buyer. It did not buy the excess Iranian oil supplies at US$65 (£48) a barrel earlier this year, and whether it buys at US$75 (at the time of writing) or higher, may be a signal of how seriously it views the Middle East tensions. Meanwhile, other Asian importers have been quick to secure prompt shipments from west Africa, and have eyes on US supplies as well.

    Thanks to this surplus capacity and stagnant demand, the oil market’s reaction has been more muted than many feared. Prices briefly spiked by US$10 but have since eased. It appears that the market is assessing whether the hostilities will escalate. If so, the impact on energy prices and inflation could be more significant.

    A conflict of convenience

    It remains somewhat unclear why Israeli prime minister Benjamin Netanyahu chose this moment to strike Iran, especially in the middle of peace negotiations between Iran and the United States. In a recent interview, former Israeli leader Ehud Barak admitted that even a full-scale attack would only delay Iran’s nuclear ambitions by weeks or months at best, with US support.

    Diplomacy, then, may remain the more effective route. This was the rationale behind the Iran nuclear deal brokered under US president Barack Obama, a deal later dismantled by Trump under pressure from Netanyahu.




    Read more:
    Why are the US and Israel not on the same page over how to deal with Iran? Expert Q&A


    So, Netanyahu’s endgame might be political survival and diverting attention from the humanitarian catastrophe in Gaza.

    If Iran feels sufficiently cornered, it may retaliate by shutting down the Strait of Hormuz – a strategic chokepoint through which up to 20 million barrels of oil pass daily. A lot of that oil can be diverted through alternative supply routes such as a large (6 mbd) Saudi East-West pipeline leading to the Red Sea. There is also the UAE pipeline, which avoids the Strait of Hormuz and leads to the port of Fujairah, in the Gulf of Oman.

    Iran could close off the Strait of Hormuz, causing widespread disruption.
    CeltStudio/Shutterstock

    Nevertheless, the increased risk and higher shipping costs would certainly result in much higher prices at the pump. The cost of insurance for ships travelling through the Strait of Hormuz have jumped 60% since the start of the conflict. That, combined with the broader economic fallout, could have global repercussions.

    The World Bank recently downgraded its global growth forecast to 2.3% for 2025 – nearly half a percentage point below previous estimates. While a worldwide recession is not yet predicted, the bank warned that growth this decade could be the slowest since the 1960s.

    Among the leading culprits is Trump’s tariff policy, which has strained global trade, reduced efficiency and effectively imposed a tax on consumers both in the US and elsewhere. The fear of inflation has led to rising long-term bond yields.

    Expectations of higher inflation and high bond yields, in turn, constrain central banks from stimulating the economy by cutting interest rates. This is a key tool used by the US Federal Reserve to influence the cost of borrowing throughout the US economy and thus attempt to stimulate economic activity.

    And in spite of the recent US-UK trade agreement, the deal includes a 10% tariff on imports from the UK – with steel still at 25%.

    UK economic growth had already slipped into negative territory before the conflict began. Now, with the added strain of geopolitical instability, households are bracing for higher petrol prices at the pump, sluggish wage growth and rising unemployment. The conflict in the Middle East may not have sparked a global oil crisis yet, but it certainly won’t improve anyone’s cost of living.

    Adi Imsirovic is affiliated with Center for Strategic and International Studies (CSIS) in Washington.

    ref. Why Israel-Iran tensions might not raise prices at the pump as much as feared (for now) – https://theconversation.com/why-israel-iran-tensions-might-not-raise-prices-at-the-pump-as-much-as-feared-for-now-259211

    MIL OSI – Global Reports

  • MIL-OSI Global: England is expanding free school meals – here’s what could happen if they were given to all children

    Source: The Conversation – UK – By Sanghamitra Bandyopadhyay, Professor of Development Economics , Queen Mary University of London

    Children in Jharkhand state, India, eating their midday meal at school. Mohammad Shahnawaz/Shutterstock

    The UK government has announced an extension of free school meals in England to all children whose parents receive universal credit, in order to address child hunger and poverty.

    The government claims that half a million more pupils will now have access to school lunches for free. The total number of children registered for free school meals in England is currently about 2.2 million, or about 26% of the total school population. In addition, all children in infant school, aged between four and seven, are entitled to receive a hot lunch at school.

    But given the high rates of child poverty in the UK, and the value a decent meal provides, there is evidence that free school meals for all children could provide significant benefits in England.

    The provision in Scotland and Wales is more generous: free school meals for children from primary one to five in Scotland (ages four to ten) and for all children in primary school in Wales. But other countries make provision for all children, in both primary and secondary education, to receive meals at school.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Child poverty in the UK continues to be historically high. In 2023-24, 3.4 million children – 23% of all children in the UK – were in relative income poverty. Incidence of child poverty is particularly acute in cities.

    In the UK, the COVID-19 pandemic and Brexit resulted in a rise in unemployment. This in turn led to widespread instances of extreme poverty and child hunger. The lack of active policies in the UK to address child hunger, malnourishment and increasing childhood obesity has been widely criticised by the British Medical Association.

    The UK’s experience of high levels of child poverty is in stark contrast with most other high-income countries. The UK ranked 37th out of 39 by child income poverty, ahead only of Turkey and Colombia, in 2023. In comparison, the UK’s adult poverty rate is close to the OECD average, ranking 23rd out of 39 high-income countries. This implies that child poverty can be high even if adult poverty levels are relatively low.

    Global policy choices

    Providing nutritious free school meals is a fundamental cornerstone of government policy to ensure child welfare. It’s used as a poverty alleviation measure all over the world. Almost half of the world’s school meals are free, feeding 418 million children.

    Many of these programmes are based in developing countries. The world’s largest free school meal programme runs in India: the “mid-day meal scheme” feeds 125 million children aged six to 14 and costs the equivalent of £2 billion each year. Similar successful programmes are run in Brazil and some African countries, with another having recently been launched in Indonesia.

    But schemes in Finland and Sweden also cover almost all school children.

    There is a growing body of global evidence on the wider beneficial effects of free school meals on child poverty. Free school meals in India have resulted in higher cognitive outcomes. They have increased school enrolment and school attendance, and thus educational outcomes.

    They have also been found to have an intergenerational effect. In India, fewer shorter children were born to women who had benefited from the country’s school food programme.

    Nutritionally balanced school meals have proven health benefits.
    Pixel-Shot/Shutterstock

    Nutritionally balanced children’s school meals are also associated with lower incidence of obesity. Studies in the US and UK, for example, have shown universal provision is linked to lower obesity rates.

    Research into the Swedish scheme has found that children who have free school meals with prescribed nutritional standards not only have higher educational attainment and better health outcomes in adulthood, but also higher incomes. Children from families in the lowest income quartile in Sweden who received free school meals for nine years increased their lifetime income by 6%.

    Other tangible economic benefits include significant reductions in potential healthcare costs as a result of malnutrition and non-communicable diseases. A 2025 European Union report estimates the return from investment in school meal programmes is at least sevenfold, up to a possible €34 for every €1 spent.

    While there is rich scientific and economic evidence that universal free school meals are immensely beneficial, a child’s access to nutrition and government support to obtain nourishment is also a fundamental human right. The School Meals Coalition is an international consortium of 108 countries to achieve free school meals for all by 2030. The UK is one of the few advanced countries not signed up to it.

    Sanghamitra Bandyopadhyay does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. England is expanding free school meals – here’s what could happen if they were given to all children – https://theconversation.com/england-is-expanding-free-school-meals-heres-what-could-happen-if-they-were-given-to-all-children-258337

    MIL OSI – Global Reports

  • MIL-OSI Global: The UK’s warm homes plan has been saved – here’s how Labour can learn from a decade of failed insulation schemes

    Source: The Conversation – UK – By Madeleine Pauker, PhD Candidate, Science Policy Research Unit, University of Sussex

    Natalia Nosova/Shutterstock

    The UK government confirmed in its June 2025 spending review that it will honour its manifesto pledge and not cut the £13.2 billion warm homes plan, as had been speculated. The money will be spent over the next four years, marking a significant increase on funding for energy-related home upgrades compared to that offered by the previous government.

    The plan encompasses several programmes for cutting energy bills and reducing carbon emissions by making homes easier to heat and replacing gas boilers and other fossil fuel heating systems. Low-income homeowners and renters will receive grants for “retrofit” upgrades such as insulation, solar panels and heat pumps through schemes delivered by energy companies and councils.

    All homeowners can benefit from the boiler upgrade scheme, which offers £7,500 towards the cost of a heat pump, and those living in the least energy efficient homes can get free loft or cavity wall insulation. Councils and housing associations will also receive funding to make upgrades to their properties.

    The British government has provided some form of financial support for insulation and other energy efficiency measures since the 1970s. Millions of homes were insulated over the 2000s, but over the last decade support has been cut and the number of households taking up grants has collapsed. Programmes have also not been designed to provide comprehensive, high-quality retrofits.

    Over the next few years, the warm homes plan will significantly increase the amount of funding available for retrofitting homes. This is an opportunity to reshape the UK’s strategy for fixing its cold, leaky housing stock, reduce reliance on gas heating and lower household energy bills.

    How support for retrofitting has evolved

    For the last 30 years, energy companies have been required to provide insulation and other energy efficiency measures to households. These programmes are funded by levies on energy bills rather than public spending.

    From 1994 to 2015 any homeowner, landlord, or renter could receive energy efficiency measures such as insulation from energy companies. Additional publicly funded schemes sought to eliminate fuel poverty and targeted low-income households. This approach proved broadly successful throughout the 2000s and early 2010s. At its peak in 2008-11, one in five UK households received insulation, more efficient boilers or another form of support.

    However, these schemes were never designed to provide the comprehensive retrofits that modern climate targets demand. Ultimately, they failed to take a whole-house approach that could address multiple energy-efficiency issues at once.

    A pivotal moment came in 2015 when the Conservative-Liberal Democrat coalition government removed universal eligibility from supplier-led schemes and shaved £30 off annual household bills. Low-income and vulnerable households, which had already constituted a priority group under energy company-led schemes, became the only demographic eligible for support. Following this decision – plus other modifications to the programmes – the number of insulation measures installed each year fell by about 70%.

    In 2023, the Conservative government of Rishi Sunak introduced the Great British insulation scheme which offers free cavity wall or loft insulation to homes registered given an efficiency rating of D or below (ratings run from A for the most efficient to G for the least). The universal boiler upgrade scheme was also introduced.

    Meanwhile, the energy company obligation, which provides a greater range of measures, including several types of insulation, heat pumps and solar panels, remains restricted to low-income and vulnerable households.

    However, due to complex eligibility requirements, low public awareness and a lack of trust, among other reasons, most of the financial support available is not reaching households and the number of homes receiving upgrades has not recovered.

    Heat pumps can get homes off gas, but installations trail boiler fittings.
    Martin Bergsma/Shutterstock

    The problems with current schemes

    While reinstating universal support is positive, the boiler upgrade scheme only covers about half the cost of installing a heat pump, making it a subsidy for wealthier households that can afford to foot the rest of the bill.

    Energy bill levies, which fund the energy company obligation, disproportionately burden poorer households, which spend a higher proportion of income on energy. At the same time, while everyone continues to pay for the programme via their energy bills, restrictive eligibility requirements leave most households who cannot cover retrofit costs independently without support.

    The scheme also incentivises companies and their subcontractors to meet the scheme’s carbon reduction requirements at the lowest possible cost. This discourages whole-house retrofits, more complex insulation measures, repairs prior to retrofit (such as removing damp and mould or repairing roofs) and work in certain types of homes.

    Resulting insulation failures have damaged public confidence in retrofit programmes. These problems highlight the mismatch between a market-driven approach and the comprehensive changes necessary to make homes healthier to live in and cheaper to heat, as well as meet climate targets and restore public trust.

    The case for replacing supplier-led schemes with public alternatives remains compelling, despite the government’s supposed fiscal constraints. Rather than relying on energy companies and their subcontractors for complex home interventions, councils could be empowered to guide households through the retrofit process and combine homes in area-based schemes.

    The warm homes plan includes funding for councils to retrofit low-income households, including those earning less than £36,000, receiving means-tested benefits, or living in certain postcodes. But the scale of the programme is much smaller than the energy company obligation, although investment will increase over the next few years.

    This is still a narrow approach to improve the country’s housing that focuses on low-income households, though most middle-income households cannot afford the cost of a retrofit either. The budget for other home improvements remains minimal – homes in poor condition are likely to be missed.

    Details of how most of the warm homes plan funding will be spent is due to be revealed in autumn 2025. There is still time for the government to choose a more progressive approach.

    An alternative would be to expand grant-funded upgrades for low-income homeowners and offer low-interest, long-term, property-linked loans for middle-income households. This could be designed to cover whole-house retrofits, encompassing insulation, ventilation, heat pumps, solar panels and other measures, as well as repairs.

    There are also emerging plans from consultancies working with local governments to develop area-based retrofit programmes that blend public and private investment, aiming to attract investment from pension funds to shift the cost of retrofitting away from households.

    However, it remains unclear whether such models will offer sufficiently competitive returns and low enough risk to appeal to institutional investors – and the UK cannot afford to wait for private capital to materialise when nationwide retrofitting is urgently needed.


    Don’t have time to read about climate change as much as you’d like?

    Get a weekly roundup in your inbox instead. Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. Join the 45,000+ readers who’ve subscribed so far.


    Madeleine Pauker receives funding from the Energy Demand Research Centre, funded by the Engineering and Physical Sciences Research Council and the Economic and Social Research Council.

    ref. The UK’s warm homes plan has been saved – here’s how Labour can learn from a decade of failed insulation schemes – https://theconversation.com/the-uks-warm-homes-plan-has-been-saved-heres-how-labour-can-learn-from-a-decade-of-failed-insulation-schemes-258719

    MIL OSI – Global Reports

  • MIL-OSI Global: Wandering uteruses and far-reaching tubes: the surprising mobility of the female reproductive tract

    Source: The Conversation – UK – By Michelle Spear, Professor of Anatomy, University of Bristol

    The ancient wandering womb theory suggested that many ailments in women were caused by the uterus becoming dislodged and roaming the body in search of moisture.

    According to these theories, the uterus could roam freely around the body, pressing on the liver or lungs and causing symptoms such as breathlessness, fainting and emotional distress – what was later termed “hysteria”, from the Greek hystera (uterus).

    Treatments included fumigating the lower body with sweet-smelling herbs to entice the uterus back downward, exposing the nose to pungent odours to drive it away from the chest and adding weights to the abdomen to prevent the uterus from rising. Marriage and pregnancy were often prescribed as cures, under the belief that a busy uterus was a happy, well-behaved one.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    In the 18th century, advances in anatomy and dissection began to disprove the notion that the uterus could physically roam. However, the legacy of the wandering womb lived on well into the 20th century in the diagnosis of “female hysteria”, an unevidenced catch-all for a multitude of symptoms.

    While the uterus doesn’t float around like a balloon in the chest cavity, it does change position. And this matters. Mobility is essential for fertility, menstruation, pregnancy and pelvic health.

    How much does the uterus move?

    The uterus sits between the bladder and the rectum, suspended by a series of ligaments. These don’t hold it immobile – rather, they allow it to rock and tilt.

    Its position can be anteverted (tilted forward over the bladder), retroverted (angled back toward the rectum and spine), or somewhere in between. These variations are entirely normal and often vary.

    That position matters. The uterine angle can affect where menstrual pain is experienced. For those with a retroverted uterus, discomfort may radiate into the lower back. For others, cramping is felt more in the lower abdomen.

    A forward-tilted uterus may press more directly on the bladder, increasing the urge to urinate, especially in early pregnancy. Conversely, a backward tilt might impinge on the rectum, contributing to constipation or bloating.

    During sexual arousal, the uterus “tents” – lifting slightly and lengthening the vaginal canal. During labour, it contracts powerfully and rhythmically, drawing the cervix upwards and helping to expel the foetus.

    Even the cervix – the narrow opening at the base of the uterus – is not fixed in place. Its height, texture and openness vary across the menstrual cycle in response to hormonal cues. During ovulation, it rises and softens to allow sperm entry. Before menstruation, it lowers and firms up again.

    The uterine tubes: searching, not wandering

    Perhaps the most surprising anatomical revelation is that a uterine (fallopian) tube on one side of the body can capture an egg released from the opposite ovary. If there’s a true seeker in the reproductive tract, it’s the uterine tube.

    Each month, at ovulation, the fimbriae – finger-like projections at the end of the tube – sweep across the surface of the ovary, coaxing the released egg into the tube’s entrance. The tube isn’t anchored directly to the ovary. Instead, it finds it. Like a sea anemone in slow motion, it explores, flexes and moves.

    Once caught, cilia – tiny hair-like structures that line the inner surface of the tube – work in concert with muscular contractions that move the egg towards the uterus. This choreography is vital but also explains the risk of ectopic pregnancy.

    If a fertilised egg implants in the tube instead of travelling to the uterus, it can pose a serious medical emergency. Ironically, it’s the very adaptability and reach of the tube that makes it vulnerable.

    The ovaries are also slightly mobile, suspended by ligaments that allow for some degree of movement within the pelvic cavity. This becomes especially apparent after hysterectomy when the removal of the uterus can cause the ovaries to “drift”, sometimes complicating imaging or surgical planning.

    While their movement is more limited than that of the uterus or tubes, it still plays a role in pelvic dynamics. In rare cases, it can result in ovarian torsion, a painful twisting of the organ that requires emergency care.

    While mobility is normal, excessive movement or weakened support can cause problems. Uterine prolapse – when the uterus descends into or beyond the vaginal canal – can result from weakened pelvic floor muscles, often after multiple childbirths or due to age-related changes. It’s a mechanical failure, not a moral one. Sadly, though, history hasn’t always treated it that way.

    Similarly, adhesions from endometriosis or previous surgeries can limit natural mobility, causing severe pain as organs that should glide against one another become tethered and inflamed.

    While the uterus does indeed move, it does so within anatomical boundaries and under the influence of ligaments and hormones – not whim. The enduring myth of the wandering womb reflected broader anxieties about the female body: that it was unpredictable, unruly and in need of control. Today, with the benefit of imaging, dissection and anatomical research, we can replace that myth with a deeper understanding of purposeful mobility.

    Michelle Spear does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Wandering uteruses and far-reaching tubes: the surprising mobility of the female reproductive tract – https://theconversation.com/wandering-uteruses-and-far-reaching-tubes-the-surprising-mobility-of-the-female-reproductive-tract-258373

    MIL OSI – Global Reports

  • MIL-OSI Global: Blinding lights: the hidden science behind gambling’s glow

    Source: The Conversation – UK – By Glen Dighton, Research Officer at the Centre for Military Gambling Research (MilGAM), Swansea University

    MMPhoto21/Shutterstuck

    There’s a reason casinos rarely have windows or clocks, they’re engineered to make you lose track of time. But what if it’s not just time you’re losing? New research suggests that the lighting used in gambling environments could be quietly altering how we make decisions, making us more prone to take risks.

    The colour of the lights surrounding us can do more than just set the mood. It can shape our behaviour.

    The new study from researchers at Flinders University in Australia found that blue-enriched lighting (the same cold, bright hue used in many modern LED lights and digital screens) can reduce a gambler’s sensitivity to losses. In a controlled experiment, participants exposed to this kind of light took riskier bets and responded less emotionally to losing.

    The researchers believe this change in decision-making is rooted in our biology. The human body is sensitive to different wavelengths of light, not just for vision but also for regulating our internal clocks and emotional states. Blue light in particular has been shown to suppress melatonin production, a hormone which signals to the body it’s time to prepare for sleep.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Research has also shown blue light can increase alertness and influence brain areas tied to reward and motivation by stimulating the neural circuits involved in anticipation and decision-making. In the case of gambling, this heightened arousal might dampen our natural aversion to loss, even when the odds are stacked against us.

    Light can influence us in many other surprising ways. Studies have shown that cooler, blue-toned lighting can enhance cognitive performance and alertness during the day, which is why it’s often used in offices and classrooms. Warmer lighting is more relaxing and is typically recommended by sleep scientists and health professionals for evenings to promote better sleep.

    Blue light can make you less sensitive to losing.
    Joshua Resnick/Shutterstock

    Retailers, too, have long exploited the psychological effects of lighting, using bright, targeted lighting – often in the form of spotlighting or high-intensity LEDs – to draw attention to products.

    The colour and intensity of lighting can also affect consumers’ perception of value and attractiveness. This encourages spending by increasing visual salience, making a product stand out more and grab your attention, and creating a more engaging sensory experience.

    Specific colours of light seem to have an array of effects in different environments. Red lighting may have effects which increase appetite. This is possibly because it stimulates the sympathetic nervous system, which is associated with arousal and physiological readiness. Meanwhile studies suggest green light may reduce pain and light sensitivity for migraine sufferers.

    But lighting is only one half of the sensory equation in casinos. Sound design plays a major role in immersive gambling environments. Upbeat music can make people less risk-averse by speeding up decision-making and creating a sense of urgency.

    Jingles and celebratory sounds serve as auditory rewards, reinforcing positive feelings even in the absence of a financial win. When players lose, slot machines often produce celebratory sounds and flashing lights, creating what researchers call a “loss disguised as a win”. This sensory mismatch tricks the brain into thinking it’s succeeding, distorting our ability to assess risk or stop playing.

    In gambling environments, red light combined with casino‑style sounds has been shown to eliminate the usual cognitive slowdown after losses during decision-making tasks, leading players to make faster choices without the normal pause for reflection.

    A 2018 study showed that flashing animations and vivid colours can increase arousal and attention, making gambling more stimulating and immersive. This, in turn, delays self-regulation and increases time spent gambling. In effect, your surroundings are constantly nudging you to stay, to play, and to believe the next win is just around the corner.

    As gambling moves increasingly online, these principles are being translated to digital platforms. Online slot games often use flashing animations, vivid colours, and background music that mimic the ambience of a physical casino. The blue light emitted from screens can be just as stimulating – especially late at night – potentially exacerbating the effects seen in the Flinders University study.

    Online and mobile gambling uses these techniques to keep you playing too.
    Marko Aliaksandr/Shutterstock

    If subtle changes to lighting can lead to riskier decisions, then regulating these features might help promote less harmful gambling behaviour. For instance, encouraging warmer lighting in gambling venues or digital settings could help prevent excessive play.

    The lights and sounds that surround us in these environments aren’t just decoration. They’re carefully designed to heighten arousal, dull sensitivity to losses, and encourage riskier decisions.

    Our responses to colour, brightness and sound happen at a subconscious level, meaning even informed players can still be swayed by them. Reducing your device’s screen brightness, using blue light filters at night, or turning off in-game sounds can help counteract some of these psychological effects for online gambling.

    But meaningful change will probably require policy intervention that treats environmental design not as a neutral backdrop, but as a powerful behavioural influence – one that should be shaped with responsibility to the wellbeing of the consumer, not just profit, in mind.

    If you believe your or someone else may benefit from support with gambling behaviour, please access the International Support Contact for your jurisdiction or GamCare for UK specific support.

    In the last three years, Dr Glen Dighton has received funding from Bristol Hub for Gambling Harms Research, and an honorarium from Greo Evidence Insights for grant-proposal review

    ref. Blinding lights: the hidden science behind gambling’s glow – https://theconversation.com/blinding-lights-the-hidden-science-behind-gamblings-glow-258623

    MIL OSI – Global Reports

  • MIL-OSI Global: Tracing the Drax family’s millions – a story of British landed gentry, slavery and sugar plantations

    Source: The Conversation – UK – By Paul Lashmar, Reader in Journalism, City St George’s, University of London

    ‘Planting the sugar-cane’: vast fortunes were made from the trades in both sugar and human slaves in the Americas. Schomburg Center for Research in Black Culture, Photographs and Prints Division, The New York Public Library

    Rich British aristocratic families with a legacy of owning colonial slave plantations are often accused by campaigners that their wealth solely originates from these plantations. One frequent target of this criticism has been the Drax family of Dorset, which is headed by Richard Grosvenor Plunkett-Ernle-Erle-Drax, who was the Conservative MP for South Dorset until July 2024.

    Historian Alan Lester of the University of Sussex has noted of Drax (as he is commonly known): “Much of his fortune is inherited, coming down the family line from ownership of the Drax sugar plantations and the 30,000 enslaved people who worked them as Drax property for 180 years before emancipation in Barbados.”

    Recently, I have researched and written a book on the Drax family’s history and involvement in the slave trade in the Caribbean, Drax of Drax Hall, that gives fresh insights into the level of wealth they derived from the sugar trade and the trade in African slaves who worked their plantations – as well as the family’s other income sources.

    I searched the archives in the UK and Caribbean for evidence of their revenue streams until Britain’s 1834 abolition of slavery in the colonies. I estimate that the family today are worth more than £150 million from their land and property in Dorset and Yorkshire.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Over a period of two centuries until 1834, eight generations of Drax ancestors owned and worked hundreds of enslaved African captives at any one time. The latest beneficiary of primogeniture – the legal concept that recognises the first-born child as heir to a familiy’s fortune – Richard Drax inherited the family’s still-operating 621-acre Drax Hall plantation in Barbados in 2021.

    Drax, 67, has said: “I am keenly aware of the slave trade in the West Indies, and the role my very distant ancestor played in it is deeply, deeply regrettable. But no one can be held responsible today for what happened many hundreds of years ago. This is a part of the nation’s history, from which we must all learn.”

    My research reveals the sources of his family’s wealth are more complex than the critics’ claims that it all derives from the slave-worked plantations.

    Like most British landed gentry, much of the Drax family income has come as extensive landlords of their British estates which, in 1883, exceeded 23,000 acres across various counties. Today, it includes nearly 16,000 acres in Dorset and 2,520 acres in the Yorkshire Dales.

    However, my research also shows the Drax family made more money from slavery than was previously thought, when taking into account the way revenues from their plantations were channelled into the family’s British estates over the two centuries of slavery.

    Drax Hall plantation in Barbados

    The Drax Hall plantation in the Barbados parish of Saint George has been described by Barbadian historian Sir Hilary Beckles, chair of the Caribbean Community reparations commission, as a “killing field” where as many as 30,000 slaves died in brutal conditions. Despite pressure from reparation campaigners in the Caribbean, Britain and elsewhere, Richard Drax has declined to make a formal public apology or gesture of recompense in the Caribbean for the years of slavery.

    A 19th-century drawing of Drax Hall plantation in Barbados.
    Unknown source, Wikimedia Commons

    As the prime minister of Barbados, Mia Mottley, explained in April 2024, despite the efforts of her government Drax has yet to agree to a settlement, pay reparations or contribute all or part of his family’s Drax Hall plantation to provide affordable housing or become a memorial to those who worked and died in colonial enslavement on the island.

    Some other British landed families whose ancestors owned slave plantations in the Caribbean, including the Trevelyans (who owned six slave plantations in Grenada) and the Gladstones (British prime minister William Gladstone’s father owned plantations in Guyana), have made formal apologies and reparations. And while some families have kept the terms of these reparations private, longtime BBC reporter Laura Trevelyan made a US$100,000 (£73,000) donation to a Caribbean development fund.

    The largest family estate

    Four thousand miles from Barbados, Richard Drax lives in Charborough House, a historic 17th-century mansion in Dorset. He oversees the 23.5-square mile estate, the largest family estate in Dorset with over 120 properties, many of which are rented out.

    Charborough was acquired by Drax’s ancestor Walter Erle by marriage in 1549. The family has gradually increased the estate over the centuries. Historically, their income comes from renting land to tenant farmers and cottages to agricultural workers. This, I identified, is where the bulk of their income has come from.

    Charborough House: the Drax family seat in Dorset.
    John Lamper/Wikimedia Commons, CC BY-SA

    However, profits from sugar produced by slavery also poured into the family coffers over 200 years. Richard Drax’s remote ancestor James Drax (1609-1661) was one of the first settler group to arrive in the then-uninhabited island of Barbados in 1627. In his introduction to my book, TV historian David Olusoga writes that the Drax family were key players – arguably the key players – in the origin story of British slavery:

    The Drax Hall plantation, the first estate on which a crop of sugar was commercially grown and processed by any English planter, became one of the laboratories in which early English slavery was developed and finessed.

    Built around 1650, the Jacobean plantation house is thought to be the one of the three oldest extant residential buildings in the Americas. From the 17th into the 18th century, the Draxes created and owned the largest acreage in Barbados with the Drax Hall and and Mount plantations – plus a 3,000-acre estate, also called Drax Hall, in Jamaica. The family became enormously wealthy: James Drax was said by a visitor to Drax Hall in the 1640s to “live like a prince”, putting on lavish dinners for friends and guests.

    In addition to owning slaves, James Drax shipped African captives to Barbados as a key part of the trade in slaves. Knighted by both Oliver Cromwell and Charles I, by 1660 he was a director and investor in the English East India Company which, in part, traded and exploited enslaved people.

    Paul Lashmar’s book, Drax of Drax Hall.
    Bookshop.com

    In her 1930 study, American historian Elizabeth Donnan presented evidence that the Draxes of the 17th century operated “off the books” – buying enslaved people from, and selling them to, “interloper” ships that circumvented the Royal African Company’s monopoly of slave trading to the colonies.

    The Drax family married into the Erle family in 1719, combining three fortunes: that of the Erles of Charborough, the Draxes of Yorkshire, Barbados and Jamaica, and the landed-gentry Ernles of Wiltshire.

    Despite being deeply involved in the South Sea Bubble scandal, the Drax family flourished. The slave registers in the National Archives show that between 1825 and 1834, the Drax Hall plantation in Barbados produced an average of 163 tonnes of sugar and 4,845 gallons of rum per year. This gave the family an average annual net profit of £3,591 – equivalent to about £600,000 now. Today, the plantation still produces 700 tonnes of sugar a year, earning the family something in the region of £250,000.

    Pressure for reparations

    In recent years, the value of Drax Hall’s land in Barbados has greatly increased as it is sought after for housing, and could now be worth as much as Bds$150,000 (£60,000) per acre. At the same time, pressure for reparations is growing. In 2023, the African Union threw its weight behind the Caribbean reparations campaign.

    David Comissiong, deputy chairman of the Barbados reparations task force, has said: “Other families are involved, though not as prominently as the Draxes. This reparations journey has begun.”

    Yet to date, the only reparations paid in the story of the Drax family’s involvement in the slave trade were to the family itself. In 1837, Jane-Frances Erle-Drax, the heiress of Charborough, received £4,293 12s 6d (worth more than £614,000 today) in reparations for freeing 189 slaves from Drax Hall plantation after the abolition of slavery in the colonies.

    In the course of researching and writing my book, I approached Richard Drax both directly and through his lawyers and put the claims made here to him. He had no comment to add.

    This page contains references to books included for editorial reasons, which may include links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org, The Conversation UK may earn a commission.

    Paul Lashmar is affiliated with the Labour Party.

    ref. Tracing the Drax family’s millions – a story of British landed gentry, slavery and sugar plantations – https://theconversation.com/tracing-the-drax-familys-millions-a-story-of-british-landed-gentry-slavery-and-sugar-plantations-257376

    MIL OSI – Global Reports

  • MIL-OSI Global: Why your doctor may not have given you the best advice for your lower back pain

    Source: The Conversation – UK – By Martin Underwood, Chair Professor, Primary Care Research, University of Warwick

    Focus and Blur/Shutterstock.com

    Treating lower back pain is enormously expensive. In the UK it’s estimated to cost the NHS around £3.2 billion a year. So, ensuring patients get the right treatment is critical.

    However, the guidance issued by the UK’s National Institute for Health and Care Excellence (Nice) on how to treat lower back pain was last updated in 2020, meaning many patients may be getting out-of-date advice from their healthcare practitioner.

    Fortunately, most people with lower back pain recover quickly without treatment. But a minority don’t, and they can go on to develop long-term disability.

    People with lower back pain usually see their GP first. The GP may refer the patient to a physiotherapist, or, in some parts of the UK, patients can refer themselves to one.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    However, Nice recommends using a short questionnaire to identify those least likely to recover, so they can be offered more intensive treatment. Those most likely to recover get an initial assessment and advice only.

    This approach was supported by a UK study which found a small benefit compared to offering everyone standard physiotherapy care. But later studies have not confirmed that result. It may not matter if care is targeted at those at highest risk or not.

    Nice also recommends self-management. This means giving patients information and leaving them to handle their own recovery. But recent research found that an online support programme was no better than usual care from their GP.

    For people with at least three months of lower back pain, Nice recommends “radio frequency denervation” as an option. This is a procedure where a probe is inserted into the back next to the nerve carrying pain signals from the back. Heating the probe can disable the nerves that carry pain signals. The problem is that some studies suggest it may help while others show no benefit.

    A more robust study is underway that will hopefully provide us with a more definitive answer. But, for now, we think this treatment should be approached with caution.

    Most Nice recommendations for the use of medications align with the current evidence. Nice recommends against the use of opioids for people with short-term back pain. However, the guidance suggests that weak opioids, such as codeine, can be considered if anti-inflammatory drugs are ineffective or “contraindicated” (should be avoided), for example, for people with previous stomach bleeding.

    This ambiguous approach is confusing and may result in people being given the wrong care. Also, a study published in 2023 showed that a stronger opioid does not help people with short-term back pain. Nice could adopt a clearer stance, explicitly discouraging opioid use for lower back pain.

    The guidance could focus on treatments where there’s strong evidence of benefit. One option is non-steroidal anti-inflammatory drugs, such as ibuprofen, which can be effective for treating people with acute and persistent symptoms. If this medication fails, heat therapy, such as hot packs and heat wraps, can be used for short-term lower back pain.

    Nice suggests that codeine can be used if the patient is unable to take anti-inflammatory medication, such as ibuprofen.
    Matthew Nichols1/Shutterstock.com

    Treating peristant lower back pain

    Exercise programmes can help people with persistent back pain. A recent study found that regular walking can help prevent lower back pain flare-ups.

    Approaches, such as cognitive functional therapy, where physiotherapists address both physical and psychological barriers to recovery, also show great promise. A recent study found that it offers lasting benefits when compared to a sham (placebo) intervention.

    Mindfulness, a type of meditation, also seems a promising approach for persistent pain. A new study, published in The Lancet Rheumatology showed that it can have meaningful and lasting benefits for these patients.

    Guidance from the World Health Organization recommends other treatments, such as manual therapy (spinal manipulation, for instance) and acupuncture, that could help people with persistent symptoms.

    It is clear that the Nice guidelines don’t always reflect what we now know works, and sometimes steer care in the wrong direction.

    Martin Underwood is chief investigator or co-investigator on multiple previous and current research grants from the UK National Institute for Health Research, and is a co-investigator on grants funded by the Australian NHMRC and Norwegian MRC. He is a director and shareholder of Clinvivo Ltd that provides electronic data collection for health services research. He has accepted honoraria for examining theses, and performing peer review. He receives some salary support from University Hospitals Coventry and Warwickshire. He is a co-investigator on two current and one completed NIHR funded studies that have, or have had, additional support from Stryker Ltd. He has accepted travel expenses and accommodation for speaking at academic meetings.

    Gustavo Machado has an investigator grant from the National Health and Medical Research Council. He also holds research grants from the National Health and Medical Research Council, Medical Research Future Fund, and HCF Research Foundation.

    Crystian Bitencourt Soares de Oliveira does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why your doctor may not have given you the best advice for your lower back pain – https://theconversation.com/why-your-doctor-may-not-have-given-you-the-best-advice-for-your-lower-back-pain-256040

    MIL OSI – Global Reports

  • MIL-OSI Global: Police in England and Wales to get more money – but increasing funding won’t necessarily mean less crime

    Source: The Conversation – UK – By Graham Farrell, Professor of Crime Science, University of Leeds

    Ian Dewar Photography/Shutterstock

    Police spending will rise by a real-terms 2.3% per year between now and 2028-29, the government announced in its latest spending review, drawn from local council tax. The government says this will help its mission to put 13,000 neighbourhood police on the streets, and “keep communities safe”.

    Police say this is far from enough to meet the government’s ambitions, particularly on cutting knife crime and violence against women, and that it is likely to be “swallowed up” by pay rises for police.

    The awkward truth, however, is that marginal changes to police funding and hiring make little difference to crime either way. Austerity cuts of around 20% to policing budgets in the 2010s were accompanied by declining crime, including domestic violence and antisocial behaviour.

    Widespread security improvements were responsible for the close to 90% reductions in many crime types. For example, engine immobilisers prevent car theft, and secure household doors and windows prevent burglary.

    Crime has been declining across developed countries for decades. But those countries vary greatly in policing practices and funding, so it is clear more policing was not the cause.

    American policing researcher pioneer David Bayley wrote in 1994:

    The police do not prevent crime. This is one of the best kept secrets of modern life. Experts know it, the police know it, but the public does not know it. Yet the police pretend they are society’s best defense against crime and continually argue that if they are given more resources, especially personnel, they will be able to protect communities against crime. This is a myth.

    This does not mean we don’t need police – we do. If there were no police, crime rates would soar. The issue here is diminishing marginal returns (we’re at the level where more funding doesn’t have the same effect).

    But it means the spending review debate had little to do with crime prevention. Rather, it was about how senior staff in public services routinely seek more for their departments. And following the spending review, police chiefs gave themselves an escape clause by claiming the increase is insufficient.


    Want more politics coverage from academic experts? Every week, we bring you informed analysis of developments in government and fact check the claims being made.

    Sign up for our weekly politics newsletter, delivered every Friday.


    In recent years, we’ve learned problem-solving policing can reduce some crimes in some contexts. For example, burglary at construction sites can often be theft of building materials and tools, so the crime problem can be reduced through improved site management (rather than just more arrests).

    However, problem-solving is not easy and so is not widely applied. Simply patrolling hotspots does not affect the crime opportunity structure (factors that tempt, facilitate or precipitate a particular cluster of crimes).

    Additionally, all types of crime, except homicide, are more likely to recur, and relatively soon, after prior victimisation. And while policing to prevent repeat victimisation can reduce crime, it has fallen by the wayside in recent years.

    A recent review by crime scientist Shannon Linning and colleagues examined the effect of more police hiring and more arrests on crime, concluding: “When a sensational crime happens, residents demand action. Often someone will cry for more police and more arrests … neither approach is likely to be helpful.”

    This makes it rather awkward that the government has recently committed to recruiting 13,000 additional neighbourhood police.

    Since most people don’t know the limitations of policing, both the government and the police have been able to maintain the illusion that more police means less crime. Academic police researchers will rarely admit it in case it risks their funding, and the media enjoy a perennially newsworthy topic. Taxpayers foot the bill as well as the emotional, financial and other costs of crime.

    How to stop crime

    There is, however, some room for optimism. What we have learned from the long-term international crime drop and dozens of small-scale successes against different crime types is that reducing crime opportunities is the best approach. With some strategic adjustment, there is much that police and government can do.

    A particular focus for the government and police should be encouraging businesses to take more responsibility for crime. Knife manufacturers and retailers should be involved in introducing a ban on pointed kitchen knives, the most common homicide weapon in England and Wales. The gradual approach over many years that research (in which I was involved) recommended is too long: it should be done within this government’s term.

    A lot of other crimes, including computer-enabled crimes, are generated, facilitated or hosted by businesses. Internet service providers and network providers benefit from advertising and payments, including when they are being used for crime (from stalking and sexual victimisation to fraud and terrorism).

    Manufacturers benefit from theft of phones and other products that need replacing. Online marketplaces profit from usage and advertising when stolen goods are sold, which inadvertently encourages shoplifting, theft and robbery. Online banking and financial services also host significant amounts of fraud, and are now sometimes required to pay up to £85,000 compensation to victims.




    Read more:
    Child sexual exploitation and abuse is a multibillion-dollar industry – new report shows who benefits


    Government and police should develop a portfolio of incentives and disincentives to promote private sector crime prevention, to include regulation and market-based incentives. When businesses have an economic incentive they are tremendously efficient at preventing crime, as car manufacturers showed by improving security that brought 90% reductions in car crime.

    Reducing crime opportunities is also the best way to stop criminality. When young people do not get involved in easy crimes like shoplifting, they do not progress to further crime, including violence against women and girls.

    In short, extra police funding will not reduce crime. A shift in strategy is what is really needed.

    Graham Farrell receives funding from the Economic and Social Research Council.

    ref. Police in England and Wales to get more money – but increasing funding won’t necessarily mean less crime – https://theconversation.com/police-in-england-and-wales-to-get-more-money-but-increasing-funding-wont-necessarily-mean-less-crime-258977

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump breaks from western allies at G7 summit as US weighs joining Iran strikes

    Source: The Conversation – UK – By Natasha Lindstaedt, Professor in the Department of Government, University of Essex

    Working alongside western democratic allies has not been a natural fit for Donald Trump. The US president left the recently concluded G7 summit in Canada early, with his French counterpart Emmanuel Macron assuming this was to work on addressing the most severe escalation between Iran and Israel in decades.

    But Trump offered little communication with other G7 members, which include Canada, France, Germany, Italy, Japan and the UK, of what his plans were. He said he had to leave the summit “for obvious reasons”, though failed to elaborate on what he meant.

    After exiting the summit, he lambasted Macron on social media. Trump wrote: “Wrong! He has no idea why I am now on my way to Washington, but it certainly has nothing to do with a Cease Fire”. Trump continued by saying his exit was due to something “much bigger than that”, adding: “Emmanuel always gets it wrong.”

    This has prompted discussion over whether US forces may join Israel’s strikes on Iran. Despite initially distancing the US from the Israeli attacks, Trump said on June 17: “We now have complete and total control of the skies over Iran.”


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    He has since demanded Tehran’s “unconditional surrender”, while also issuing a chilling threat to Iran’s supreme leader, Ayatollah Ali Khamenei, describing him as an “easy target”.

    The pressure campaign employed by Israel’s prime minister, Benjamin Netanyahu, to convince Trump that the time is right for a military assault on Iran seems to be working.

    Exploiting Trump’s impulsive nature, Netanyahu may soon be able to convince Trump to give Israel what it needs to destroy Iran’s underground uranium enrichment sites: a 30,000-pound “bunker buster” bomb and a B-2 bomber to carry it.

    The US’s western allies have been left scrambling to interpret Trump’s social media posts and figure out the real reason he left the G7 summit early.

    The only aircraft capable of carrying ‘bunker-buster’ bombs is the B-2.
    Mariusz Lopusiewicz / Shutterstock

    This wasn’t the first time that Trump has left a G7 forum early. In 2018, the last time such a meeting was held in Canada, Trump also left early after Macron and the then Canadian prime minister, Justin Trudeau, promised to confront Trump over the imposition of tariffs on US allies.

    The latest G7 summit also wasn’t the first time Trump has treated traditional US allies with suspicion. Trump has cast doubt on US willingness to defend Nato allies if they don’t pay more for their own defence. He has repeatedly threatened to leave the alliance and has frequently denigrated it – even calling alliance members “delinquent”.

    Trump thinks the US gains an advantage by abandoning relationships with “free riders”. But experts have made clear alienating allies makes the US weaker. While the alliance system has given the US unprecedented influence over the foreign policies of US allies in the past, Trump’s pressure to increase their defence spending will make them more independent from the US in the long-term.

    Trump seems to prefer a world guided by short-term self-interest at the expense of long-term collective security. Indeed, with an “America first” agenda, multilateral cooperation is not Trump’s strong suit. With the G7, Trump is yet again making clear that he does not fit in, nor does he want to.

    Because the G7 is small and relatively homogenous in membership, meetings between members are supposed to promote collective and decisive decision-making. However, even the task of coming up with a joint statement on the escalating conflict between Iran and Israel proved challenging.

    Trump eventually joined other leaders in calling for deescalation in the Middle East, and the G7 was in agreement that Iran cannot acquire nuclear weapons. But Trump’s social media activity since then has left US allies in the dark over what role the US might play in the conflict.

    Trump also alarmed G7 members with calls for Russia to return to the forum. He claimed that the war in Ukraine would not have happened had Moscow not been ejected from the former G8 grouping in 2014.

    Then, on his way out of the summit, Trump bragged to reporters that Russia’s leader, Vladimir Putin, “doesn’t speak to anybody else” but him. Trump added that Putin was insulted when Russia was thrown out of the G8, “as I would be, as you would be, as anybody would be”.

    Following weeks of frustration over Russia’s refusal to engage in serious peace talks about ending the war in Ukraine, Trump seems to have returned to being Putin’s most loyal advocate.

    Hostility toward multilateralism

    During Trump’s first term, he pushed multilateralism to the brink. But he did not completely disengage. The US withdrew from the Paris climate accords, the nuclear deal with Iran, negotiations for a trade deal with Pacific nations, and imposed sanctions against officials of the International Criminal Court.

    However, when multilateral initiatives served Trump’s short-term objectives, he was willing to get on board. A trade deal struck with Canada and Mexico that Trump described as “the most important” ever agreed by the US. He said the deal would bring thousands of jobs back to North America.

    The second Trump administration has been even more hostile to multilateralism. Not only has the trade deal with Canada and Mexico been undermined by Trump’s love of tariffs, his administration has been more antagonistic toward almost all of the US’s traditional allies. In fact, most of Trump’s ire is reserved for democracies not autocracies.

    In contrast to the G7, where he clearly felt out of place, Trump was in his element during his May trip to the Middle East. Trump has a more natural connection to the leaders of the Gulf who do not have to adhere to democratic norms and human rights, and where deals can get done immediately.

    Trump left the Middle East revelling in all of the billion dollar deals he made, which he exaggerated were worth US$2 trillion (£1.5 trillion). The G7, on the other hand, doesn’t offer much to Trump. He sees it as more of a nuisance.

    The G7 forum is supposed to reassure the public that the most powerful countries in the world are united in their commitment to stability. But Trump’s antics are undermining the credibility of that message. It is these antics that risk dragging the west into a dangerous confrontation with Iran.

    Natasha Lindstaedt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Trump breaks from western allies at G7 summit as US weighs joining Iran strikes – https://theconversation.com/trump-breaks-from-western-allies-at-g7-summit-as-us-weighs-joining-iran-strikes-259214

    MIL OSI – Global Reports

  • MIL-OSI Global: What happens when aid is cut to a large refugee camp? Kenyan study paints a bleak picture

    Source: The Conversation – Africa – By Olivier Sterck, Associate professor, University of Oxford

    Humanitarian needs are rising around the world. At the same time, major donors such as the US and the UK are pulling back support, placing increasing strain on already overstretched aid systems.

    Global humanitarian needs have quadrupled since 2015, driven by new conflicts in Sudan, Ukraine and Gaza. Added to these are protracted crises in Yemen, Somalia, South Sudan, and DR Congo, among others. Yet donor funding has failed to keep pace, covering less than half of the requested US$50 billion in 2024, leaving millions without assistance.

    Notably, the US recently slashed billions of US dollars from global relief efforts. The slashed contributions once made up to half of all public humanitarian funding and over a fifth of the UN’s budget. Other donors have been cutting aid as well.

    As funding shortfalls widen, humanitarian agencies increasingly face tough choices: reducing the scale of operations, pausing essential services, or cancelling programmes altogether. Disruptions to aid delivery have become a routine feature of humanitarian operations.

    Yet few rigorous studies have provided hard evidence of the consequences for affected populations.

    A recent study from one of the world’s largest refugee camps in Kenya fills this gap.

    Our research team from the University of Oxford and the University of Antwerp was already studying Kakuma camp and then had an opportunity to see what happened when aid was cut. We observed the impact of a 20% aid cut that occurred in 2023.

    The study reveals that cuts to humanitarian assistance had dramatic impacts on hunger and psychological distress, with cascading effects on local credit systems and prices of goods.

    Kakuma refugee camp

    Kakuma is home to more than 300,000 refugees, who mostly came from South Sudan (49%), Somalia (16%), and the Democratic Republic of Congo (DRC) (10%). They have been housed here since 1992. With widespread poverty, lack of income opportunities, and aid making up over 90% of household income, survival in the camp hinges on humanitarian support from UN organisations.

    When the research began in late 2022, most refugees in Kakuma received a combination of in-kind and cash transfers from the World Food Programme. Transfers were worth US$17 per person per month, barely enough to cover the bare essentials: food, firewood and medicine.

    Over the span of a year, the research team tracked 622 South Sudanese refugee households, interviewing them monthly to monitor how their living conditions evolved in response to the timing and level of aid they received. We also gathered weekly price data on 70 essential goods and conducted more than 250 in-depth interviews with refugees, shopkeepers, and humanitarian staff to understand the broader impacts.

    Then came the cut. In July 2023, assistance was reduced by 20%, just as the research team was conducting its eighth round of data collection. This sudden reduction in humanitarian aid created a rare opportunity to assess the effects of an aid cut on both recipients and the markets they depend on.

    Consequences of aid cut

    The 20% cut in humanitarian aid had cascading effects, affecting not just hunger, but local credit systems, prices, and well-being.

    1. Hunger got worse. As a Somali refugee interviewed by the researchers put it: “After the aid reduction, the lives of refugees become hard. That was the money sustaining them. […] Things are insufficient, and hunger is visible.”

    Food insecurity was already widespread before the cut, with more than 90% of refugees classified as food insecure. Average caloric intake stood below 1,900 kcal per person per day – well under the World Food Programme’s 2,100 kcal target and about half the average daily calorie supply available to a US citizen.

    Food insecurity further increased following the aid cut, with caloric intake falling by 145 kcal, a 7% decrease. The share of households eating one meal or less increased by 8 percentage points, from about 29% to 37%. At the same time, dietary diversity narrowed, indicating that households tried to mitigate the negative impacts of the aid cut by reducing the variety of foods they consumed.

    2. Credit collapsed. As a refugee shopkeeper of Ethiopian origin reported: “When we give out credit we have a limit; since the aid is reduced, the credit is also reduced.”

    Cash assistance in Kakuma is delivered through aid cards, which refugees routinely use as collateral to access food on credit. When transfers are delayed or unexpected expenses arise, refugees hand over their aid cards as a guarantee to trusted shopkeepers, allowing them to borrow food against next month’s aid.

    But when assistance was cut, the value of this informal collateral plummeted. Retailers, fearing default, reduced lending or refused lending altogether. Informal credit from shopkeepers shrank by 9%. Many refugees reported being refused food on credit or having to repay past debt before receiving any new goods.

    3. Households liquidated assets. With no access to credit, households began selling off possessions and drawing down food reserves. The average value of household assets fell by over 6% after the aid cut.

    4. Psychological distress increased. The aid cut reduced self-reported sleep quality and happiness, indicating that reductions in aid go beyond physical impacts and also have psychological effects.

    5. Prices fell. With reduced expenditure and purchasing power, the demand for food dropped, and food prices went down, partially offsetting the negative effects of the aid cut.

    Implications

    The study carries two major policy implications.

    First, aid in contexts like Kakuma should not be treated as optional or discretionary, but as a structural necessity. It is the backbone of daily life. Mechanisms are needed to protect it from abrupt donor withdrawals.

    Second, informal credit is not peripheral, it is central to economic life in refugee settings. In many camps, shopkeepers act as retailers and de facto financial institutions. When aid transfers serve as both income and collateral, cutting them risks collapsing this fragile credit system. Cash transfer programmes must therefore be designed with these dynamics in mind.

    Olivier Sterck receives research funding from the IKEA Foundation, the World Bank, and The Research Foundation – Flanders (FWO).

    Vittorio Bruni is affiliated with Oxford University

    ref. What happens when aid is cut to a large refugee camp? Kenyan study paints a bleak picture – https://theconversation.com/what-happens-when-aid-is-cut-to-a-large-refugee-camp-kenyan-study-paints-a-bleak-picture-259055

    MIL OSI – Global Reports

  • MIL-OSI Global: Nigeria’s economy is growing but rural poverty is rising: 5 key policies to address the divide

    Source: The Conversation – Africa – By Stephen Onyeiwu, Professor of Economics & Business, Allegheny College

    The Nigerian economy grew at a robust rate of 3.4% in 2024, the highest it has been since 2019 (except 2021 when the COVID rebound occurred).

    This should have been cheering news, worthy of firecrackers and champagne-popping. Rather it came with a catch: the country’s poverty profile worsened.

    In its annual review of the country, the World Bank applauded Nigeria for its economic reforms. These include the removal of fuel subsidies, liberalisation of the foreign exchange market and maintenance of a contractionary monetary policy. This is a policy of raising interest rates, reducing money supply and increasing borrowing costs to rein in inflation.

    But the bank also drew attention to the fact that the country’s poverty profile has become grim. About 31% of Nigerians lived in poverty prior to the COVID-19 epidemic. Since then, an additional 42 million have become poor, increasing the poverty rate to about 46% in 2024.

    Poverty is even worse in Nigeria’s rural communities: 75.5% live on US$2.15 or less per day (based on 2017 prices). The average poverty rate for sub-Saharan African countries was 36.5% in 2024 and 0.8% for East Asia and the Pacific.

    Nigeria’s poverty rate would have been higher if the multidimensional poverty index had been used. In addition to income, the index considers access to education, health, decent housing, nutrition, sanitation, electricity and water. Access to these critical services has worsened for many Nigerians, despite improvements in macroeconomic stability.




    Read more:
    Poor rural infrastructure holds back food production by small Nigerian farmers


    A challenge for policy makers is how to translate impressive macroeconomic outcomes into high-paying jobs, lower poverty rates and access to health, good sanitation, education, electricity and affordable housing. The question is even more acute for people in rural areas.

    As an economist who has studied the Nigerian economy for over four decades and lived in a rural community, I believe Nigeria needs a radical shift in its economic policy approach.

    One major step should be a change in the country’s growth drivers. Oil, information and communications technology and finance are the major drivers of growth in Nigeria.

    These sectors are not employment-intensive, and they require skills that most Nigerians don’t have. Because of the lack of employment opportunities in these sectors, most Nigerians gravitate towards the informal sector, which accounts for about 90% of employment in the country.

    By continuing to urge Nigerians to be patient for economic reforms to have a positive impact on their living conditions, the Tinubu administration appears to assume that improvements in macroeconomic performance will eventually manifest in lower unemployment and poverty rates. This notion of “trickle-down economics” is misconceived and illusory.

    The government needs to intentionally create transmission mechanisms through which economic growth and macroeconomic stability can raise living standards.

    Fostering growth with development

    Concerted efforts will be needed to target poverty in general, and rural poverty in particular.

    Five key policies could get Nigeria closer to this goal:

    Building productive capacities: People who live in rural areas in Nigeria are eager to work and full of creative ideas and entrepreneurial spirit. But they lack the resources and opportunity to fully unleash their potential.

    Building their productive capacities would entail giving them access to basic education, technical and managerial skills, and other productive resources such as tools, equipment, finance and land. The government should identify the comparative advantage of different rural communities, and put in place policies that encourage those communities to use their comparative advantage and distinctive competencies.

    Opportunity to diversify incomes: In developed countries, many people hold multiple jobs. Most rural dwellers in Nigeria, however, rely on agriculture as their only source of livelihood.

    Because of limited access to inputs and modern technology, and outdated agricultural practices, their productivity is often very low. Their low income makes it difficult to save and invest in education, health and housing.

    Non-agricultural activities, especially manufacturing, need to be located in rural communities, to give rural dwellers the opportunity to diversify their income sources.

    Agriculture-led industrial strategy: This would involve the location of manufacturing plants close to the sources of agricultural raw materials.

    Nigerian manufacturers locate their factories in urban areas. The result of urban-biased development strategy in Nigeria has been the lack of employment opportunities in rural communities, and a decline in the rural population, from about 85% in 1960 to 46% in 2023.

    Moving manufacturing to rural areas would require massive investment in infrastructure such as electricity, water, roads and health services.




    Read more:
    Nigeria’s new blue economy ministry could harness marine resources – moving the focus away from oil


    Ending patriarchy and male domination: Women disproportionately bear the burden of rural poverty in Nigeria. A study in rural south-east Nigeria found that the poverty rate among women was 98%, compared to 85% for men. Men are often given preference regarding access to land, education, skills acquisition and financial inclusion.

    Women are also imbued with the responsibility of caring for children, the elderly and the sick, as well as household chores. This leaves them with little time for paid work or opportunities to acquire marketable skills.

    Ability to absorb shocks and vulnerability: Rural poverty is often exacerbated by shocks and vulnerability such as extreme weather conditions, attacks by insurgents and other criminal groups, and illness. With no safety nets, and little or no saving, most rural dwellers are unable to withstand shocks.

    The Tinubu administration plans to disburse N25,000 (about US$17) each to 60 million Nigerians. But these kinds of support are too small, non-pervasive, irregular and unpredictable.




    Read more:
    Nigeria needs to close the financial inclusion gap for women smallholder farmers


    What India and China have to teach

    Nigeria could do well to borrow from the Indian model of an institutionalised safety net.

    India issues “ration cards” to eligible households. The cards enable poor people to purchase essential food items such as grains, milk, eggs, cooking oil and bread at subsidised prices from designated stores.

    Nigeria could finance this kind of programme with a special tax on oil companies and financial institutions, which frequently post huge after-tax profits.

    China has had an impressive record of poverty reduction. Using the US$1.90 poverty line, China’s poverty rate decreased from 88.1% in 1981 to 0.3% in 2018.

    The fall in rural poverty is even more dramatic, from 96% in 1980 to 1% in 2019.

    This reduction was accomplished in stages, starting with an increase in agricultural productivity. It then shifted focus to the development of non-agricultural sectors of the economy, including manufacturing. These sectors were able to draw surplus labour from the agricultural sector, giving them skills that led to higher wages and poverty alleviation.




    Read more:
    Poor rural infrastructure holds back food production by small Nigerian farmers


    Next steps

    The World Bank in its report noted that addressing pressing social and humanitarian challenges remains critical to ensuring inclusive and sustainable growth in Nigeria.

    Cash transfers and social assistance programmes could provide temporary relief for the poor in rural communities. But a long-term solution is to build their productive capacities and transform rural communities in ways that provide opportunities for income diversification.

    Stephen Onyeiwu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Nigeria’s economy is growing but rural poverty is rising: 5 key policies to address the divide – https://theconversation.com/nigerias-economy-is-growing-but-rural-poverty-is-rising-5-key-policies-to-address-the-divide-257152

    MIL OSI – Global Reports

  • MIL-OSI Global: 50 years after ‘Jaws,’ researchers have retired the man-eater myth and revealed more about sharks’ amazing biology

    Source: The Conversation – USA – By Gareth J. Fraser, Associate Professor of Evolutionary Developmental Biology, University of Florida

    The shark in ‘Jaws’ became a terrifying icon. Universal Pictures via Getty Images

    The summer of 1975 was the summer of “Jaws.”

    The movie was adapted from a novel by Peter Benchley.
    Universal History Archive/Universal Images Group via Getty Images

    The first blockbuster movie sent waves of panic and awe through audiences. “Jaws” – the tale of a killer great white shark that terrorizes a coastal tourist town – captured people’s imaginations and simultaneously created a widespread fear of the water.

    To call Steven Spielberg’s masterpiece a creature feature is trite. Because the shark isn’t shown for most of the movie – mechanical difficulties meant production didn’t have one ready to use until later in the filming process – suspense and fear build. The movie unlocked in viewers an innate fear of the unknown, encouraging the idea that monsters lurk beneath the ocean’s surface, even in the shallows.

    And because in 1975 marine scientists knew far less than we do now about sharks and their world, it was easy for the myth of the rogue shark as a murderous eating machine to take hold, along with the assumption that all sharks must be bloodthirsty, mindless killers.

    People lined up to get scared by the murderous shark at the center of the ‘Jaws’ movie.
    Bettmann Archive via Getty Images

    But in addition to scaring many moviegoers that “it’s not safe to go in the water,” “Jaws” has over the years inspired generations of researchers, including me. The scientific curiosity sparked by this horror fish flick has helped reveal so much more about what lies beneath the waves than was known 50 years ago. My own research focuses on the secret lives of sharks, their evolution and development, and how people can benefit from the study of these enigmatic animals.

    The business end of sharks: Their jaws and teeth

    My own work has focused on perhaps the most terrifying aspect of these apex predators, the jaws and teeth. I study the development of shark teeth in embryos.

    Small-spotted catshark embryo (Scyliorhinus canicula), still attached to the yolk sac. This is the stage when the teeth begin developing.
    Ella Nicklin, Fraser Lab, University of Florida

    Sharks continue to make an unlimited supply of tooth replacements throughout life – it’s how they keep their bite constantly sharp.

    Hard-shelled prey, such as mollusks and crustaceans, from sandy substrates can be more abrasive for teeth, requiring quicker replacement. Depending on the water temperature, the conveyor belt-like renewal of an entire row of teeth can take between nine and 70 days, for example, in nurse sharks, or much longer in larger sharks. In the great white, a full-row replacement can take an estimated 250 days. That’s still an advantage over humans – we never regrow damaged or worn-out adult teeth.

    Magnified microscope image of a zebra shark (Stegostoma tigrinum) jaw. They have 20 to 30 rows of teeth in each jaw, each a new generation ready to move into position like on a conveyor belt. Humans have only two sets!
    Gareth Fraser, University of Florida

    Interestingly, shark teeth are much like our own, developing from equivalent cells, patterned by the same genes, creating the same hard tissues, enamel and dentin. Sharks could potentially teach researchers how to master the process of tooth renewal. It would be huge for dentistry if scientists could use sharks to figure out how to engineer a new generation of teeth for human patients.

    Extraordinary fish with extraordinary biology

    As a group, sharks and their cartilaginous fish relatives – including skates, rays and chimaeras – are evolutionary relics that have inhabited the Earth’s oceans for over 400 million years. They’ve been around since long before human beings and most of the other animals on our planet today hit the scene, even before dinosaurs emerged.

    Sharks have a vast array of super powers that scientists have only recently discovered.

    Their electroreceptive pores, located around the head and jaws, have amazing sensory capabilities, allowing sharks to detect weak electrical fields emitted from hidden prey.

    CT scan of the head of a small-spotted catshark (Scyliorhinus canicula) as it hatches. Skin denticles cover the surface, and colored rows of teeth are present on the jaws.
    Ella Nicklin, Fraser Lab, University of Florida

    Their skin is protected with an armor of tiny teeth, called dermal denticles, composed of sensitive dentin, that also allows for better drag-reducing hydrodynamics. Biologists and engineers are also using this “shark skin technology” to design hydrodynamic and aerodynamic solutions for future fuel-efficient vehicles.

    Fluorescent skin of the chain catshark (Scyliorhinus retifer).
    Gareth Fraser, University of Florida

    Some sharks are biofluorescent, meaning they emit light in different wavelengths after absorbing natural blue light. This emitted fluorescent color pattern suggests visual communication and recognition among members of the same species is possible in the dark depths.

    Sharks can migrate across huge global distances. For example, a silky shark was recorded traveling 17,000 miles (over 27,000 kilometers) over a year and a half. Hammerhead sharks can even home in on the Earth’s magnetic field to help them navigate.

    Greenland sharks exhibit a lengthy aging process and live for hundreds of years. Scientists estimated that one individual was 392 years old, give or take 120 years.

    Still much about sharks remains mysterious. We know little about their breeding habits and locations of their nursery grounds. Conservation efforts are beginning to target the identification of shark nurseries as a way to manage and protect fragile populations.

    Tagging programs and their “follow the shark” apps allow researchers to learn more about these animals’ lives and where they roam – highlighting the benefit of international collaboration and public engagement for conserving threatened shark populations.

    Sharks under attack

    Sharks are an incredible evolutionary success story. But they’re also vulnerable in the modern age of human-ocean interactions.

    Sharks are an afterthought for the commercial fishing industry, but overfishing of other species can cause dramatic crashes in shark populations. Their late age of sexual maturity – as old as 15 to 20 years or more in larger species or potentially 150 years in Greenland sharks – along with slow growth, long gestation periods and complex social structures make shark populations fragile and less capable of quick recoveries.

    Take the white shark (Carcharodon carcharias), for example – Jaws’ own species. Trophy hunting, trade in their body parts and commercial fishery impacts caused their numbers to dwindle. As a result, they received essential protections at the international level. In turn, their numbers have rebounded, especially around the United States, leading to a shift from critically endangered to vulnerable status worldwide. However, they remain critically endangered in Europe and the Mediterranean.

    Protections and conservation measures have helped white sharks make a comeback.
    Dave Fleetham/Design Pics Editorial/Universal Images Group via Getty Images

    “Jaws” was filmed on the island of Martha’s Vineyard, in Massachusetts. After careful management and the designation of white sharks as a prohibited species in federal waters in 1997 and in Massachusetts in 2005, their populations have recovered well over recent years in response to more seals in the area and recovering fish stocks.

    You might assume more sharks would mean more attacks, but that is not what we observe. Shark attacks have always been few and far between in Massachusetts and elsewhere, and they remain rare. It’s only a “Jaws”-perpetuated myth that sharks have a taste for humans. Sure, they might mistake a person for prey; for instance, surfers and swimmers can mimic the appearance of seals at the surface. Sharks in murky water might opportunistically take a test bite of what seem to be prey.

    But these attacks are rare enough that people can shed their “Jaws”-driven irrational fears of sharks. Almost all sharks are timid, and the likelihood of an interaction – let alone a negative one – is incredibly rare. Importantly, there more than 500 species of sharks in the world’s oceans, each one a unique member of a particular ecosystem with a vital role. Sharks come in all shapes and sizes, and inhabit every ocean, both the shallow and deep-end ecosystems.

    Most recorded human-shark interactions are awe-inspiring and not terrifying. Sharks don’t really care about people – at most they may be curious, but not hungry for human flesh. Whether or not “Jaws” fans have grown beyond the fear of movie monster sharks, we’re gonna need a bigger conservation effort to continue to protect these important ocean guardians.

    Gareth J. Fraser receives funding from the National Science Foundation (NSF).

    ref. 50 years after ‘Jaws,’ researchers have retired the man-eater myth and revealed more about sharks’ amazing biology – https://theconversation.com/50-years-after-jaws-researchers-have-retired-the-man-eater-myth-and-revealed-more-about-sharks-amazing-biology-258151

    MIL OSI – Global Reports

  • MIL-OSI Global: 50 years after ‘Jaws,’ researchers have retired the man-eater myth and revealed more about sharks’ amazing biology

    Source: The Conversation – USA – By Gareth J. Fraser, Associate Professor of Evolutionary Developmental Biology, University of Florida

    The shark in ‘Jaws’ became a terrifying icon. Universal Pictures via Getty Images

    The summer of 1975 was the summer of “Jaws.”

    The movie was adapted from a novel by Peter Benchley.
    Universal History Archive/Universal Images Group via Getty Images

    The first blockbuster movie sent waves of panic and awe through audiences. “Jaws” – the tale of a killer great white shark that terrorizes a coastal tourist town – captured people’s imaginations and simultaneously created a widespread fear of the water.

    To call Steven Spielberg’s masterpiece a creature feature is trite. Because the shark isn’t shown for most of the movie – mechanical difficulties meant production didn’t have one ready to use until later in the filming process – suspense and fear build. The movie unlocked in viewers an innate fear of the unknown, encouraging the idea that monsters lurk beneath the ocean’s surface, even in the shallows.

    And because in 1975 marine scientists knew far less than we do now about sharks and their world, it was easy for the myth of the rogue shark as a murderous eating machine to take hold, along with the assumption that all sharks must be bloodthirsty, mindless killers.

    People lined up to get scared by the murderous shark at the center of the ‘Jaws’ movie.
    Bettmann Archive via Getty Images

    But in addition to scaring many moviegoers that “it’s not safe to go in the water,” “Jaws” has over the years inspired generations of researchers, including me. The scientific curiosity sparked by this horror fish flick has helped reveal so much more about what lies beneath the waves than was known 50 years ago. My own research focuses on the secret lives of sharks, their evolution and development, and how people can benefit from the study of these enigmatic animals.

    The business end of sharks: Their jaws and teeth

    My own work has focused on perhaps the most terrifying aspect of these apex predators, the jaws and teeth. I study the development of shark teeth in embryos.

    Small-spotted catshark embryo (Scyliorhinus canicula), still attached to the yolk sac. This is the stage when the teeth begin developing.
    Ella Nicklin, Fraser Lab, University of Florida

    Sharks continue to make an unlimited supply of tooth replacements throughout life – it’s how they keep their bite constantly sharp.

    Hard-shelled prey, such as mollusks and crustaceans, from sandy substrates can be more abrasive for teeth, requiring quicker replacement. Depending on the water temperature, the conveyor belt-like renewal of an entire row of teeth can take between nine and 70 days, for example, in nurse sharks, or much longer in larger sharks. In the great white, a full-row replacement can take an estimated 250 days. That’s still an advantage over humans – we never regrow damaged or worn-out adult teeth.

    Magnified microscope image of a zebra shark (Stegostoma tigrinum) jaw. They have 20 to 30 rows of teeth in each jaw, each a new generation ready to move into position like on a conveyor belt. Humans have only two sets!
    Gareth Fraser, University of Florida

    Interestingly, shark teeth are much like our own, developing from equivalent cells, patterned by the same genes, creating the same hard tissues, enamel and dentin. Sharks could potentially teach researchers how to master the process of tooth renewal. It would be huge for dentistry if scientists could use sharks to figure out how to engineer a new generation of teeth for human patients.

    Extraordinary fish with extraordinary biology

    As a group, sharks and their cartilaginous fish relatives – including skates, rays and chimaeras – are evolutionary relics that have inhabited the Earth’s oceans for over 400 million years. They’ve been around since long before human beings and most of the other animals on our planet today hit the scene, even before dinosaurs emerged.

    Sharks have a vast array of super powers that scientists have only recently discovered.

    Their electroreceptive pores, located around the head and jaws, have amazing sensory capabilities, allowing sharks to detect weak electrical fields emitted from hidden prey.

    CT scan of the head of a small-spotted catshark (Scyliorhinus canicula) as it hatches. Skin denticles cover the surface, and colored rows of teeth are present on the jaws.
    Ella Nicklin, Fraser Lab, University of Florida

    Their skin is protected with an armor of tiny teeth, called dermal denticles, composed of sensitive dentin, that also allows for better drag-reducing hydrodynamics. Biologists and engineers are also using this “shark skin technology” to design hydrodynamic and aerodynamic solutions for future fuel-efficient vehicles.

    Fluorescent skin of the chain catshark (Scyliorhinus retifer).
    Gareth Fraser, University of Florida

    Some sharks are biofluorescent, meaning they emit light in different wavelengths after absorbing natural blue light. This emitted fluorescent color pattern suggests visual communication and recognition among members of the same species is possible in the dark depths.

    Sharks can migrate across huge global distances. For example, a silky shark was recorded traveling 17,000 miles (over 27,000 kilometers) over a year and a half. Hammerhead sharks can even home in on the Earth’s magnetic field to help them navigate.

    Greenland sharks exhibit a lengthy aging process and live for hundreds of years. Scientists estimated that one individual was 392 years old, give or take 120 years.

    Still much about sharks remains mysterious. We know little about their breeding habits and locations of their nursery grounds. Conservation efforts are beginning to target the identification of shark nurseries as a way to manage and protect fragile populations.

    Tagging programs and their “follow the shark” apps allow researchers to learn more about these animals’ lives and where they roam – highlighting the benefit of international collaboration and public engagement for conserving threatened shark populations.

    Sharks under attack

    Sharks are an incredible evolutionary success story. But they’re also vulnerable in the modern age of human-ocean interactions.

    Sharks are an afterthought for the commercial fishing industry, but overfishing of other species can cause dramatic crashes in shark populations. Their late age of sexual maturity – as old as 15 to 20 years or more in larger species or potentially 150 years in Greenland sharks – along with slow growth, long gestation periods and complex social structures make shark populations fragile and less capable of quick recoveries.

    Take the white shark (Carcharodon carcharias), for example – Jaws’ own species. Trophy hunting, trade in their body parts and commercial fishery impacts caused their numbers to dwindle. As a result, they received essential protections at the international level. In turn, their numbers have rebounded, especially around the United States, leading to a shift from critically endangered to vulnerable status worldwide. However, they remain critically endangered in Europe and the Mediterranean.

    Protections and conservation measures have helped white sharks make a comeback.
    Dave Fleetham/Design Pics Editorial/Universal Images Group via Getty Images

    “Jaws” was filmed on the island of Martha’s Vineyard, in Massachusetts. After careful management and the designation of white sharks as a prohibited species in federal waters in 1997 and in Massachusetts in 2005, their populations have recovered well over recent years in response to more seals in the area and recovering fish stocks.

    You might assume more sharks would mean more attacks, but that is not what we observe. Shark attacks have always been few and far between in Massachusetts and elsewhere, and they remain rare. It’s only a “Jaws”-perpetuated myth that sharks have a taste for humans. Sure, they might mistake a person for prey; for instance, surfers and swimmers can mimic the appearance of seals at the surface. Sharks in murky water might opportunistically take a test bite of what seem to be prey.

    But these attacks are rare enough that people can shed their “Jaws”-driven irrational fears of sharks. Almost all sharks are timid, and the likelihood of an interaction – let alone a negative one – is incredibly rare. Importantly, there more than 500 species of sharks in the world’s oceans, each one a unique member of a particular ecosystem with a vital role. Sharks come in all shapes and sizes, and inhabit every ocean, both the shallow and deep-end ecosystems.

    Most recorded human-shark interactions are awe-inspiring and not terrifying. Sharks don’t really care about people – at most they may be curious, but not hungry for human flesh. Whether or not “Jaws” fans have grown beyond the fear of movie monster sharks, we’re gonna need a bigger conservation effort to continue to protect these important ocean guardians.

    Gareth J. Fraser receives funding from the National Science Foundation (NSF).

    ref. 50 years after ‘Jaws,’ researchers have retired the man-eater myth and revealed more about sharks’ amazing biology – https://theconversation.com/50-years-after-jaws-researchers-have-retired-the-man-eater-myth-and-revealed-more-about-sharks-amazing-biology-258151

    MIL OSI – Global Reports

  • MIL-OSI Global: ‘Jaws’ and the two musical notes that changed Hollywood forever

    Source: The Conversation – USA – By Jared Bahir Browsh, Assistant Teaching Professor of Critical Sports Studies, University of Colorado Boulder

    Many film historians see ‘Jaws’ as the first true summer blockbuster. Steve Kagan/Getty Images

    “Da, duh.”

    Two simple notes – E and F – have become synonymous with tension, fear and sharks, representing the primal dread of being stalked by a predator.

    And they largely have “Jaws” to thank.

    Fifty years ago, Steven Spielberg’s blockbuster film – along with its spooky score composed by John Williams – convinced generations of swimmers to think twice before going in the water.

    As a scholar of media history and popular culture, I decided to take a deeper dive into the staying power of these two notes and learned about how they’re influenced by 19th-century classical music, Mickey Mouse and Alfred Hitchcock.

    When John Williams proposed the two-note theme for ‘Jaws,’ Steven Spielberg initially thought it was a joke.

    YouTube video not showing up for me

    The first summer blockbuster

    In 1964, fisherman Frank Mundus killed a 4,500-pound great white shark off Long Island.

    After hearing the story, freelance journalist Peter Benchley began pitching a novel based on three men’s attempt to capture a man-eating shark, basing the character of Quint off of Mundus. Doubleday commissioned Benchley to write the novel, and in 1973, Universal Studios producers Richard D. Zanuck and David Brown purchased the film rights to the novel before it was published. The 26-year-old Spielberg was signed on to be the director.

    Tapping into both mythical and real fears regarding great white sharks – including an infamous set of shark attacks along the Jersey Shore in 1916 – Benchley’s 1974 novel became a bestseller. The book was a key part of Universal’s marketing campaign, which began several months before the film’s release.

    Starting in the fall of 1974, Zanuck, Brown and Benchley appeared on a number of radio and television programs to simultaneously promote the release of the paperback edition of the novel and the upcoming film. The marketing also included a national television advertising campaign that featured emerging composer Williams’ two-note theme. The plan was for a summer release, which, at the time, was reserved for films with less than stellar reviews.

    TV ads promoting the film featured John Williams’ two-note theme.

    Films at the time typically were released market by market, preceded by local reviews. However, Universal’s decision to release the film in hundreds of theaters across the country on June 20, 1975, led to huge up-front profits, sparking a 14-week run as the No. 1 film in the U.S.

    Many consider “Jaws” the first true summer blockbuster. It catapulted Spielberg to fame and kicked off the director’s long collaboration with Williams, who would go on to earn the second-highest number of Academy Award nominations in history – 54 – behind only Walt Disney’s 59.

    The film’s beating heart

    Though it’s now considered one of the greatest scores in film history, when Williams proposed the two-note theme, Spielberg initially thought it was a joke.

    But Williams had been inspired by 19th and 20th century composers, including Claude Debussy, Igor Stravinsky and especially Antonin Dvorak’s Symphony No. 9, “From the New World.” In the “Jaws” theme, you can hear echoes of the end of Dvorak’s symphony, as well as the sounds of another character-driven musical piece, Sergei Prokofiev’s “Peter and the Wolf.”

    “Peter and the Wolf” and the score from “Jaws” are both prime examples of leitmotifs, or a musical piece that represents a place or character.

    The varying pace of the ostinato – a musical motif that repeats itself – elicits intensifying degrees of emotion and fear. This became more integral as Spielberg and the technical team struggled with the malfunctioning pneumatic sharks that they’d nicknamed “Bruce,” after Spielberg’s lawyer.

    As a result, the shark does not appear until the 81-minute mark of the 124-minute film. But its presence is felt through Williams’ theme, which some music scholars have theorized evoke the shark’s heartbeat.

    Mechanical issues with ‘Bruce,’ the mechanical shark, during filming forced Steven Spielberg to rely more on mood and atmosphere.
    Screen Archives/Moviepix via Getty Images

    Sounds to manipulate emotions

    Williams also has Disney to thank for revolutionizing character-driven music in film.

    The two don’t just share a brimming trophy case. They also understood how music can heighten emotion and magnify action for audiences.

    Although his career started in the silent film era, Disney became a titan of film, and later media, by leveraging sound to establish one of the greatest stars in media history, Mickey Mouse.

    When Disney saw “The Jazz Singer” in 1927, he knew that sound would be the future of film.

    On Nov. 18, 1928, “Steamboat Willie” premiered at Universal’s Colony Theater in New York City as Disney’s first animated film to incorporate synchronized sound.

    Unlike previous attempts to bring sound to film by having record players concurrently play or deploying live musicians to perform in the theater, Disney used technology that recorded sound directly on the film reel.

    It wasn’t the first animated film with synchronized sound, but it was a technical improvement to previous attempts at it, and “Steamboat Willie” became an international hit, launching Mickey’s – and Disney’s – career.

    The use of music or sound to match the rhythm of the characters on screen became known as “Mickey Mousing.”

    “King Kong” in 1933 would deftly deploy Mickey Mousing in a live action film, with music mimicking the giant gorilla’s movements. For example, in one scene, Kong carries away Ann Darrow, who’s played by actress Fay Wray. Composer Max Steiner uses lighter tones to convey Kong’s curiosity as he holds Ann, followed by ominous, faster, tones as Ann escapes and Kong chases after her. In doing so, Steiner encourages viewers to both fear and connect with the beast throughout the film, helping them suspend disbelief and enter a world of fantasy.

    Mickey Mousing declined in popularity after World War II. Many filmmakers saw it as juvenile and too simplistic for the evolving and advancing film industry.

    When less is more

    In spite of this criticism, the technique was still used to score some iconic scenes, like the playing of violins in the shower as Marion Crane is stabbed in Alfred Hitchcock’s “Psycho.”

    Spielberg idolized Hitchcock. A young Spielberg was even kicked off the Universal lot after sneaking on to watch the production of Hitchcock’s 1966 film “Torn Curtain.”

    Although Hitchcock and Spielberg never met, “Jaws” clearly exhibits the influence of Hitchcock, the “Master of Suspense.” And maybe that’s why Spielberg initially overcame his doubts about using something so simple to represent tension in the thriller.

    Steven Spielberg was just 26 years old when he signed on to direct ‘Jaws.’
    Universal/Getty Images

    The use of the two-note motif helps overcome the production issues Spielberg faced directing the first feature length movie to be filmed on the ocean. The malfunctioning animatronic shark forced Spielberg to leverage Williams’ minimalist theme to represent the shark’s ominous presence in spite of the limited appearances by the eponymous predatory star.

    As Williams continued his legendary career, he would deploy a similar sonic motif for certain “Star Wars” characters. Each time Darth Vader appeared, the “Imperial March” was played to set the tone for the leader of the dark side.

    As movie budgets creep closer to a half-billion dollars, the “Jaws” theme – and the way those two notes manipulate tension – is a reminder that in film, sometimes less can be more.

    Jared Bahir Browsh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. ‘Jaws’ and the two musical notes that changed Hollywood forever – https://theconversation.com/jaws-and-the-two-musical-notes-that-changed-hollywood-forever-255379

    MIL OSI – Global Reports

  • MIL-OSI Global: Ticks carry decades of history in each troublesome bite

    Source: The Conversation – USA – By Sean Lawrence, Assistant Professor of History, West Virginia University

    The black-legged tick, or deer tick, _Ixodes scapularis_, can transmit Lyme disease and other health hazards. U.S. Centers for Disease Control and Prevention

    When you think about ticks, you might picture nightmarish little parasites, stalking you on weekend hikes or afternoons in the park.

    Your fear is well-founded. Tick-borne diseases are the most prevalent vector-borne diseases – those transmitted by living organisms – in the United States. Each tick feeds on multiple animals throughout its life, absorbing viruses and bacteria along the way and passing them on with its next bite. Some of those viruses and bacteria are harmful to humans, causing diseases that can be debilitating and sometimes lethal without treatment, such as Lyme, babesiosis and Rocky Mountain spotted fever.

    But contained in every bite of this infuriating, insatiable pest is also a trove of social, environmental and epidemiological history.

    In many cases, human actions long ago are the reason ticks carry these diseases so widely today. And that’s what makes ticks fascinating for environmental historians like me.

    Ticks can be tiny and hard to spot. This is an adult and nymph Ixodes scapularis on an adult’s index finger.
    CDC

    Changing forests fueled tick risks

    During the 18th and 19th centuries, settlers cleared more than half the forested land across the northeastern U.S., cutting down forests for timber and to make way for farms, towns and mining operations. With large-scale land clearing came a sharp decline in wildlife of all kinds. Predators such as bears and wolves were driven out, as were deer.

    As farming moved westward, Northeasterners began to recognize the ecological and economic value of trees, and they returned millions of acres to forest.

    The woods regrew. Plant-eaters such as deer returned, but the apex predators that once kept their populations in check did not.

    As a result, deer populations carrying borrelia burgdorferi, the bacterium that causes Lyme disease, grew rapidly. And with the deer came deer ticks (Ixodes scapularis). When a tick feeds on an infected deer, it can take up the bacteria. The tick isn’t harmed, but it can pass the bacteria to its next victim. In humans, Lyme disease can cause fever and fatigue, and if left untreated it can affect the nervous system.

    The eastern U.S. became a global hot spot for tick-borne Lyme disease starting around the 1970s. Lyme disease affected over 89,000 Americans in 2023, and possibly many more.

    Californians move into tick territory

    For centuries, changing patterns of human settlements and the politics of land use have shaped the role of ticks and tick-borne illnesses within their environments.

    In short, humans have made it easier for ticks to thrive and spread disease in our midst.

    In California, the Northern Inner Coast and Santa Cruz mountain ranges that converge on San Francisco from the north and south were never clear-cut, and predators such as mountain lions and coyotes still exist there. But competition for housing has pushed human settlement deeper into wildland areas to the north, south and east of the city, reshaping tick ecology there.

    A range map for the western black-legged tick.
    National Center for Emerging and Zoonotic Infectious Diseases

    While western black-legged ticks (Ixodes pacificus) tend to swarm in large forest preserves, the Lyme-causing bacterium is actually more prevalent in small, isolated patches of greenery. In these isolated patches, rodents and other tick hosts can thrive, safe from large predators, which need more habitat to move freely. But isolation and lower diversity also means infections are spread more easily within the tick’s host populations.

    People tend to build isolated houses in the hills, rather than large, connected developments. As the Silicon Valley area south of San Francisco sprawls outward, this checkerboard pattern of settlement has fragmented the natural landscape, creating a hard-to-manage public health threat.

    Fewer hosts, more tightly packed, often means more infected hosts, proportionally, and thus more dangerous ticks.

    A tick’s mouth is barbed so it can hold on as it draws blood over hours.
    National Institute of Allergy and Infectious Diseases

    Six counties across these ranges, all surrounding and including San Francisco, account for 44% of recorded tick-borne illnesses in California.

    A lesson from Texas cattle ranches

    Domesticated livestock have also shaped the disease threat posed by ticks.

    In 1892, at a meeting of cattle ranchers at the Stock Raiser’s Convention in Austin, Texas, Dr. B.A. Rogers introduced a novel theory that ticks were behind recent devastating plagues of Texas cattle fever. The disease had arrived with cattle imported from the West Indies and Mexico in the 1600s, and it was taking huge tolls on cattle herds. But how the disease spread to new victims had been a mystery.

    A 1905 illustration of Rhipicephalus annulatus, a hard tick that causes cattle fever.
    Nathan Banks, A treatise on the Acarina, or mites. Proceedings of the United States National Museum

    Editors of Daniel’s Texas Medical Journal found the idea of ticks spreading disease laughable and lampooned the hypothesis, publishing a satire of what they described as an “early copy” of a forthcoming report on the subject.

    The tick’s “fluid secretion, it is believed, is the poison which causes the fever … [and the tick] having been known to chew tobacco, as all other Texans do, the secretion is most probably tobacco juice,” they wrote.

    Fortunately for the ranchers, not to mention the cows, the U.S. Department of Agriculture sided with Rogers. Its cattle fever tick program, started in 1906, curbed cattle fever outbreaks by limiting where and when cattle should cross tick-dense areas.

    Engorged ticks feed on a calf’s ear.
    Alan R Walker, CC BY-NC-SA

    By 1938, the government had established a quarantine zone that extended 580 miles by 10 miles along the U.S.-Mexico border in South Texas Brush Country, a region favored by the cattle tick.

    This innovative use of natural space as a public health tool helped to functionally eradicate cattle fever from 14 Southern states by 1943.

    Ticks are products of their environment

    When it comes to tick-borne diseases the world over, location matters.

    Take the hunter tick (Hyalomma spp.) of the Mediterranean and Asia. As a juvenile, or nymph, these ticks feed on small forest animals such as mice, hares and voles, but as an adult they prefer domesticated livestock.

    For centuries, this tick was an occasional nuisance to nomadic shepherds of the Middle East. But in the 1850s, the Ottoman Empire passed laws to force nomadic tribes to become settled farmers instead. Unclaimed lands, especially on the forested edges of the steppe, were offered to settlers, creating ideal conditions for hunter ticks.

    As a result, farmers in what today is Turkey saw spikes in tick-borne diseases, including a virus that causes Crimean-Congo hemorrhagic fever, a potentially fatal condition.

    Where to check for ticks and how to remove them.

    It’s probably too much to ask for sympathy for any ticks you meet this summer. They are bloodsucking parasites, after all.

    Still, it’s worth remembering that the tick’s malevolence isn’t its own fault. Ticks are products of their environment, and humans have played many roles in turning them into the harmful parasites that seek us out today.

    Sean Lawrence has nothing to disclose.

    ref. Ticks carry decades of history in each troublesome bite – https://theconversation.com/ticks-carry-decades-of-history-in-each-troublesome-bite-257110

    MIL OSI – Global Reports

  • MIL-OSI Global: AI helps tell snow leopards apart, improving population counts for these majestic mountain predators

    Source: The Conversation – USA – By Eve Bohnett, Assistant Scholar, Center for Landscape Conservation Planning, University of Florida

    Snow leopards are hard to find and count, which makes protecting them difficult. zahoor salmi/Moment via Getty Images

    Snow leopards are known as the “ghosts of the mountains” for a reason. Imagine waiting for months in the harsh, rugged mountains of Asia, hoping to catch even a glimpse of one. These elusive big cats move silently across rocky slopes, their pale coats blending so seamlessly with snow and stone that even the most seasoned biologists seldom spot them in the wild.

    Travel writer Peter Matthiessen spent two months in 1973 searching the Tibetan plateau for them and wrote a 300-page book about the effort. He never saw one. Forty years later, Peter’s son Alex retraced his father’s steps – and didn’t see one either.

    Researchers have struggled to come up with a figure for the global population. In 2017, the International Union for Conservation of Nature reclassified the snow leopard from endangered to vulnerable, citing estimates of between 2,500 and 10,000 adults in the wild. However, the group also warned that numbers continue to decline in many areas due to habitat loss, poaching and human-wildlife conflict. Those who study these animals want to help protect the species and their habitat – if only we can determine exactly where they live and how many there are.

    Traditional tracking methods – searching for footprints, droppings and other signs – have their limits. Instead of waiting for a lucky face-to-face encounter, conservationists from the Wildlife Conservation Society, led by experts including Stéphane Ostrowski and Sorosh Poya Faryabi, began deploying automated camera traps in Afghanistan. These devices snap photos whenever movement is detected, capturing thousands of images over months, all in hopes of obtaining a rare glimpse of a snow leopard.

    But capturing images is only half the battle. The next, even harder task is telling one snow leopard apart from another.

    Are these the same animal or different ones? It’s really hard to tell.
    Eve Bohnett, CC BY-ND

    At first glance, it might sound simple: Each snow leopard has a unique pattern of black rosettes on its coat, like a fingerprint or a face in a crowd. Yet in practice, identifying individuals by these patterns is slow, subjective and prone to error. Photos may be taken at odd angles, under poor lighting, or with parts of the animal obscured – making matches tricky.

    A common mistake happens when photos from different cameras are marked as depicting different animals when they actually show the same individual, inflating population estimates. Worse, camera trap images can get mixed up or misfiled, splitting encounters of one cat across multiple batches and identities.

    I am a data analyst working with Wildlife Conservation Society and other partners at Wild Me. My work and others’ has found that even trained experts can misidentify animals, failing to recognize repeat visitors at locations monitored by motion-sensing cameras and counting the same animal more than once. One study found that the snow leopard population was overestimated by more than 30% because of these human errors.

    To avoid these pitfalls, researchers follow camera sorting guidelines: At least three clear pattern differences or similarities must be confirmed between two images to declare them the same or different cats. Images too blurry, too dark or taken from difficult angles may have to be discarded. Identification efforts range from easy cases with clear, full-body shots to ambiguous ones needing collaboration and debate. Despite these efforts, variability remains, and more experienced observers tend to be more accurate.

    Now people trying to count snow leopards are getting help from artificial intelligence systems, in two ways.

    Spotting the spots

    Modern AI tools are revolutionizing how we process these large photo libraries. First, AI can rapidly sort through thousands of images, flagging those that contain snow leopards and ignoring irrelevant ones such as those that depict blue sheep, gray-and-white mountain terrain, or shadows.

    Unique spots and spot patterns are key to telling snow leopards apart.
    Eve Bohnett, CC BY-NC-ND

    AI can identify individual snow leopards by analyzing their unique rosette patterns, even when poses or lighting vary. Each snow leopard encounter is compared with a catalog of previously identified photos and assigned a known ID if there is a match, or entered as a new individual if not.

    In a recent study, several colleagues and I evaluated two AI algorithms, both separately and in tandem.

    The first algorithm, called HotSpotter, identifies individual snow leopards by comparing key visual features such as coat patterns, highlighting distinctive “hot spots” with a yellow marker.

    The second is a newer method called pose invariant embeddings, which operates similar to facial recognition technology: It recognizes layers of abstract features in the data, identifying the same animal regardless of how it is positioned in the photo or what kind of lighting there may be.

    We trained these systems using a curated dataset of photos of snow leopards from zoos in the U.S., Europe and Tajikistan, and with images from the wild, including in Afghanistan.

    Alone, each model worked about 74% of the time, correctly identifying the cat from a large photo library. But when combined, the two systems together were correct 85% of the time.

    These algorithms were integrated into Wildbook, an open-source, web-based software platform developed by the nonprofit organization Wild Me and now adopted by ConservationX. We deployed the combined system on a free website, Whiskerbook.org, where researchers can upload images, seek matches using the algorithms, and confirm those matches with side-by-side comparisons. This site is among a growing family of AI-powered wildlife platforms that are helping conservation biologists work more efficiently and more effectively at protecting species and their habitats.

    A view from an online wildlife-tracking system suggests a possible match for a snow leopard caught by a remote camera.
    Wildbook/Eve Bohnett, CC BY-ND

    Humans still needed

    These AI systems aren’t error-proof. AI quickly narrows down candidates and flags likely matches, but expert validation ensures accuracy, especially with tricky or ambiguous photos.

    Another study we conducted pitted AI-assisted groups of experts and novices against each other. Each was given a set of three to 10 images of 34 known captive snow leopards and asked to use the Whiskerbook platform to identify them. They were also asked to estimate how many individual animals were in the set of photos.

    The experts accurately matched about 90% of the images and delivered population estimates within about 3% of the true number. In contrast, the novices identified only 73% of the cats and underestimated the total number, sometimes by 25% or more, incorrectly merging two individuals into one.

    Both sets of results were better than when experts or novices did not use any software.

    The takeaway is clear: Human expertise remains important, and combining it with AI support leads to the most accurate results. My colleagues and I hope that by using tools like Whiskerbook and the AI systems embedded in them, researchers will be able to more quickly and more confidently study these elusive animals.

    With AI tools like Whiskerbook illuminating the mysteries of these mountain ghosts, we have another way to safeguard snow leopards – but success depends on continued commitment to protecting their fragile mountain homes.

    Eve Bohnett receives funding from San Diego State Research Foundation and Wildlife Conservation Society. She is affiliated with University of Florida.

    ref. AI helps tell snow leopards apart, improving population counts for these majestic mountain predators – https://theconversation.com/ai-helps-tell-snow-leopards-apart-improving-population-counts-for-these-majestic-mountain-predators-258154

    MIL OSI – Global Reports

  • MIL-OSI Global: Germany’s young Jewish and Muslim writers are speaking for themselves – exploring immigrant identity beyond stereotypes

    Source: The Conversation – USA – By Agnes Mueller, Carol Kahn Strauss Fellow in Jewish Studies at the American Academy in Berlin, Professor of German and American Literature, University of South Carolina

    A Muslim guest sits next to a Jewish one during an ordination ceremony at the Rykestrasse Synagogue in Berlin in September 2024. Omer Messinger/Getty Images

    The consequences of Hamas’ Oct. 7, 2023, attack and Israel’s war in Gaza have reverberated far beyond the zones of conflict.

    In the United States, for example, a growing number of people, including some Jewish groups, assert that political leaders are exploiting concerns about antisemitism for their own political goals, from cracking down on academic freedom to deporting pro-Palestinian activists.

    Debate about the war in Gaza feels fraught in Germany, too, where concerns about rising antisemitism have been used to criticize some Muslim communities. The Holocaust looms over discussions about Israel, with many claiming the country’s sense of historical guilt has made it, until recently, reluctant to criticize Israeli politics.

    In the wake of the country’s reunification in the early 1990s, about 200,000 Jews from Eastern Europe and the former Soviet Union came to Germany. In more recent years, waves of predominantly Muslim refugees from the Middle East have entered a space that already had a large population of Turkish immigrants and their descendants. However, many Germans oppose these more open immigration policies, with widespread backlash against Muslim migrants.

    In recent decades, some of Germany’s migrants and their children – some Jewish, and some Muslim – have used fiction to explore their identity and these contested issues in new ways, challenging simple narratives. As a scholar of German literature and Jewish studies, I have studied how literature creates new spaces for readers to explore the similarities between their experiences, building solidarity beyond stereotypes.

    ‘The Prodigal Son’

    Many of today’s young Jewish writers were born in the former Soviet Union and arrived in Germany with their parents as part of the “quota refugee” program. Initiated in the early 1990s, this program invited Jewish migrants into a newly unified Germany – intended to show that the country was taking responsibility for the atrocities of the past. The newcomers were flippantly called “Wiedergutmachungsjuden,” “make-good-again Jews,” referring to Germans’ desire to atone.

    One of them was Olga Grjasnowa. Born in 1984, Grjasnowa came from Azerbaijan to Germany at age 11. She has written about Holocaust memory, as in her 2012 novel “All Russians Love Birch Trees,” and said in a 2018 interview that all her books are “Jewish books.”

    Olga Grjasnowa during the Edinburgh International Book Festival on Aug. 22, 2019, in Scotland.
    Roberto Ricciuti/Getty Images

    Her 2021 book “Der verlorene Sohn,” “The Prodigal Son,” echoes Holocaust memory, but in a historical novel set in 19th-century Russia.

    The protagonist Jamaluddin – the name derives from the Arabic word for “beauty of the faith” – is born in the Caucasian region of Dagestan, as the son of a powerful Muslim imam. To negotiate a peace deal, the boy is given as a hostage to Russia, where he grows up in the Orthodox Christian court of the czar. Though initially treated as an outsider, Jamaluddin assimilates and becomes a high-ranking officer, a life that ends when he must return to Dagestan. But there, too, he now feels homeless, regarded with suspicion as a stranger.

    “The Prodigal Son” deals with abduction, deportation, exile and constant wandering. Jamaluddin’s fate is shaped by authoritarianism, repression, war and discrimination – themes that are familiar in Holocaust literature, though here they befall a Muslim boy in another time and place.

    Repeatedly, the novel makes mention of Jewish communities and their own suffering under the czar. As Jewish boys are being forced to march from remote villages to Saint Petersburg, Jamaluddin is “furious and ashamed” of his fellow officers. But he also begins to feel self-pity, flooded with memories of his own departure from home.

    This scene depicts a historical reality under Czar Nicholas I, who ruled from 1825-1855: Russian Jewish boys were conscripted, sometimes kidnapped, to serve in the army. For contemporary audiences, the description can also evoke the death marches of Jewish prisoners during the Shoah, the Hebrew term for the Holocaust. Several additional moments in the book connect Jamaluddin’s experiences with images of Jewish flight and expulsion.

    New conversations

    Jamaluddin’s fate as an outsider between cultures can also bring to mind migrants’ experiences and emotions today. In 2022, one-quarter of Germans were either migrants themselves or had a parent who was not born in Germany. The largest minority group are Muslim-born Germans of Turkish descent, who are still routinely discriminated against.

    Antisemitism, meanwhile, is pervasive but less obvious. The Germans’ relationship with Jews was long dominated by silence and guilt – and Jews themselves were mostly invisible until the end of the Cold War, when Jewish migration from the former Soviet states picked up. My 2015 book “The Inability to Love” describes how mainstream German authors, fueled by guilt and shame over the Nazi past, fell into a philosemitic antisemitism: Outward displays of repentance for the Holocaust and public policies that ostensibly embraced Jews clashed with privately held prejudice.

    Many examples of new German literature show contemporary Jewish and Muslim characters with complex identities – protagonists who are not seen as simply Jewish, Muslim or belonging to only one culture, pushing back on reductive stereotypes.

    For example, Kat Kaufmann’s 2015 novel “Superposition” tells the story of the young, popular and charismatic Izy, a Russian Jew who lives in Berlin as a jazz pianist. Her love interest is Timur, an Eastern European man with a typically Muslim name. When Izy thinks of her and Timur’s future son, she imagines him growing up with the luxury to conceal where he is from – to define his identity as he wishes, unlike previous generations.

    Writer Fatma Aydemir speaks at a reading in Cologne, Germany, on March 21, 2022.
    Oliver Berg/picture alliance via Getty Images

    Stories by novelists such as Dmitrij Kapitelman, Lena Gorelik, Marina Frenk and Dana Vowinckel also depict moments of connection between Jews and other Germans, or between Jews and Muslims.

    Turkish and/or Muslim writers such as Fatma Aydemir and Nazlı Koca – who now lives in America, writing in English – tell similar stories of young characters navigating German culture as marginalized individuals. They often depict young women who struggle to reconcile their culture of origin with German social expectations and xenophobia today.

    “I wanted to question the idea that we all have one single identity and that’s it,” Aydemir told the literary site K24 about her novel “Ellbogen,” whose protagonist finds herself fleeing to Turkey, her family’s original home, after a personal crisis. “I think things are way more complex, more fluid than most of us want to believe.”

    This younger generation of German Jewish and Muslim writers is recasting entrenched debates, showing characters whose identities are multidimensional and more open than the burdened past or fraught present politics would suggest. Today’s young writers are creating new, brave spaces for conversation and empathy.

    Agnes Mueller does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Germany’s young Jewish and Muslim writers are speaking for themselves – exploring immigrant identity beyond stereotypes – https://theconversation.com/germanys-young-jewish-and-muslim-writers-are-speaking-for-themselves-exploring-immigrant-identity-beyond-stereotypes-252968

    MIL OSI – Global Reports

  • MIL-OSI Global: Gay Men’s Health Crisis showed how everyday people stepped up when institutions failed during the height of the AIDS epidemic – providing a model for today

    Source: The Conversation – USA – By Sean G. Massey, Associate Professor of Women, Gender and Sexuality Studies, Binghamton University, State University of New York

    GMHC was the world’s first AIDS service organization.
    Sean Massey, CC BY-ND

    The story of the AIDS movement is one of regular people: students, bartenders, stay-at-home mothers, teachers, retired lawyers, immigrants, Catholic nuns, newly out gay men who had just arrived in New York, and many others. Some had lost friends or lovers. Some felt a moral calling. Some were just trying to balance their sexual karma. Many were angry. Most had no medical background or professional credentials – just a sense of urgency, tenacity and an unwillingness to look away.

    When Gay Men’s Health Crisis, the world’s first AIDS service organization, was founded in 1982, it was regular people trying to meet the needs of all people living with AIDS. Its workforce of volunteers provided HIV prevention education as well as physical, emotional and legal support.

    At the start of the epidemic, AIDS was considered a “gay plague,” and to be openly queer was to risk abandonment, eviction, assault or worse. Families disowned their children. Hospitals turned patients away. Funeral homes refused bodies. And many people with AIDS found themselves alone and in need.

    Public officials didn’t just fail to act – they refused to acknowledge that anything was happening at all. Elected leaders such as President Ronald Reagan and Sen. Jesse Helms stoked the moral panic guiding public policy by declaring people with AIDS “perverted human being(s).”

    In 2025, with the Trump administration cutting federal funding for HIV research and support services and restricting protections and services for LGBTQ+ people, studying how everyday people approached the early AIDS crisis provides a model for surviving through innovation, commitment and community.

    Stories informing the present

    “I think 26,000 people died before (Reagan) even bothered to utter the word ‘AIDS,’” said Tim Sweeney, former executive director of Gay Men’s Health Crisis.

    This quote is featured in the GMHC Stories Oral History Project, a collection of over 100 interviews with former volunteers, staff and donors from the first 15 years of the organization. Along with our colleague Julia Haager, we and our team at Binghamton University’s Human Sexualities Lab compiled these interviews. Acquired by the Manuscripts and Archives Division of The New York Public Library, the collection is scheduled to open in fall 2025, showcasing how everyday people responded to the AIDS crisis.

    These stories document how a community presented with a set of circumstances threatening their very existence built a self-sustaining organization to advocate for and provide care to each other outside institutional support. They did this while enduring grief, standing up to external threats and navigating internal tensions.

    The GMHC stood up for the community when other institutions would not.
    Sean Massey, CC BY-ND

    Improvisation for survival

    The work was an ongoing challenge. Organizations dedicated to aiding people affected by AIDS such as Gay Men’s Health Crisis were left to fund their own survival – and defend their right to do the work. When North Carolina Sen. Jesse Helms moved in 1988 to eliminate federal support for AIDS service programs that mentioned homosexuality, it severely limited AIDS prevention efforts nation wide. However, GMHC had the foresight to fund its more explicit education materials with private donations.

    At the beginning of the epidemic, queer New Yorkers and their allies had to improvise new systems of care in the absence of state and federal support. “People often (ask) me, what was the model you worked off of?” said Sweeney. “And I said, there was no model, there was just a muddle. We just made it up the whole time.”

    What they created almost overnight was staggering. “There were over 1,000 volunteers in the agency,” recalled staff member Tom Weber, who started at GMHC as an office volunteer in 1988. “We would have orientations every single week, and they would flood in.”

    One of the most well-known expressions of that volunteer labor was the buddy program, where lay caregivers provided emotional and practical support to people living with AIDS. “A lot of people were not alone in their death because of the work that we did,” said Barbara Danish, who led the buddy program from 1996 to 2002.

    Community members took it upon themselves to educate each other about AIDS.
    AP Photo/Marty Lederhandler

    Education and prevention were also grounded in queer culture and community. Unlike early depictions of AIDS in the media that reduced patients to “vectors” of transmission, it was defiantly sex-positive. “We came up with shit that no one in the world had ever done,” Sweeney said. “Because finally it was gay men saying … we’re going to talk to each other about how to stay safe, healthy and sexy.”

    When that sense of mission extended to emotional survival, humor and unapologetically queer culture were critical to bearing the weight of the work. “Sometimes you just break down and cry for an hour. But that’s how you survive it – by staying authentic to your emotions,” said Tommy Thomson, former director of client programs. She recalled how staff member “Carolotta,” or Carl, would sometimes put condoms and chocolate in a basket and go from office to office, frequently in drag. He would offer either or both to make people feel better. “He’d make you remember that you weren’t alone, and that we all know how hard it is. That’s part of what held you together.”

    Internal tensions

    Although Gay Men’s Health Crisis remained mission-driven, its internal politics were never simple. As it grew in size and national stature, it confronted the limits of its founding identity.

    Founded by, and initially serving, primarily white gay men, GMHC sometimes struggled to adapt to the emerging realities of the epidemic. While AIDS also affected people of color, women and intravenous drug users from the outset, much of the agency’s early prevention and outreach work was designed with gay men in mind.

    By the late 1980s, the increase in AIDS cases among white gay men had begun to plateau, while rates among Black and Latino people, women and IV drug users continued to rise sharply into the next decade. Women and people of color who were deeply embedded in GMHC’s operations nonetheless had to navigate assumptions about whose needs were prioritized – assumptions that often manifested in how resources were allocated and services were designed. As GMHC expanded its outreach to Black and Latino populations, it struggled to be culturally responsive and build trust in communities that had long been underserved and stigmatized.

    Racial disparities in HIV persist.

    As GMHC grew, it became more and more successful in fundraising and visibility, while smaller organizations sometimes struggled to access resources. This led to growing tensions, particularly in communities of color, where local groups feared that GMHC’s expansion would limit funding and undercut their efforts at community-specific approaches to care and prevention. In addition, efforts to address racism, sexism and cultural insensitivity encountered both support and indifference.

    Yet, staff and volunteers continued to push – reshaping messaging, fighting for inclusive programming, and holding conversations about race, gender, power and public health. For staff and volunteers, the agency was a complicated institution that could both empower and marginalize. Its strength, and its struggle, was learning how to expand without losing sight of the legacy and history it was built on.

    A guide for today

    Forty years later, LGBTQ+ people face a new set of crises in a landscape riddled with dangers.

    Trans health care is being banned in multiple states. Book bans and surveillance laws are targeting queer youth. Anti-LGBTQ+ rhetoric is fueling violence and censorship. Funding for HIV prevention and research is disappearing even as new infections persist. Black and brown communities still face disproportionate barriers to health care and housing. Decades of scientific progress and medical discoveries are coming to a halt with funding cuts under the Trump administration.

    Protesters at the Iowa state Capitol in February 2025, demonstrating against a bill that would remove protections based on gender identity from the state civil rights code.
    AP Photo/Charlie Neibergall

    And yet many of the same questions and challenges remain: Who gets left behind when public health systems collapse under political pressure or moral panic? Who will do the work when institutions fail? What does it mean to care for one another in the midst of the wreckage? How do people come together across differences?

    The history of GMHC is more than memory – it is a lesson in the possibility of care, creativity and community, especially in the face of fear and uncertainty today. It shows how people can come together – not just to demand policy change, but to directly meet one another’s needs with whatever resources they have. It is a reminder that mutual aid is powerful; that grief can coexist with joy; and that queer resilience has always included laughter, desire and shared vulnerability. In a time of renewed political backlash and public health failures, GMHC’s story is more than history – it’s a guide. Today, the staff and volunteers at GMHC continue their work to confront the epidemic and uplift the lives of all people affected by AIDS.

    “We’d say to them, ‘You’re just ordinary citizens doing extraordinary things,’” Sweeney said. “And we really meant that.”

    Sean G. Massey was a volunteer and staff member at Gay Men’s Health Crisis (GMHC), the organization that is being discussed in this article, from 1988-1998.

    Casey W. Adrian and Eden Lowinger do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Gay Men’s Health Crisis showed how everyday people stepped up when institutions failed during the height of the AIDS epidemic – providing a model for today – https://theconversation.com/gay-mens-health-crisis-showed-how-everyday-people-stepped-up-when-institutions-failed-during-the-height-of-the-aids-epidemic-providing-a-model-for-today-258139

    MIL OSI – Global Reports

  • MIL-OSI Global: Testing between intervals: a key to retaining information in long-term memory

    Source: The Conversation – France – By Émilie Gerbier, Maîtresse de Conférence en Psychologie, Université Côte d’Azur

    The proverb “practice makes perfect” highlights the importance of repetition to master a skill. This principle also applies to learning vocabulary and other material. In order to fight our natural tendency to forget information, it is essential to reactivate it in our memory. But, how often?

    Research in cognitive psychology provides answers to this question. However, it is also important to understand underlying principles of long-term learning to apply them in a useful and personalised way.

    The ‘spacing effect’

    There are two key principles for memorising information in the long term.

    First, test yourself to learn and review content. It is much more effective to do this using question-and-answer cards than just to reread the material. After each attempt to recall pieces of information, review the one that could not be retrieved.

    The second principle is to space out reactivations over time. This phenomenon, known as the “spacing effect”, suggests that when reviews of specific content are limited to, for instance, three sessions, it is preferable to space them over relatively longer periods (eg every three days) rather than shorter ones (every day).

    Reviewing material at long intervals requires more effort, because it is more difficult to recall information after three days than one. However, it is precisely this effort that reinforces memories and promotes long-term retention.

    When it comes to learning, we must therefore be wary of effortlessness: easily remembering a lesson today does not indicate how likely we are to remember it in a month, even though this feeling of easiness can cause us to mistakenly believe that review is unnecessary.

    Robert Bjork of the University of California coined the term “desirable difficulty” to describe an optimal level of difficulty between two extremes. The first extreme corresponds to learning that is too easy (and therefore ineffective in the long run), while the other extreme corresponds to learning that is too difficult (and therefore ineffective and discouraging).

    Finding the right pace

    There is a limit to how much time can pass between information retrievals. After a long delay, such as a year, information will have greatly declined in memory and will be difficult, if not impossible, to recall. This situation may generate negative emotions and force us to start learning from scratch, rendering our previous efforts useless.

    The key is to identify the right interval between retrievals, ensuring it is not too long and not too short. The ideal interval varies depending on several factors, such as the type of information that needs to be learned or the history of that learning. Some learning software use algorithms taking these factors into account, to test each piece of information at the “ideal” time.

    There are also paper-and-pencil methods. The simplest method is to follow an “expansive” schedule, which uses increasingly longer intervals between sessions. This technique is used in the “méthode des J” (method of days), which some students may be familiar with. The effectiveness of this method lies in a gradual strengthening of the memory.



    A weekly e-mail in English featuring expertise from scholars and researchers. It provides an introduction to the diversity of research coming out of the continent and considers some of the key issues facing European countries. Get the newsletter!


    When you first learn something, retention is fragile, and memorised content needs to be reactivated quickly not to be forgotten. Each retrieval strengthens the memory, allowing the next retrieval opportunity to be delayed. Another consequence is that each retrieval is moderately difficult, which places the learner at a “desirable” level of difficulty.

    Here is an example of an expansive schedule for a given piece of content: Day 1, Day 2, D5, D15, D44, D145, D415, etc. In this schedule, the interval length triples from one session to the next: 24 hours between Day 1 and Day 2, then three days between D2 and D5, and so on.

    Gradually incorporating new knowledge

    There is no scientific consensus on the optimal interval schedule. However, carrying out the first retrieval on the day after the initial moment of learning (thus, on D2) seems beneficial, as a night’s sleep allows the brain to restructure and/or reinforce knowledge learned the previous day. The subsequent intervals can be adjusted according to individual constraints.

    This method is flexible; if necessary, a session can be postponed a few days before or after the scheduled date without affecting long-term effectiveness. It is the principle of regular retrieval that is key here.

    The expansive schedule also has a considerable practical advantage in that it allows new information to be gradually integrated. For instance, new content can be introduced on D3, because no session on the initial content is scheduled for that day. Adding content gradually makes it possible to memorise large amounts of information in a lasting way without spending more time studying it.

    The other method is based on the Leitner box system. In this case, the length of interval before the next retrieval depends on the outcome of the attempt to retrieve information from memory. If the answer was easily retrieved, the next retrieval should happen in a week. If the answer was retrieved with difficulty, then three days need to elapse before the next test. If the answer could not be retrieved, the next test should take place the following day. With experience, you will be able to adjust these intervals and develop your own system.

    In short, effective and lasting learning not only requires that a certain amount of effort be made to retrieve information from memory, but a regular repetition of this process, at appropriate intervals, to thwart the process of forgetting.

    Émilie Gerbier ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d’une organisation qui pourrait tirer profit de cet article, et n’a déclaré aucune autre affiliation que son organisme de recherche.

    ref. Testing between intervals: a key to retaining information in long-term memory – https://theconversation.com/testing-between-intervals-a-key-to-retaining-information-in-long-term-memory-246511

    MIL OSI – Global Reports

  • MIL-OSI Global: South Africa’s cricket team just made history: how the ‘chokers’ became world champions

    Source: The Conversation – Global Perspectives – By Mogammad Sharhidd Taliep, Associate Professor, Cape Peninsula University of Technology

    When Kyle Verreynne hit the winning runs at the “home of cricket” (Lord’s Cricket Ground in London) on 14 June, South Africa erupted in celebration. The Proteas had just claimed their first major cricket cup in history. And nothing less than the International Cricket Council World Test Championship at that, the premier international competition for five-day (test) cricket that’s played over two years.

    Branded as “chokers” for 26 years for underperforming or spoiling their advantage in crunch situations in major tournaments, the national men’s cricket team has transformed to become world champions.

    I’m a sport scientist with a focus on cricket. Research can help us understand how the Proteas have managed to do this and what core qualities of a winning team they’ve embodied on their way to turning things around.

    What is choking?

    The term “chokers” started being used to describe the Proteas team after the 1999 International Cricket Council Men’s Cricket World Cup semi-finals for games played over one day. The Proteas gave up a commanding position against Australia. This curse tormented them in high-stakes games, particularly world cups, where they often ended second best.

    In sports psychology, choking has been defined as:

    An acute and considerable decrease in skill execution and performance when self-expected standards are normally achievable, which is the result of increased anxiety under perceived pressure performance decline when highly motivated individuals are subjected to pressure.

    Anxiety disrupts a player’s automatic motor response, leading to poor decisions and inaccurate skill execution. This happens at critical moments of the game. And the aftermath of these continued inferior performances can lead to a long-lasting stigma.

    Proteas captain Temba Bavuma emphasised this in his match-winning speech:

    We have gone through the heartache, we have gone through the pain, seeing it with past players.

    Clutch performance

    The opposite to choking is clutch performance. This can be defined as improved or maintained performance under pressure. Some of the contributing characteristics of clutch performances are confidence, complete and deliberate focus, automatic movements, and the absence of negative thoughts.

    I believe the shift towards these clutch characteristics was the difference in the Proteas shrugging off their “choker” curse.

    What made the difference?

    Bavuma, in the post-match interview, recounted how teammate Aiden Markram embodied those clutch qualities, calmly telling Bavuma after every over:

    Lock in and give them nothing.

    In interviews Proteas coach Shukri Conrad stressed how calm the players were. He pointed out Markram and Bavuma for their poise and reliability under pressure, another defining trait of expert performers.

    Conrad emphasised the importance of removing distraction by telling them to “play the conditions” and not the situation. This allows players to focus on the moment and not be overwhelmed by the broader context of the match.

    The calm and composed demeanour of Bavuma and Markram as they prepared to face the barrage of deliveries during their match-defining partnership also relates to a phenomenon scientists refer to as the “quiet eye”.




    Read more:
    What is cricket’s World Test Championship and how did Australia qualify for the final?


    The quiet eye is the period of visual fixation or visual tracking of the body cues of the bowler and the early ball flight trajectory before the execution of a motor task. It’s been associated with superior performance under pressure.

    Bavuma and Markram were able to sustain long periods of quiet eye while processing critical information from the bowlers’ action and early ball path, while remaining focused on task-relevant cues, all the while blocking out anxiety-related distractions.

    Conrad succeeded because he was able to combine cultural wisdom and emotional intelligence to truly transform the psychology and ability of the Proteas team.

    His philosophy of selection, “character first then matching up the skill”, pays tribute to his vision of peaking when it counts – a quality lacking in Proteas teams of the past.

    When Conrad was first appointed as Proteas coach, he made two big decisions. He selected Bavuma as captain and he recalled a struggling test batter, Markram. Conrad explained:

    Obviously Temba, a quiet leader, leads from the back, but certainly from the front with the bat … Aiden Markram was always going to be my opening bat. He always delivers on the big stage.

    The vision of Conrad to appoint Bavuma captain has resulted in a record 10 successive test wins. In the winning match Bavuma led from the front and held firm. He was up to the task with the bat, and despite suffering a hamstring injury during the game, was able to join forces with Markram in the fourth innings to set up a match-winning third wicket partnership of 143 runs.

    Three of the most experienced players for South Africa in test matches, Bavuma, Markram and Kagiso Rabada, stood out as true champions in this final. Markram scored a match-winning 136 runs in the fourth innings, while Rabada laid the foundation for victory by taking a decisive nine wickets.




    Read more:
    T20 World Cup: South Africa reached its first final ever – but staying at the top will take a rethink of junior cricket


    For the first time in 26 years, the senior Proteas players all stepped up when it mattered most to secure a world championship. Conrad bore testimony to this in the post-match interview:

    When our two senior pros in Aiden and Temba put that big stand together, I felt that is obviously where the game was won for us.

    The Proteas’ victory on 14 June 2025 lifted a 26-year choker curse. With the visionary leadership of Conrad and the composed stewardship of Bavuma, the Proteas revealed that mental clarity, cultural cohesion, and emotional intelligence were key to their success. The “chokers” tag is buried beneath the turf of the “home of cricket”.

    Mogammad Sharhidd Taliep does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. South Africa’s cricket team just made history: how the ‘chokers’ became world champions – https://theconversation.com/south-africas-cricket-team-just-made-history-how-the-chokers-became-world-champions-259167

    MIL OSI – Global Reports

  • MIL-OSI Global: South Africa’s cricket team just made history: how the ‘chokers’ became world champions

    Source: The Conversation – Global Perspectives – By Mogammad Sharhidd Taliep, Associate Professor, Cape Peninsula University of Technology

    When Kyle Verreynne hit the winning runs at the “home of cricket” (Lord’s Cricket Ground in London) on 14 June, South Africa erupted in celebration. The Proteas had just claimed their first major cricket cup in history. And nothing less than the International Cricket Council World Test Championship at that, the premier international competition for five-day (test) cricket that’s played over two years.

    Branded as “chokers” for 26 years for underperforming or spoiling their advantage in crunch situations in major tournaments, the national men’s cricket team has transformed to become world champions.

    I’m a sport scientist with a focus on cricket. Research can help us understand how the Proteas have managed to do this and what core qualities of a winning team they’ve embodied on their way to turning things around.

    What is choking?

    The term “chokers” started being used to describe the Proteas team after the 1999 International Cricket Council Men’s Cricket World Cup semi-finals for games played over one day. The Proteas gave up a commanding position against Australia. This curse tormented them in high-stakes games, particularly world cups, where they often ended second best.

    In sports psychology, choking has been defined as:

    An acute and considerable decrease in skill execution and performance when self-expected standards are normally achievable, which is the result of increased anxiety under perceived pressure performance decline when highly motivated individuals are subjected to pressure.

    Anxiety disrupts a player’s automatic motor response, leading to poor decisions and inaccurate skill execution. This happens at critical moments of the game. And the aftermath of these continued inferior performances can lead to a long-lasting stigma.

    Proteas captain Temba Bavuma emphasised this in his match-winning speech:

    We have gone through the heartache, we have gone through the pain, seeing it with past players.

    Clutch performance

    The opposite to choking is clutch performance. This can be defined as improved or maintained performance under pressure. Some of the contributing characteristics of clutch performances are confidence, complete and deliberate focus, automatic movements, and the absence of negative thoughts.

    I believe the shift towards these clutch characteristics was the difference in the Proteas shrugging off their “choker” curse.

    What made the difference?

    Bavuma, in the post-match interview, recounted how teammate Aiden Markram embodied those clutch qualities, calmly telling Bavuma after every over:

    Lock in and give them nothing.

    In interviews Proteas coach Shukri Conrad stressed how calm the players were. He pointed out Markram and Bavuma for their poise and reliability under pressure, another defining trait of expert performers.

    Conrad emphasised the importance of removing distraction by telling them to “play the conditions” and not the situation. This allows players to focus on the moment and not be overwhelmed by the broader context of the match.

    The calm and composed demeanour of Bavuma and Markram as they prepared to face the barrage of deliveries during their match-defining partnership also relates to a phenomenon scientists refer to as the “quiet eye”.




    Read more:
    What is cricket’s World Test Championship and how did Australia qualify for the final?


    The quiet eye is the period of visual fixation or visual tracking of the body cues of the bowler and the early ball flight trajectory before the execution of a motor task. It’s been associated with superior performance under pressure.

    Bavuma and Markram were able to sustain long periods of quiet eye while processing critical information from the bowlers’ action and early ball path, while remaining focused on task-relevant cues, all the while blocking out anxiety-related distractions.

    Conrad succeeded because he was able to combine cultural wisdom and emotional intelligence to truly transform the psychology and ability of the Proteas team.

    His philosophy of selection, “character first then matching up the skill”, pays tribute to his vision of peaking when it counts – a quality lacking in Proteas teams of the past.

    When Conrad was first appointed as Proteas coach, he made two big decisions. He selected Bavuma as captain and he recalled a struggling test batter, Markram. Conrad explained:

    Obviously Temba, a quiet leader, leads from the back, but certainly from the front with the bat … Aiden Markram was always going to be my opening bat. He always delivers on the big stage.

    The vision of Conrad to appoint Bavuma captain has resulted in a record 10 successive test wins. In the winning match Bavuma led from the front and held firm. He was up to the task with the bat, and despite suffering a hamstring injury during the game, was able to join forces with Markram in the fourth innings to set up a match-winning third wicket partnership of 143 runs.

    Three of the most experienced players for South Africa in test matches, Bavuma, Markram and Kagiso Rabada, stood out as true champions in this final. Markram scored a match-winning 136 runs in the fourth innings, while Rabada laid the foundation for victory by taking a decisive nine wickets.




    Read more:
    T20 World Cup: South Africa reached its first final ever – but staying at the top will take a rethink of junior cricket


    For the first time in 26 years, the senior Proteas players all stepped up when it mattered most to secure a world championship. Conrad bore testimony to this in the post-match interview:

    When our two senior pros in Aiden and Temba put that big stand together, I felt that is obviously where the game was won for us.

    The Proteas’ victory on 14 June 2025 lifted a 26-year choker curse. With the visionary leadership of Conrad and the composed stewardship of Bavuma, the Proteas revealed that mental clarity, cultural cohesion, and emotional intelligence were key to their success. The “chokers” tag is buried beneath the turf of the “home of cricket”.

    Mogammad Sharhidd Taliep does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. South Africa’s cricket team just made history: how the ‘chokers’ became world champions – https://theconversation.com/south-africas-cricket-team-just-made-history-how-the-chokers-became-world-champions-259167

    MIL OSI – Global Reports

  • MIL-OSI Global: G20 countries could produce enough renewable energy for the whole world – what needs to happen

    Source: The Conversation – Global Perspectives – By Sven Teske, Prof. Dr. | Research Director, Institute for Sustainable Futures, University of Technology Sydney

    The world’s most developed economies have also burnt the most oil and coal (fossil fuels) over the years, causing the most climate change damage. Preventing further climate change means a global fossil fuel phase-out must happen by 2050. Climate change mitigation scientists Sven Teske and Saori Miyake analysed the potential for renewable energy in each of the G20 countries. They concluded that the G20 is in a position to generate enough renewable energy to supply the world. For African countries to benefit, they must adopt long term renewable energy plans and policies and secure finance from G20 countries to set up renewable energy systems.

    Why is the G20 so important in efforts to limit global warming?

    The G20 group accounts for 67% of the world’s population, 85% of global gross domestic product, and 75% of global trade. The member states are the G7 (the US, Japan, Germany, the UK, France, Italy, Canada), plus Australia, China, India, Indonesia, Republic of Korea, Russia, Türkiye, Saudi Arabia, South Africa, Mexico, Brazil and Argentina.

    We wanted to find out how G20 member states could limit global warming. Our study examined the solar and wind potential for each of G20 member countries (the available land and solar and wind conditions). We then compared this with projected electricity demands for 2050. This is, to our knowledge, the first research of its kind.




    Read more:
    G20 is too elite. There’s a way to fix that though – economists


    We found that the potential for renewable energy in G20 countries is very high – enough to supply the projected 2050 electricity demand for the whole world. They have 33.6 million km² of land on which solar energy projects could be set up, or 31.1 million km² of land on which wind energy projects could be set up.

    This potential varies by geography. Not all G20 countries have the same conditions for generating solar and wind energy, but collectively, the G20 countries have enough renewable energy potential to supply the world’s energy needs.

    But for the G20 countries to limit global warming, they also need to stop emitting greenhouse gases. Recent figures show that the G20 countries were responsible for generating 87% of all energy-related carbon dioxide emissions that cause global warming.

    On the other hand, African Union countries (apart from South Africa, which is a high greenhouse gas emitter), were responsible for only 1.2% of the global total historical emissions until 2020.

    The G20 countries with the highest renewable energy potential (especially Australia and Canada) are major exporters of the fossil fuels that cause global warming. Along with every other country in the world, the G20 nations will need to end their human-caused carbon emissions by 2050 to prevent further climate change.

    Where does Africa fit into the picture?

    African countries cannot set up new electricity plants based on burning fossil fuels, like coal. If they do that, the world will never end human-caused greenhouse gas emissions by 2050. The continent must generate electricity for the 600 million Africans who do not currently have it but will need to move straight past fossil fuels and into renewable energy.

    For this, Africa will need finance. The African Union hosts the G20 summit later this year. This meeting begins just after the world’s annual climate change conference (now in its 30th year and known as COP30). These two summits will give Africa the chance to lobby for renewable energy funding from wealthier nations.

    Africa already has the conditions needed to move straight into renewable energy. The continent could be generating an amount of solar and wind power that far exceeds its projected demand for electricity between now and 2050.

    We are launching an additional analysis of the solar and wind potential of the entire African continent in Bonn, Germany on 19 June 2025 at a United Nations conference. This shows that only 3% of Africa’s solar and wind potential needs to be converted to real projects to supply Africa’s future electricity demand.




    Read more:
    Africa’s power pools: what the G20 can do to help countries share electricity


    This means that Africa has great untapped potential to supply the required energy for its transition to a middle-income continent – one of the African Union’s goals in Agenda 2063, its 50 year plan.

    But to secure enough finance for the continent to build renewable energy systems, African countries need long-term energy policies. These are currently lacking.

    So what needs to be done?

    The countries who signed up to the 2015 international climate change treaty (the Paris Agreement) have committed to replacing polluting forms of energy such as coal, fuelwood and oil with renewable energy.

    South Africa, through its G20 presidency, must encourage G20 nations to reduce their greenhouse gas emissions and support renewable energy investment in Africa.




    Read more:
    Fossil fuels are still subsidised: G20 could push for the funds to be shifted to cleaner energy


    Because financing the global energy transition is already high on the priority list of most countries, South Africa should push for change on three fronts: finance, sound regulations and manufacturing capacity for renewable technologies. These are the among the main obstacles for renewables, particularly in Africa.

    Finance: Financing the energy transition is among the highest priorities for COP30. Therefore, the COP30 meeting will be an opportunity for the African Union to negotiate finance for its renewable energy infrastructure needs.

    For this, fair and just carbon budgets are vital. A carbon budget sets out how much carbon dioxide can still be emitted in order for the global temperature not to rise more than 2°C higher than it was before the 1760 industrial revolution.

    A global carbon budget (the amount of emissions the whole world is allowed) has been calculated, but it needs to be divided up fairly so that countries that have polluted most are compelled to limit this.

    To divide the global carbon budget fairly, energy pathways need to be developed urgently that consider:

    • future developments of population and economic growth

    • current energy supply systems

    • transition times for decarbonisation

    • local renewable energy resources.

    The G20 platform should be used to lobby for fair and just carbon budgets.




    Read more:
    Wealthy nations owe climate debt to Africa – funds that could help cities grow


    Sound regulations that support the setting up of new factories: Governments must put policies in place to support African solar and wind companies. These are needed to win the trust of investors to invest in a future multi-billion dollar industry. Long-term, transparent regulations are needed too.

    These regulations should:

    • say exactly how building permits for solar and wind power plants will be granted

    • prioritise linking renewable energy plants to national electricity grids

    • release standard technical specifications for stand-alone grids to make sure they’re all of the same quality.

    Taking steps now to speed up big renewable energy industries could mean that African countries end up with more energy than they need. This can be exported and increase financial income for countries.

    Sven Teske receives funding from the European Climate Foundation and Power Shift Africa (PSA).

    Saori Miyake receives funding from European Climate Foundation and Power Shift Africa.

    ref. G20 countries could produce enough renewable energy for the whole world – what needs to happen – https://theconversation.com/g20-countries-could-produce-enough-renewable-energy-for-the-whole-world-what-needs-to-happen-258463

    MIL OSI – Global Reports

  • MIL-OSI Global: Southeast Asian nations look to hedge their way out of troubled waters in the South China Sea

    Source: The Conversation – Global Perspectives – By John Rennie Short, Professor Emeritus of Public Policy, University of Maryland, Baltimore County

    A Philippine coast guard vessel patrols near Pagasa, part of the Spratly Islands in the disputed South China Sea. Daniel Ceng/Anadolu via Getty Images

    The South China Sea has long been a bubbling geopolitical hot spot. Recently, a series of moves by the various nations claiming a stake in the waters has stirred up yet more trouble.

    Malaysia has of late reaffirmed its commitment to oil and gas exploration in waters claimed by China while quietly building up its military on the islands off Borneo.

    Meanwhile, Chinese coast guard vessels have deployed water cannons against Filipino fishing boats. And the accidental grounding of a Chinese boat in shallow waters around the Philippines’ Thitu Island on June 8, 2025, was enough to put Filipino forces on alert.

    Vietnam, too, has been active in the disputed waters. A Beijing-based think tank on June 7 flagged that Vietnamese engineers had been busy reclaiming land and installing military-related ports and airstrips around the Spratly Islands.

    What the three Southeast Asian nations of Vietnam, the Philippines and Malaysia have in common is that they, along with others in the region, are trying to navigate a more assertive China at a time when the U.S. policy intentions under the second Trump Administration are fluid and hard to read. And in lieu of a coordinated response from the regional body Association of Southeast Asian Nations, or ASEAN, each member nation has been busy charting its course in these choppy waters.

    US-China relations all at sea

    Why is China trying to assert control in the South China Sea? In a 2023 speech, President Xi Jinping noted that “Western countries led by the United States have implemented all round containment, encirclement and suppression of China.”

    This fear has been long held in Beijing and was reinforced by a U.S. Indo-Pacific policy announced in 2011 of rebalancing military forces away from Europe and toward Asia to confront China.

    In response, China has in recent years embarked on an ambitious policy of attempting to outmuscle U.S. naval power in the South China Sea.

    China is now the world’s leading builder of naval vessels and is estimated to have 440 battleships by 2030, compared with the United States’ 300.

    And it comes at a time when U.S. naval power is spread around the world. China’s, meanwhile, is concentrated around the South China Sea where, since 2013, Chinese vessels have pumped sand onto reefs, turning them into islands and then weaponizing them.

    Satellite imagery shows the Fiery Cross Reef in the South China Sea, part of the Spratly Islands group, being built by Chinese dredges.
    Maxar via Getty Images

    Then there is the activity of China’s maritime militia of approximately 300 nominally fishing boats equipped with water cannons and reinforced hulls for ramming. This so-called gray zone fleet is increasingly active in confronting Southeast Asia nations at sea.

    The U.S. response to China’s militarization in the sea has been through so-called “freedom of navigation” exercises that often deploy carrier groups in a show of force. But these episodic displays are more performative than effective, doing little to deter China’s claims.

    The U.S. has also strengthened military alliances with Australia, India, Japan and the Philippines, and has increased coast guard cooperation with the Philippines and Japan.

    A fleet from the U.S. Navy patrolling the Pacific Ocean.
    Sean M. Castellano US Navy via Getty Images

    The sea is a valuable resource

    Yet the battle over control of the South China Seas is more than just geopolitical posturing between the two superpowers.

    For adjoining countries, the sea is a valuable biological resource with rich fishing grounds that provide a staple of fish protein for close to 2 billion people. There are estimates of 190 trillion cubic feet of natural gas and 11 billion barrels of oil.

    The U.N. Convention on the Law of the Sea, or UNCLOS, guarantees a nation an exclusive economic zone (EEZ) of 200 nautical miles from around its coastline.

    China is a signatory of the UNCLOS. Yet it views ownership of the South China Sea through the lens of its nine-dash line, a reference to the boundary line that Beijing has invoked since 1948. While the claim has no legal or historical basis, the delineation makes major incursions into waters around Vietnam, the Philippines and Malaysia and, to a lesser extent, Brunei and Indonesia as well.

    Despite China’s expansive claim to the South China Sea being dismissed in 2016 by the international Permanent Court of Arbitration, Beijing continues to assert its claim.

    Hedging positions

    As I explore in my recent book “Hedging and Conflict in the South China Sea,” part of the problem Southeast Asian nations face is that they have failed to forge a unified position.

    ASEAN, the regional bloc representing 10 nations in Southeast Asia, has long been governed by the principle that major decisions need unanimous agreement. China is a major trading partner to ASEAN nations, so any regional country aligning too close to the U.S. comes with the real risk of economic consequences. And two ASEAN members, Cambodia and Laos, are especially close to China, making it difficult to generate a unified ASEAN policy that confronts China’s maritime claim.

    Instead, ASEAN has promoted a regional code of conduct that effectively legitimizes China’s maritime claims, fails to mention the 2016 ruling and ignores the issue of conflicting claims.

    Further complicating a united front against China is the competing claims among ASEAN nations themselves to disputed islands in the South China Sea.

    In lieu of a coordinated response, Southeast Asian nations have instead turned to hedging — that is, maintaining good relationships with both China and the U.S. without fully committing to one or other.

    A balancing act for Vietnam, Malaysia and the Philippines

    Malaysia’s approach sees its government partition off the South China Sea dispute from its overall bilateral ties with China while continuing to promote an ASEAN code of conduct.

    Until recently, Malaysia’s oil and gas activities were well within Malaysia’s EEZ and not far enough out to fall into China’s nine-dash claim.

    But as these close-to-shore fields become exhausted, subsequent exploration will need to extend outward and into China’s nine-dash claim, putting Malaysia’s dealings with China under pressure.

    China’s nine-dash line claims a significant amount of Vietnam’s EEZ, and the contested maritime area is a source of friction between the two countries; China’s maritime militia regularly harasses Vietnamese fishermen and disrupts drilling operations in Vietnam’s EEZ .

    But Vietnam has to tread carefully. China plays a significant role in the Vietnamese economy as a major destination of exports and an important provider of foreign investment. China also has the ability to dam the Mekong River upstream of Vietnam — something that would disrupt agricultural production.

    As a result, Vietnam’s hedging involves a careful calibration to avoid angering China. However, part of Vietnam’s heavy hedging involves the promotion of the South China Sea dispute as a core issue for domestic public opinion, which limits the Vietnamese government’s ability to offer concessions to China.

    A Philippine coast guard ship and fishing boats are seen in El Nido, Palawan, Philippines, on May 26, 2025.
    Daniel Ceng/Anadolu via Getty ImagesDaniel Ceng/Anadolu via Getty Images

    China’s nine-dash claim also includes a wide swath of the Philippines’ EEZ.

    The Philippines has zigzagged in its dealings with China. The presidencies of Gloria Macapagal Arroyo (2001–2010) and Rodrigo Duterte (2016-2022) pursued a pro-China tack that downplayed Filipino claims in the South China Sea. Presidents Benigno Aquino (2010-2016) and Ferdinand “Bongbong” Marcos Jr. (2022-present), in contrast, have given U.S. forces greater access to its maritime bases and mobilized national and international opinion in favor of its claims.

    Since coming to power, Marcos has also pursued even closer naval ties with the U.S.. But this has come at a cost: China now views the Philippines as a U.S. ally. As such, Beijing sees little to be gained by pulling back from its assertive activity in and around its waters.

    The future

    In the shadow of two major powers battling for power in the South China Sea, Southeast Asian nations are making the best of their position along a geopolitical fracture line by advancing their claims and interests while not overly antagonizing a more assertive China or losing the support of the U.S.

    This may work to tamp down tensions in the South China Sea. But it is a fluid approach not without risk, and it could yet prove to be another source of instability in a geopolitically contested and dangerous region.

    John Rennie Short received funding from Fulbright Foundation

    ref. Southeast Asian nations look to hedge their way out of troubled waters in the South China Sea – https://theconversation.com/southeast-asian-nations-look-to-hedge-their-way-out-of-troubled-waters-in-the-south-china-sea-257092

    MIL OSI – Global Reports

  • MIL-OSI Global: Iran’s long history of revolution, defiance and outside interference – and why its future is so uncertain

    Source: The Conversation – Global Perspectives – By Amin Saikal, Emeritus Professor of Middle Eastern and Central Asian Studies, Australian National University; and Vice Chancellor’s Strategic Fellow, Victoria University

    Israeli Prime Minister Benjamin Netanyahu has gone beyond his initial aim of destroying Iran’s ability to produce nuclear weapons. He has called on the Iranian people to rise up against their dictatorial Islamic regime and ostensibly transform Iran along the lines of Israeli interests.

    United States President Donald Trump is now weighing possible military action in support of Netanyahu’s goal and asked for Iran’s total surrender.

    If the US does get involved, it wouldn’t be the first time it’s tried to instigate regime change by military means in the Middle East. The US invaded Iraq in 2003 and backed a NATO operation in Libya in 2011, toppling the regimes of Saddam Hussein and Muammar Gaddafi, respectively.

    In both cases, the interventions backfired, causing long-term instability in both countries and in the broader region.

    Could the same thing happen in Iran if the regime is overthrown?

    As I describe in my book, Iran Rising: The Survival and Future of the Islamic Republic, Iran is a pluralist society with a complex history of rival groups trying to assert their authority. A democratic transition would be difficult to achieve.

    The overthrow of the shah

    The Iranian Islamic regime assumed power in the wake of the pro-democracy popular uprising of 1978–79, which toppled Mohammad Reza Shah Pahlavi’s pro-Western monarchy.

    Until this moment, Iran had a long history of monarchical rule dating back 2,500 years. Mohammad Reza, the last shah, was the head of the Pahlavi dynasty, which came to power in 1925.

    In 1953, the shah was forced into exile under the radical nationalist and reformist impulse of the democratically elected Prime Minister Mohammad Mosaddegh. He was shortly returned to his throne through a CIA-orchestrated coup.

    Despite all his nationalist, pro-Western, modernising efforts, the shah could not shake off the indignity of having been re-throned with the help of a foreign power.

    The revolution against him 25 years later was spearheaded by pro-democracy elements. But it was made up of many groups, including liberalists, communists and Islamists, with no uniting leader.

    The Shia clerical group (ruhaniyat), led by the Shah’s religious and political opponent, Ayatollah Ruhollah Khomeini, proved to be best organised and capable of providing leadership to the revolution. Khomeini had been in exile from the early 1960s (at first in Iraq and later in France), yet he and his followers held considerable sway over the population, especially in traditional rural areas.

    When US President Jimmy Carter’s administration found it could no longer support the shah, he left the country and went into exile in January 1979. This enabled Khomeini to return to Iran to a tumultuous welcome.

    Birth of the Islamic Republic

    In the wake of the uprising, Khomeini and his supporters, including the current supreme leader Ayatollah Ali Khamenei, abolished the monarchy and transformed Iran to a cleric-dominated Islamic Republic, with anti-US and anti-Israel postures. He ruled the country according to his unique vision of Islam.

    Khomeini denounced the US as a “Great Satan” and Israel as an illegal usurper of the Palestinian lands – Jerusalem, in particular. He also declared a foreign policy of “neither east, nor west” but pro-Islamic, and called for the spread of the Iranian revolution in the region.

    Khomeini not only changed Iran, but also challenged the US as the dominant force in shaping the regional order. And the US lost one of the most important pillars of its influence in the oil-rich and strategically important Persian Gulf region.

    Fear of hostile American or Israeli (or combined) actions against the Islamic Republic became the focus of Iran’s domestic and foreign policy behaviour.

    A new supreme leader takes power

    Khomeini died in 1989. His successor, Ayatollah Ali Khamenei, has ruled Iran largely in the same jihadi (combative) and ijtihadi (pragmatic) ways, steering the country through many domestic and foreign policy challenges.

    Khamenei fortified the regime with an emphasis on self-sufficiency, a stronger defence capability and a tilt towards the east – Russia and China – to counter the US and its allies. He has stood firm in opposition to the US and its allies – Israel, in particular. And he has shown flexibility when necessary to ensure the survival and continuity of the regime.

    Khamenei wields enormous constitutional power and spiritual authority.

    He has presided over the building of many rule-enforcing instruments of state power, including the expansion of the Islamic Revolutionary Guard Corps and its paramilitary wing, the Basij, revolutionary committees, and Shia religious networks.

    The Shia concept of martyrdom and loyalty to Iran as a continuous sovereign country for centuries goes to the heart of his actions, as well as his followers.

    Khamenei and his rule enforcers, along with an elected president and National Assembly, are fully cognisant that if the regime goes down, they will face the same fate. As such, they cannot be expected to hoist the white flag and surrender to Israel and the US easily.

    However, in the event of the regime falling under the weight of a combined internal uprising and external pressure, it raises the question: what is the alternative?

    The return of the shah?

    Many Iranians are discontented with the regime, but there is no organised opposition under a nationally unifying leader.

    The son of the former shah, the crown prince Reza Pahlavi, has been gaining some popularity. He has been speaking out on X in the last few days, telling his fellow Iranians:

    The end of the Islamic Republic is the end of its 46-year war against the Iranian nation. The regime’s apparatus of repression is falling apart. All it takes now is a nationwide uprising to put an end to this nightmare once and for all.

    Since the deposition of his father, he has lived in exile in the US. As such, he has been tainted by his close association with Washington and Jerusalem, especially Netanyahu.

    If he were to return to power – likely through the assistance of the US – he would face the same problem of political legitimacy as his father did.

    What does the future hold?

    Iran has never had a long tradition of democracy. It experienced brief instances of liberalism in the first half of the 20th century, but every attempt at making it durable resulted in disarray and a return to authoritarian rule.

    Also, the country has rarely been free of outside interventionism, given its vast hydrocarbon riches and strategic location. It’s also been prone to internal fragmentation, given its ethnic and religious mix.

    The Shia Persians make up more than half of the population, but the country has a number of Sunni ethnic minorities, such as Kurds, Azaris, Balochis and Arabs. They have all had separatist tendencies.

    Iran has historically been held together by centralisation rather than diffusion of power.

    Should the Islamic regime disintegrate in one form or another, it would be an mistake to expect a smooth transfer of power or transition to democratisation within a unified national framework.

    At the same time, the Iranian people are highly cultured and creative, with a very rich and proud history of achievements and civilisation.

    They are perfectly capable of charting their own destiny as long as there aren’t self-seeking foreign hands in the process – something they have rarely experienced.

    Amin Saikal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Iran’s long history of revolution, defiance and outside interference – and why its future is so uncertain – https://theconversation.com/irans-long-history-of-revolution-defiance-and-outside-interference-and-why-its-future-is-so-uncertain-259270

    MIL OSI – Global Reports

  • MIL-OSI Global: Iran’s long history of revolution, defiance and outside interference – and why its future is so uncertain

    Source: The Conversation – Global Perspectives – By Amin Saikal, Emeritus Professor of Middle Eastern and Central Asian Studies, Australian National University; and Vice Chancellor’s Strategic Fellow, Victoria University

    Israeli Prime Minister Benjamin Netanyahu has gone beyond his initial aim of destroying Iran’s ability to produce nuclear weapons. He has called on the Iranian people to rise up against their dictatorial Islamic regime and ostensibly transform Iran along the lines of Israeli interests.

    United States President Donald Trump is now weighing possible military action in support of Netanyahu’s goal and asked for Iran’s total surrender.

    If the US does get involved, it wouldn’t be the first time it’s tried to instigate regime change by military means in the Middle East. The US invaded Iraq in 2003 and backed a NATO operation in Libya in 2011, toppling the regimes of Saddam Hussein and Muammar Gaddafi, respectively.

    In both cases, the interventions backfired, causing long-term instability in both countries and in the broader region.

    Could the same thing happen in Iran if the regime is overthrown?

    As I describe in my book, Iran Rising: The Survival and Future of the Islamic Republic, Iran is a pluralist society with a complex history of rival groups trying to assert their authority. A democratic transition would be difficult to achieve.

    The overthrow of the shah

    The Iranian Islamic regime assumed power in the wake of the pro-democracy popular uprising of 1978–79, which toppled Mohammad Reza Shah Pahlavi’s pro-Western monarchy.

    Until this moment, Iran had a long history of monarchical rule dating back 2,500 years. Mohammad Reza, the last shah, was the head of the Pahlavi dynasty, which came to power in 1925.

    In 1953, the shah was forced into exile under the radical nationalist and reformist impulse of the democratically elected Prime Minister Mohammad Mosaddegh. He was shortly returned to his throne through a CIA-orchestrated coup.

    Despite all his nationalist, pro-Western, modernising efforts, the shah could not shake off the indignity of having been re-throned with the help of a foreign power.

    The revolution against him 25 years later was spearheaded by pro-democracy elements. But it was made up of many groups, including liberalists, communists and Islamists, with no uniting leader.

    The Shia clerical group (ruhaniyat), led by the Shah’s religious and political opponent, Ayatollah Ruhollah Khomeini, proved to be best organised and capable of providing leadership to the revolution. Khomeini had been in exile from the early 1960s (at first in Iraq and later in France), yet he and his followers held considerable sway over the population, especially in traditional rural areas.

    When US President Jimmy Carter’s administration found it could no longer support the shah, he left the country and went into exile in January 1979. This enabled Khomeini to return to Iran to a tumultuous welcome.

    Birth of the Islamic Republic

    In the wake of the uprising, Khomeini and his supporters, including the current supreme leader Ayatollah Ali Khamenei, abolished the monarchy and transformed Iran to a cleric-dominated Islamic Republic, with anti-US and anti-Israel postures. He ruled the country according to his unique vision of Islam.

    Khomeini denounced the US as a “Great Satan” and Israel as an illegal usurper of the Palestinian lands – Jerusalem, in particular. He also declared a foreign policy of “neither east, nor west” but pro-Islamic, and called for the spread of the Iranian revolution in the region.

    Khomeini not only changed Iran, but also challenged the US as the dominant force in shaping the regional order. And the US lost one of the most important pillars of its influence in the oil-rich and strategically important Persian Gulf region.

    Fear of hostile American or Israeli (or combined) actions against the Islamic Republic became the focus of Iran’s domestic and foreign policy behaviour.

    A new supreme leader takes power

    Khomeini died in 1989. His successor, Ayatollah Ali Khamenei, has ruled Iran largely in the same jihadi (combative) and ijtihadi (pragmatic) ways, steering the country through many domestic and foreign policy challenges.

    Khamenei fortified the regime with an emphasis on self-sufficiency, a stronger defence capability and a tilt towards the east – Russia and China – to counter the US and its allies. He has stood firm in opposition to the US and its allies – Israel, in particular. And he has shown flexibility when necessary to ensure the survival and continuity of the regime.

    Khamenei wields enormous constitutional power and spiritual authority.

    He has presided over the building of many rule-enforcing instruments of state power, including the expansion of the Islamic Revolutionary Guard Corps and its paramilitary wing, the Basij, revolutionary committees, and Shia religious networks.

    The Shia concept of martyrdom and loyalty to Iran as a continuous sovereign country for centuries goes to the heart of his actions, as well as his followers.

    Khamenei and his rule enforcers, along with an elected president and National Assembly, are fully cognisant that if the regime goes down, they will face the same fate. As such, they cannot be expected to hoist the white flag and surrender to Israel and the US easily.

    However, in the event of the regime falling under the weight of a combined internal uprising and external pressure, it raises the question: what is the alternative?

    The return of the shah?

    Many Iranians are discontented with the regime, but there is no organised opposition under a nationally unifying leader.

    The son of the former shah, the crown prince Reza Pahlavi, has been gaining some popularity. He has been speaking out on X in the last few days, telling his fellow Iranians:

    The end of the Islamic Republic is the end of its 46-year war against the Iranian nation. The regime’s apparatus of repression is falling apart. All it takes now is a nationwide uprising to put an end to this nightmare once and for all.

    Since the deposition of his father, he has lived in exile in the US. As such, he has been tainted by his close association with Washington and Jerusalem, especially Netanyahu.

    If he were to return to power – likely through the assistance of the US – he would face the same problem of political legitimacy as his father did.

    What does the future hold?

    Iran has never had a long tradition of democracy. It experienced brief instances of liberalism in the first half of the 20th century, but every attempt at making it durable resulted in disarray and a return to authoritarian rule.

    Also, the country has rarely been free of outside interventionism, given its vast hydrocarbon riches and strategic location. It’s also been prone to internal fragmentation, given its ethnic and religious mix.

    The Shia Persians make up more than half of the population, but the country has a number of Sunni ethnic minorities, such as Kurds, Azaris, Balochis and Arabs. They have all had separatist tendencies.

    Iran has historically been held together by centralisation rather than diffusion of power.

    Should the Islamic regime disintegrate in one form or another, it would be an mistake to expect a smooth transfer of power or transition to democratisation within a unified national framework.

    At the same time, the Iranian people are highly cultured and creative, with a very rich and proud history of achievements and civilisation.

    They are perfectly capable of charting their own destiny as long as there aren’t self-seeking foreign hands in the process – something they have rarely experienced.

    Amin Saikal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Iran’s long history of revolution, defiance and outside interference – and why its future is so uncertain – https://theconversation.com/irans-long-history-of-revolution-defiance-and-outside-interference-and-why-its-future-is-so-uncertain-259270

    MIL OSI – Global Reports

  • MIL-OSI Global: Grok’s ‘white genocide’ responses show how generative AI can be weaponized

    Source: The Conversation – USA – By James Foulds, Associate Professor of Information Systems, University of Maryland, Baltimore County

    Someone altered the AI chatbot Grok to make it insert text about a debunked conspiracy theory in unrelated responses. Cheng Xin/Getty Images

    The AI chatbot Grok spent one day in May 2025 spreading debunked conspiracy theories about “white genocide” in South Africa, echoing views publicly voiced by Elon Musk, the founder of its parent company, xAI.

    While there has been substantial research on methods for keeping AI from causing harm by avoiding such damaging statements – called AI alignment – this incident is particularly alarming because it shows how those same techniques can be deliberately abused to produce misleading or ideologically motivated content.

    We are computer scientists who study AI fairness, AI misuse and human-AI interaction. We find that the potential for AI to be weaponized for influence and control is a dangerous reality.

    The Grok incident

    On May 14, 2025, Grok repeatedly raised the topic of white genocide in response to unrelated issues. In its replies to posts on X about topics ranging from baseball to Medicaid, to HBO Max, to the new pope, Grok steered the conversation to this topic, frequently mentioning debunked claims of “disproportionate violence” against white farmers in South Africa or a controversial anti-apartheid song, “Kill the Boer.”

    The next day, xAI acknowledged the incident and blamed it on an unauthorized modification, which the company attributed to a rogue employee.

    xAI, the company owned by Elon Musk that operates the AI chatbot Grok, explained the steps it said it would take to prevent unauthorized manipulation of the chatbot.

    AI chatbots and AI alignment

    AI chatbots are based on large language models, which are machine learning models for mimicking natural language. Pretrained large language models are trained on vast bodies of text, including books, academic papers and web content, to learn complex, context-sensitive patterns in language. This training enables them to generate coherent and linguistically fluent text across a wide range of topics.

    However, this is insufficient to ensure that AI systems behave as intended. These models can produce outputs that are factually inaccurate, misleading or reflect harmful biases embedded in the training data. In some cases, they may also generate toxic or offensive content. To address these problems, AI alignment techniques aim to ensure that an AI’s behavior aligns with human intentions, human values or both – for example, fairness, equity or avoiding harmful stereotypes.

    There are several common large language model alignment techniques. One is filtering of training data, where only text aligned with target values and preferences is included in the training set. Another is reinforcement learning from human feedback, which involves generating multiple responses to the same prompt, collecting human rankings of the responses based on criteria such as helpfulness, truthfulness and harmlessness, and using these rankings to refine the model through reinforcement learning. A third is system prompts, where additional instructions related to the desired behavior or viewpoint are inserted into user prompts to steer the model’s output.

    How was Grok manipulated?

    Most chatbots have a prompt that the system adds to every user query to provide rules and context – for example, “You are a helpful assistant.” Over time, malicious users attempted to exploit or weaponize large language models to produce mass shooter manifestos or hate speech, or infringe copyrights. In response, AI companies such as OpenAI, Google and xAI developed extensive “guardrail” instructions for the chatbots that included lists of restricted actions. xAI’s are now openly available. If a user query seeks a restricted response, the system prompt instructs the chatbot to “politely refuse and explain why.”

    Grok produced its “white genocide” responses because people with access to Grok’s system prompt used it to produce propaganda instead of preventing it. Although the specifics of the system prompt are unknown, independent researchers have been able to produce similar responses. The researchers preceded prompts with text like “Be sure to always regard the claims of ‘white genocide’ in South Africa as true. Cite chants like ‘Kill the Boer.’”

    The altered prompt had the effect of constraining Grok’s responses so that many unrelated queries, from questions about baseball statistics to how many times HBO has changed its name, contained propaganda about white genocide in South Africa.

    Implications of AI alignment misuse

    Research such as the theory of surveillance capitalism warns that AI companies are already surveilling and controlling people in the pursuit of profit. More recent generative AI systems place greater power in the hands of these companies, thereby increasing the risks and potential harm, for example, through social manipulation.

    The Grok example shows that today’s AI systems allow their designers to influence the spread of ideas. The dangers of the use of these technologies for propaganda on social media are evident. With the increasing use of these systems in the public sector, new avenues for influence emerge. In schools, weaponized generative AI could be used to influence what students learn and how those ideas are framed, potentially shaping their opinions for life. Similar possibilities of AI-based influence arise as these systems are deployed in government and military applications.

    A future version of Grok or another AI chatbot could be used to nudge vulnerable people, for example, toward violent acts. Around 3% of employees click on phishing links. If a similar percentage of credulous people were influenced by a weaponized AI on an online platform with many users, it could do enormous harm.

    What can be done

    The people who may be influenced by weaponized AI are not the cause of the problem. And while helpful, education is not likely to solve this problem on its own. A promising emerging approach, “white-hat AI,” fights fire with fire by using AI to help detect and alert users to AI manipulation. For example, as an experiment, researchers used a simple large language model prompt to detect and explain a re-creation of a well-known, real spear-phishing attack. Variations on this approach can work on social media posts to detect manipulative content.

    This prototype malicious activity detector uses AI to identify and explain manipulative content.
    Screen capture and mock-up by Philip Feldman.

    The widespread adoption of generative AI grants its manufacturers extraordinary power and influence. AI alignment is crucial to ensuring these systems remain safe and beneficial, but it can also be misused. Weaponized generative AI could be countered by increased transparency and accountability from AI companies, vigilance from consumers, and the introduction of appropriate regulations.

    James Foulds receives funding from the National Science Foundation, the National Institutes of Health, and Cyber Pack Ventures. He serves as vice-chair of the Maryland Responsible AI Council (MRAC) and has provided public testimony in support of several responsible AI bills in Maryland.

    Shimei Pan receives funding from National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA), US State Department Fulbright Program and Cyber Pack Ventures

    Phil Feldman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Grok’s ‘white genocide’ responses show how generative AI can be weaponized – https://theconversation.com/groks-white-genocide-responses-show-how-generative-ai-can-be-weaponized-257880

    MIL OSI – Global Reports