Category: Reportage

  • MIL-OSI Global: Kenya’s ride-hailing drivers say their jobs offer dignity despite the challenges

    Source: The Conversation – Africa – By Julie Zollmann, Digital Planet Fellow, The Fletcher School, Tufts University

    Many argue that gig work involves exploitation, as research and media coverage have highlighted. But that doesn’t seem to deter ride-hailing drivers on platforms like Uber and Bolt.

    In Kenya, in fact, many new drivers continued to join platforms even as fares were slashed starting in 2016.

    As a PhD student studying the role of digitalisation in development, I spent several years trying to understand how digital drivers experienced the quality of their work. My research found that in 2019, a typical digital driver in Nairobi worked about 58 hours a week and earned well below the minimum wage on an hourly basis. What made this work attractive? Why did drivers stay?

    In a new paper, I draw on a 2019 survey of 450 drivers in Nairobi, followed in 2021 by 38 qualitative interviews in Nairobi and in Mombasa, Kenya’s second-largest ride-hailing market, which explored drivers’ experiences in detail.

    In addition to measuring working hours and incomes, my survey team asked drivers if they considered their work “dignified”. Nearly eight in ten (78%) of our survey participants said yes. While that specific share of drivers may have changed since then, the underlying reasons drivers found the work dignified remain unchanged.

    In the global north, scholars have rung alarm bells about what “gig work” means for the erosion of standard jobs with legal protections around working hours, minimum wage and other benefits. But the drivers my team and I spoke with in Kenya felt that digital driving was a step towards formalisation rather than a drift away from an ideal formal job. Driving had dignity in contrast to the indignities of low-wage work and the vast informal sector, which was their realistic alternative for making a living.

    My findings highlight that workers’ experiences on global platforms like Uber are not universal and that digitisation may deliver some improvements in work quality relative to informal work in African contexts.

    How did digital work deliver dignity?

    Drivers explained that app companies imposed rules and structure that provided “discipline” in a transport sector more broadly associated with rudeness, unruliness, and disrespect towards passengers. Requirements for things like driving licences, proof of insurance, and ratings seemed to make drivers feel more professional and make passengers see them as such.

    Drivers felt proud to be part of a driver community that behaved professionally under these conditions. A 38-year-old male driver in Nairobi who had been working on the platforms for three years told us:

    We are very respected … Everyone trusts you to carry them. It’s not like the old days, when the taxi driver might rob you and dump you or even kill you. We are getting attraction from the society, even in the slums. They know you are an app driver, and they trust you because app drivers are good people. They know you can deliver, that you will be honest.






    On platforms, drivers were matched digitally with riders. Respondents said this brought dignity by ensuring drivers would receive a fairly steady stream of clients. This meant that a driver could rest assured he would earn money every day.

    The alternative was to “hustle” in the informal economy to shake loose opportunities and constantly solicit those who might use their labour and beg for payment after a job was done. Constant solicitation and bargaining were exhausting and degrading.

    One driver explained:

    Most of us are poor. I have never walked out every morning sure that I would do a job. But now I know that if my car has been serviced and my phone is charged and working, I am going to work and not to some charity job. I used to wait at the base all day without getting a customer. Now … at least two, three days are going to be good for you.

    Digital matchmaking also meant that drivers were not limited to serving the few clients they already knew or who happened to pass them at a fixed base. They found themselves serving new parts of the city and carrying important people, including business people, celebrities and local politicians. Serving these high-end customers made them feel proud and important. Wealthy neighbourhoods, luxury hotels and high-end restaurants felt more open to them in otherwise exclusionary and segregated cities.

    Some drivers felt that digitalisation had removed barriers to entry for taxi driving, like paying to join a parking base and building a client list.

    The app did away with parking bases, and about half of drivers joined the system through a “partner”, paying a fixed weekly fee to rent their car instead of buying it themselves.

    In efforts to make rides cheaper, in 2018 app companies in Kenya allowed smaller, less expensive cars on their platforms, lowering the costs of ownership. Our survey found that both formal and informal financiers were willing to offer loans to digital drivers, knowing they would have regular revenue to service their debt.

    Buying a car was seen as a huge, dignifying accomplishment. One driver in the survey told us:

    Growing up, I thought vehicles were owned only by the rich, but now digital driving has provided a means for me to own one and earn the respect of society.

    David Muteru, then chairman of the Digital Taxi Association of Kenya, echoed this sentiment: “Owning a vehicle, that’s an asset”.

    Dignity not always guaranteed

    The dignifying value of order was only possible when app companies enforced their own rules and did so fairly. Drivers preferred the stringent rule enforcement of one major app over the lax enforcement of another, which made for more stressful and undignified interactions with riders.

    When the rules were enforced, drivers could be sure that the app company would help if a rider refused to pay or if there was a dispute with the client. Drivers felt the stricter environment kept bad actors out.

    Over time, though, app companies slashed prices, competing for market share. Drivers felt less respected by riders who saw them as desperate for money. Low fares pressed drivers to negotiate with riders for offline trips and higher rates, reintroducing the indignity of haggling.

    Lessons for the future

    Digitally mediated work raises many questions about labour standards.

    This research shows how important it is to keep local context in mind. Digital driving is not the same experience for drivers in every context. Where people suffer indignities and deprivations in the informal sector, digitalisation may offer gains. But this potential depends on rule enforcement and pay. Material and subjective dignity are intertwined.

    Julie Zollmann received funding from Mastercard Foundation.

    ref. Kenya’s ride-hailing drivers say their jobs offer dignity despite the challenges – https://theconversation.com/kenyas-ride-hailing-drivers-say-their-jobs-offer-dignity-despite-the-challenges-257845

    MIL OSI – Global Reports

  • MIL-OSI Global: How a postwar German literary classic helped eclipse painter Emil Nolde’s relationship to Nazism

    Source: The Conversation – France – By Ombline Damy, Doctorante en Littérature Générale et Comparée, Sciences Po

    Emil Nolde, _Red Clouds_, watercolour on handmade paper, 34.5 x 44.7 cm. Emil Nolde/Museo Nacional Thyssen-Bornemisza, Madrid, CC BY-NC-ND

    Paintings by German artist Emil Nolde (1867-1956) were recently on display at the Musée Picasso in Paris as part of an exhibition on what the Nazis classified as “degenerate art”. At first glance, his works fit perfectly, but recent research shows that Nolde’s relationship to Nazism is much more nuanced than the exhibition revealed.

    The German Lesson: a postwar literary classic

    While Nolde was one of the many victims of the Third Reich’s repressive responses to “degenerate art”, he was also one of Nazism’s great admirers. The immense popularity of The German Lesson (1968) by author Siegfried Lenz, however, greatly contributed to creating the legend of Nolde as a martyr of the Nazi regime.


    The cover of the French edition, which was on sale in the Musée Picasso bookstore, subtly echoes one of Nolde’s works, Hülltoft Farm, which hung in the exhibition.

    Set against the backdrop of Nazi policies on “degenerate art”, the novel is about a conflict between a father and son. It addresses in literary form the central postwar issue of Vergangenheitsbewältigung, a term referring to the individual and collective work of German society on coming to terms with its Nazi past.

    The German Lesson was met with huge success upon publication. Since then, it has become a classic of postwar German literature. Over 2 million copies have been sold across the world, and the novel has been translated into more than 20 languages. It is still studied in Germany as part of the national school curriculum. Adding to its popularity, the book was adapted for the screen in 1971 and in 2019. More than 50 years after its publication, The German Lesson continues to shape the way we think about Nazi Germany.

    Max Ludwig Nansen, a fictional painter turned martyr

    Set in Germany in the 1950s, the novel is told through the eyes of Siggi, a young man incarcerated in a prison for delinquent youths. Asked to pen an essay on the “joys of duty”, he dives into his memories of a childhood in Nazi Germany as the son of a police officer.

    He remembers that his father, Jens Ole Jepsen, was given an order to prevent his own childhood friend, Max Ludwig Nansen, from painting. As a sign of protest against the painting ban, Nansen created a secret collection of paintings titled “the invisible pictures”. Because he was young enough to appear innocent, Siggi was used by his father to spy on the painter.

    Siggi found himself torn between the two men, who related to duty in radically opposite ways. While Jepsen thought it his duty to follow the orders given to him, Nansen saw art as his only duty. Throughout the novel, Siggi becomes increasingly close to the painter, whom he sees as a hero, all the while distancing himself from his father, who in turn is perceived as a fanatic.

    The novel’s point of view, that of a child, demands that readers complete Siggi’s omissions and partial understanding of the world around him with their adult knowledge. This deliberately allusive narrative style allows the author to skirt the topic of Nazism – or at least to hint at it only covertly, thus making the novel acceptable to a wide German audience at the time of its publication in 1968.

    Nevertheless, the book leaves little room for doubt on the themes it tackles. While Nazism is never explicitly named, the reader will inevitably recognize the Gestapo (the political police of the regime) when Siggi speaks of the “leather coats” who arrest Nansen. Readers will also identify the ban on painting issued to Nansen as a part of Nazi policies on “degenerate art”. And, what’s more, they will undoubtedly perceive the real person hiding behind the fictional character of Max Ludwig Nansen: Emil Nolde, born Hans Emil Hansen.



    Emil Nolde, a real painter turned legend

    Much like his fictional counterpart Max Ludwig Nansen, the painter Emil Nolde fell victim to Nazi policies aimed at artists identified as “degenerate”. More than 1,000 of his artworks were confiscated, some of which were integrated into the 1937 travelling exhibition on “degenerate art” orchestrated by the regime. Nolde was banned from the German art academy, and he was forbidden to sell and exhibit his work.

    A photograph of Nazi propagandist Joseph Goebbels’ visit to the exhibition titled Entartete Kunst (Degenerate Art) in Munich, 1937. At left, from top, two paintings by Emil Nolde: Christ and the Sinner (1926) and the Wise and the Foolish Virgins (1910), a painting that has disappeared.
    Wikimedia

    After the collapse of the Nazi regime, the tide turned for this “degenerate” artist. Postwar German society glorified him as a victim and opponent of Nazi politics, an image which Nolde carefully fostered. In his memoirs, he claimed to have been forbidden to paint by the regime, and to have created a series of “unpainted pictures” in a clandestine act of resistance.

    Countless exhibits on Nolde, in Germany and around the world, served to perpetuate the myth of a talented painter, fallen victim to the Nazi regime, who decided to fight back. His works even made it into the hallowed halls of the German chancellery. Helmut Schmidt, chancellor of the Federal Republic of Germany from 1974 to 1982, and Germany’s former chancellor Angela Merkel decorated their offices with his paintings.

    The popularity of The German Lesson, inspired by Nolde’s life, further solidified the myth – until the real Nolde and the fictional Nansen became fully inseparable in Germany’s collective imagination.

    Twilight of an idol

    Yet, the historical figure and the fictional character could not be more different. Research conducted for exhibits on Nolde in Frankfurt in 2014 and in Berlin in 2019 revealed the artist’s true relationship to Nazism to the wider public.

    Nolde was indeed forbidden from selling and exhibiting his works by the Nazi regime. But he was not forbidden from painting. The series of “unpainted pictures”, which he claimed to have created in secret, are in fact a collection of works put together after the war.

    What’s more, Nolde joined the Nazi Party as early as 1934. To make matters worse, he also hoped to become an official artist of the regime, and he was profoundly antisemitic. He was convinced that his work was the expression of a “German soul” – with all the racist undertones that such an affirmation suggests. He relentlessly tried to convince Goebbels and Hitler that his paintings, unlike those of “the Jews”, were not “degenerate”.

    Why, one might ask, did more than 70 years go by before the truth about Nolde came out?

    Yes, the myth built by Nolde himself and solidified by The German Lesson served to eclipse historical truth. Yet this seems to be only part of the story. In Nolde’s case, like in many others that involve facing a fraught national past, it looks like fiction was a great deal more attractive than truth.

    In Lenz’s book, the painter Nansen claims that “you will only start to see properly […] when you start creating what you need to see”. By seeing in Nolde the fictional character of Nansen, Germans created a myth they needed to overcome a painful past. A hero, who resisted Nazism. Beyond the myth, reality appears to be more complex.

    Ombline Damy received funding from la Fondation Nationale des Sciences Politiques (National Foundation of Political Sciences, or FNSP) for her thesis.

    ref. How a postwar German literary classic helped eclipse painter Emil Nolde’s relationship to Nazism – https://theconversation.com/how-a-postwar-german-literary-classic-helped-eclipse-painter-emil-noldes-relationship-to-nazism-258310

    MIL OSI – Global Reports

  • MIL-OSI Global: A new observatory is assembling the most complete time-lapse record of the night sky ever

    Source: The Conversation – UK – By Noelia Noël, Senior Lecturer, School of Mathematics and Physics, University of Surrey

    On 23 June 2025, the world will get a look at the first images from one of the most powerful telescopes ever built: the Vera C. Rubin Observatory.

    Perched high in the Chilean Andes, the observatory will take hundreds of images of the southern hemisphere sky, every night for 10 years. In doing so, it will create the most complete time-lapse record of our Universe ever assembled. This scientific effort is known as the Legacy Survey of Space and Time (LSST).

    Rather than focusing on small patches of sky, the Rubin Observatory will scan the entire visible southern sky every few nights. Scientists will use this rolling deep-sky snapshot to track supernovae (exploding stars), asteroids, black holes, and galaxies as they evolve and change in real time. This is astronomy not as a static snapshot, but as a cosmic story unfolding night by night.

    At the heart of the observatory lies a remarkable piece of engineering: a digital camera the size of a small car and weighing over three tonnes. With a staggering 3,200 megapixels, each image it captures has enough detail to spot a golf ball from 25km away.




    Each image is so detailed that it would take hundreds of ultra-high-definition TV screens to display it in full. To capture the universe in colour, the camera uses enormous filters — each about the size of a dustbin lid — that allow through different types of light, from ultraviolet to near-infrared.

    The observatory was first proposed in 2001, and construction at the Cerro Pachón ridge site in northern Chile began in April 2015. The first observations with a low-resolution test camera were carried out in October 2024, paving the way for the first images from the main camera, to be unveiled in June.

    Big questions

    The observatory is designed to tackle some of astronomy’s biggest questions. For instance, by measuring how galaxies cluster and move, the Rubin Observatory will help scientists investigate the nature of dark energy, the mysterious force driving the accelerating expansion of the Universe.

    As a primary goal, it will map the large-scale structure of the Universe and investigate dark matter, the invisible form of matter that makes up 27% of the cosmos. Dark matter acts as the “scaffolding” of the universe, a web-like structure that provides a framework for the formation of galaxies.

    The observatory is named after the US astronomer Dr Vera Rubin, whose groundbreaking work uncovered the first strong evidence for dark matter – the very phenomenon this telescope will explore in unprecedented detail.

    As a woman in a male-dominated field, Rubin overcame numerous obstacles and remained a tireless advocate for equality in science. She died in 2016 at the age of 88, and her name on this observatory is a tribute not only to her science, but to her perseverance and her legacy of inclusion.

    Closer to home, Rubin will help find and track millions of asteroids and other objects that come near Earth – helping warn astronomers of any potential collisions. The observatory will also monitor stars that change in brightness, which can reveal planets orbiting them.

    And it will capture rare and fleeting cosmic events, such as the collision of very dense objects called neutron stars, which release sudden bursts of light and ripples in space known as gravitational waves.

    What makes this observatory particularly exciting is not just what we expect it to find, but what we can’t yet imagine. Many astronomical breakthroughs have come from chance: strange flashes in the night sky and puzzling movements of objects. Rubin’s massive, continuous data stream could reveal entirely new classes of objects or unknown physical processes.

    The observatory is equipped with the world’s largest digital camera.
    RubinObs/NOIRLab/SLAC/DOE/NSF/AURA

    But capturing this “movie of the universe” depends on something we often take for granted: dark skies. One of the growing challenges facing astronomers is light pollution from satellite mega-constellations – groups of many satellites operating together.

    These satellites reflect sunlight and can leave bright streaks across telescope images, potentially interfering with the very discoveries Rubin is designed to make. While software can detect and remove some of these trails, doing so adds complexity, cost and can degrade the data.

    Fortunately, solutions are already being explored. Rubin Observatory staff are developing simulation tools to predict and reduce satellite interference. They are also working with satellite operators to dim or reposition spacecraft. These efforts are essential – not just for Rubin, but for the future of space science more broadly.

    Rubin is a collaboration between the US National Science Foundation and the Department of Energy, with global partners contributing to data processing and scientific analysis. Importantly, much of the data will be publicly available, offering researchers, students and citizen scientists around the world the chance to make discoveries of their own.

    The “first-look” event, which will unveil the first images from the observatory, will be livestreamed in English and Spanish, and celebrations are planned at venues around the world.

    For astronomers, this is a once-in-a-generation moment – a project that will transform our view of the universe, spark public imagination and generate scientific insights for decades to come.

    Noelia Noël does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. A new observatory is assembling the most complete time-lapse record of the night sky ever – https://theconversation.com/a-new-observatory-is-assembling-the-most-complete-time-lapse-record-of-the-night-sky-ever-258231

    MIL OSI – Global Reports

  • MIL-OSI Global: Reform leads in voting intentions – but where does their vote come from?

    Source: The Conversation – UK – By Paul Whiteley, Professor, Department of Government, University of Essex

    Recent voting intention polling from YouGov (May 27) shows Reform UK in first place, 8% ahead of Labour and 10% ahead of the Conservatives, who are now in third place.

    The rising popularity of Nigel Farage’s party is an unprecedented threat to the major parties. This was driven home in recent local elections in England, where Reform won 677 seats and took control of 10 local authorities. But where does this support come from?

    The survey compares respondents’ current voting intentions with their votes in the 2024 general election.

    If we look at Conservative voters, 27% of them have switched to Reform in their voting intentions while 66% remain loyal. Alarmingly for Labour, only 60% of their 2024 voters have remained loyal and 15% intend to vote for Reform, while 12% switched to the Liberal Democrats and 9% to the Greens.

    Labour has been squeezed from both sides of the political spectrum, but the loss to the left is significantly larger than the loss to the right.

    In contrast, 73% of Liberal Democrat voters have remained loyal to the party with only 7% switching to Reform and 8% going to Labour. Not surprisingly, 91% of Reform voters have remained loyal, with 5% going to the Conservatives and 3% going to the Greens. None of the Reform voters have switched to Labour or the Liberal Democrats.

    Reform’s rise has led the Labour government to take more hardline stances on key issues, particularly immigration and asylum – which around half of YouGov respondents say is the most important issue facing the country.

    And with small boat crossings on the rise again, it remains to be seen whether the government’s recent proposals to reduce net migration will be enough to hold onto wavering supporters.






    Social backgrounds and party support

    If we probe a bit further into the social characteristics of voters, only 8% of 18 to 24-year-olds support Reform, compared with 35% of 50 to 64-year-olds and 33% of the over-65s. Some 34% of the younger group support Labour, 12% the Conservatives, 15% the Liberal Democrats and 25% the Greens.

    As far as the 50 to 64-year-olds are concerned, 19% support Labour, 16% the Conservatives, 16% the Liberal Democrats and 9% the Greens. There is currently a significant age divide when it comes to party support.

    With respect to class (or “social grade” as it is described in contemporary surveys), 23% of the middle-class support Reform compared with 38% of the working class. The latter were the bedrock of Labour support a couple of generations ago, but now only 19% support Labour, with 17% supporting the Conservatives and 12% the Liberal Democrats.

    Current support for the parties among middle-class voters apart from Reform is 22% for Labour, 21% for the Conservatives and 17% for the Liberal Democrats. Again, the middle class used to be the key supporters of the Conservative party, but at the moment the party is running third behind its rivals in this group.

    Finally, the relationship between gender and support for the parties is also interesting. Some 35% of male respondents support Reform compared with only 24% of female respondents.

    In contrast, 21% of both men and women support Labour. The figures for the Conservatives are 16% of men and 22% of women, while the Liberal Democrats draw 14% support from men and 16% from women.

    There is also notable support for Reform among those who voted Leave in the 2016 Brexit referendum in the YouGov survey. Altogether 53% of Leave voters in the EU Referendum opted for Reform and 24% supported the Conservatives, with 8% supporting Labour, 8% the Liberal Democrats and 4% the Greens. In the case of Remain voters, 10% chose Reform, 17% went for the Conservatives, 30% for Labour, 23% for the Liberal Democrats and 14% for the Greens.

    Not surprisingly, Reform takes the largest share of Leave voters – but only just over half of them, indicating that a lot has changed since the 2016 referendum and Farage’s role in the Leave campaign. The fact that 10% of Remain voters switched to Reform and 20% of Leave voters have moved to Labour, the Liberal Democrats or the Greens shows that it is not a simple case of support for Brexit translating into support for Reform.

    Voting and volatility

    Before Nigel Farage starts picking out curtains for Number 10, it is worth looking at another volatile moment in British political history. The chart below shows the effects of the split in the Labour party in 1981, when the Social Democratic Party was formed by the “gang of four” breakaway Labour politicians, Shirley Williams, Roy Jenkins, David Owen and Bill Rodgers.

    The newly formed party agreed an electoral pact with the Liberals, which continued until the 1983 election. A Gallup poll published in December 1981 shows a massive lead for the SDP-Liberal Alliance.

    And yet, Margaret Thatcher’s Conservatives won that election. Labour came second by a small margin ahead of the SDP-Liberal Alliance and remained the main opposition party.

    The point of this example is that a massive lead in the polls for the SDP-Liberal Alliance shortly after it was established did not provide a breakthrough in the general election two years later. Reform may be in the lead now, but this does not mean that it will win the general election of 2028-29.

    That said, there is a real risk for Labour continuing to lose support to both the left and the right – something which it needs to rapidly repair. Rachel Reeves’s “iron chancellor” strategy, in which the government announces fiscal rules which it claims to stand by at all costs, is no longer credible.

    As the Institute for Government points out, every single fiscal rule adopted since 2008 has subsequently been abandoned. A strategy of continuing austerity by making significant cuts in the welfare budget to calm financial markets is likely to fail, both in the economy and with voters.

    Paul Whiteley has received funding from the British Academy and the ESRC.

    ref. Reform leads in voting intentions – but where does their vote come from? – https://theconversation.com/reform-leads-in-voting-intentions-but-where-does-their-vote-come-from-257754

    MIL OSI – Global Reports

  • MIL-OSI Global: Rosemary has been linked to better memory, lower anxiety and even protection from Alzheimer’s

    Source: The Conversation – UK – By Dipa Kamdar, Senior Lecturer in Pharmacy Practice, Kingston University


    Rosemary (Rosmarinus officinalis), the aromatic herb native to the Mediterranean, has long been treasured in kitchens around the world. But beyond its culinary charm, rosemary is also gaining recognition for its impressive health benefits, especially when it comes to brain health, inflammation and immune function.

    Research suggests rosemary may even hold promise in the fight against Alzheimer’s disease, the leading cause of dementia worldwide.

    Historically, rosemary has been linked to memory and mental clarity. In ancient Greece and Rome, students and scholars used rosemary in the hope of sharpening concentration and recall.

    Modern science is finding there may have been something in this: in one study, people who inhaled rosemary’s scent performed better on memory tasks compared to those in an unscented environment.

    So how does rosemary work on the brain? There are several mechanisms at play. For starters, rosemary stimulates blood circulation, including to the brain, helping deliver more oxygen and nutrients, which may improve mental clarity. It also has calming properties; some studies suggest its aroma can reduce anxiety and improve sleep. Lower stress can mean better focus and memory retention.




    Rosemary contains compounds that interact with the brain’s neurotransmitters. One such compound, 1,8-cineole, helps prevent the breakdown of acetylcholine, a brain chemical essential for learning and memory. By preserving acetylcholine, rosemary may help support cognitive performance, especially as we age.

    Another bonus? Rosemary is packed with antioxidants, which help protect brain cells from damage caused by oxidative stress – a major factor in cognitive decline.

    Rosemary is rich in phytochemicals, plant compounds with health-enhancing effects. One of the most powerful is carnosic acid, an antioxidant and anti-inflammatory agent that helps shield brain cells from harm, particularly from the kinds of damage linked to Alzheimer’s disease.






    In 2025, researchers developed a stable version of carnosic acid called diAcCA. In promising pre-clinical studies, this compound improved memory, boosted the number of synapses (the connections between brain cells), and reduced harmful Alzheimer’s related proteins like amyloid-beta and tau.

    What’s especially exciting is that diAcCA only activates in inflamed brain regions, which could minimise side effects. So far, studies in mice show no signs of toxicity and significant cognitive improvements – raising hopes that human trials could be next.

    Researchers also believe diAcCA could help treat other inflammatory conditions, such as type 2 diabetes, cardiovascular disease and Parkinson’s disease.

    Beyond brain health

    Rosemary’s benefits could extend well beyond the brain. It’s been used traditionally to ease digestion, relieve bloating and reduce inflammation.

    Compounds like rosmarinic acid and ursolic acid are known for their anti-inflammatory effects throughout the body. Rosemary may even benefit the skin – a review suggests it can help soothe acne and eczema, while carnosic acid may offer anti-ageing benefits by protecting skin from sun damage.

    Rosemary oil also has antimicrobial properties, showing promise in food preservation and potential pharmaceutical applications by inhibiting the growth of bacteria and fungi.

    For most people, rosemary is safe when used in food, teas or aromatherapy. But concentrated doses or extracts can pose risks. Consuming large amounts may cause vomiting or, in rare cases, seizures – particularly in people with epilepsy.

    There’s also a theoretical risk of rosemary stimulating uterine contractions, so pregnant people should avoid high doses. Because rosemary can interact with some medications – such as blood thinners – it’s best to check with a healthcare provider before taking large amounts in supplement form.

    Rosemary is more than just a kitchen staple. It’s a natural remedy with ancient roots and modern scientific backing. As research continues, particularly into breakthrough compounds like diAcCA, rosemary could play an exciting role in future treatments for Alzheimer’s and other chronic conditions.

    In the meantime, adding a little rosemary to your life – whether in a meal, a cup of tea, or a breath of its fragrant oil – could be a small step with big health benefits.

    Dipa Kamdar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Rosemary has been linked to better memory, lower anxiety and even protection from Alzheimer’s – https://theconversation.com/rosemary-has-been-linked-to-better-memory-lower-anxiety-and-even-protection-from-alzheimers-256920

    MIL OSI – Global Reports

  • MIL-OSI Global: Why Hulu’s The Handmaid’s Tale failed as feminist television

    Source: The Conversation – UK – By Roberta Garrett, Senior Lecturer in Literature and Cultural Studies, University of East London

    Warning: this article contains spoilers for all seasons of The Handmaid’s Tale.

    Hulu’s television adaptation of Margaret Atwood’s landmark 1985 feminist novel, The Handmaid’s Tale, has now come to an end.

    The series focused on female oppression within the imagined future religio-fascist state of Gilead. So, in light of the Donald Trump-led Republican party’s infringements on the reproductive rights of women, it seems appropriate that the first series launched in 2017, a year after Trump was elected, and the final series aired shortly after his current tenure began.

    Following Trump’s first election, the iconography of the handmaids’ costumes – hooded scarlet cloaks and white bonnets – was adopted as a symbol of resistance at women’s rights protests around the world.

    The adaptation has been a popular and critical success. However, as I argue in The Routledge Handbook of Motherhood on Screen, despite its strong association with women’s protest movements, Hulu’s adaptation misrepresents the themes of Atwood’s biting feminist dystopia. In fact, it reinforces certain attitudes that Atwood, and other feminist writers and thinkers, have been criticising for decades.

    In particular, the series idealises white biological mothers, while demonising or marginalising other female figures. Here are three examples of how it does this.


    Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


    1. Childless women are bitter spinsters or wicked stepmothers

    Atwood’s novel focuses chiefly on the horror of the rape and forced impregnation of the handmaids. But Hulu’s adaptation gives more weight to the theme of maternal loss and the handmaids’ desire to keep their biological offspring.

    The characters of the television show evolve over six series. This means they require extended character arcs, backstories and more emphasis on psychology than the novel. Hulu’s adaptation evolved into a dark maternal melodrama, where the moral worth of female characters is tied to their ability to bear children.

    Like a traditional fairy tale, the adaptation depicts infertile women, older spinsters and adoptive mothers in an overwhelmingly negative light. They are frequently shown to be unfit mothers, or cruel women.

    Atwood’s novel uses relatively flat characterisation in order to accentuate Gilead’s authoritarian structure, rather than individual psychology or motivations. In contrast, Hulu’s The Handmaid’s Tale develops the character of Aunt Lydia (one of the older, childless women who train, bully and discipline the handmaids) and Serena Joy (the commander’s wife in the household that June is sent to) as central characters.

    The trailer for season six of The Handmaid’s Tale.

    Aunt Lydia’s (Ann Dowd) backstory in season three reveals that in her pre-Gilead life, she was a lonely, ageing school teacher who suffered sexual rejection. She responded by spitefully removing a child from the care of his loving but overworked young, single mother.

    The moral worth attached to fertile and infertile women in the series is even more evident in the treatment of Serena (Yvonne Strahovski). In the novel Serena is an outspoken advocate for traditional female roles. The series takes this further. It shows baby‑crazed Serena actively creating the laws of Gilead – and the handmaid system – to obtain a child. She was apparently made infertile after being shot by a protester during a speaking engagement.

    Serena is the series’ chief antagonist throughout the first four seasons. This changes in season five. Now pregnant, Serena finds herself at the mercy of another angry infertile woman who wants to steal her baby. Once pregnant, Serena mellows and becomes a more sympathetic character. This evolution can be seen to reinforce the idea that infertile women are unfulfilled, unhappy women who can only be redeemed through pregnancy and childbirth.

    In its overall view, the series presents the spinsterish aunts as sadists who delight in punishing the fertile handmaids, and the infertile commanders’ wives as cold and shallow. Unlike the sisterly handmaids, the latter secretly loathe one another. They appear to only value children as status symbols.

    2. It endorses intensive, ‘natural’ mothering

    As many feminist critics have pointed out, the model of child-rearing currently favoured by society is “intensive”, and endorses so-called “natural” practices and behaviour (such as unmedicated birth and extended breastfeeding). These place considerable pressure on new mothers.

    This mode of mothering is displayed by handmaid heroines June (Elisabeth Moss) and Janine (Madeline Brewer). They show no difficulty in bonding with babies produced through rape, breastfeed with ease, have an innate ability to comfort their offspring and – in June’s case – even successfully give birth entirely alone.

    In contrast, the adoptive mothers are cack-handed with their babies and quickly resent their maternal duties. This suggests that good mothering is the preserve of biological mothers, to whom it comes naturally.

    A recap of seasons one to five of The Handmaid’s Tale.

    3. It consigns black women to side roles

    Series one to three focus largely on white handmaids. Although June’s husband (O-T Fagbenle) and best friend Moira (Samira Wiley) are black, they escape to Canada in the first season, so feature only minimally in the drama that follows. Black characters occupy minor roles as servants or nannies (known as “Marthas”), who are readily sacrificed by June in her child-saving crusade.

    June casually causes the execution of the Martha who cares for her first daughter by pestering her to allow her to make contact. The Martha pleads with her to stop, but June responds with her usual maternal piety: “You know I can’t stop.” As the audience barely knows the Martha, their sympathies are directed towards June. Her desire to see her daughter is presented as a legitimate reason to endanger the life of a black non-mother.

    Only Rita (Amanda Brugel), the Martha assigned to June’s household, has a consistent, if marginal, onscreen presence. Rita is a key part of the resistance movement, but her role as resistance fighter diminishes when June assumes leadership. As communications professor Meredith Neville-Shepard argues, Rita spends much of the later episodes thanking “white saviour” June for facilitating her escape to Canada.

    For these reasons, although The Handmaid’s Tale succeeds as a compelling female-centered drama, unlike Atwood’s novel, it foregrounds the rights of biological mothers over the issue of women’s reproductive choice. While Atwood criticised forced impregnation, Hulu’s The Handmaid’s Tale became increasingly invested in an idealised view of white “natural mothers” that is oppressive to many women.

    Roberta Garrett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why Hulu’s The Handmaid’s Tale failed as feminist television – https://theconversation.com/why-hulus-the-handmaids-tale-failed-as-feminist-television-258122

    MIL OSI – Global Reports

  • MIL-OSI Global: UK funds controversial climate-cooling research

    Source: The Conversation – UK – By Will de Freitas, Environment + Energy Editor, UK edition

    Clouds over the ocean could be ‘brightened’ to reflect sunlight away from the planet. Kingcraft / shutterstock

    The UK government’s Advanced Research and Invention Agency – known as Aria – recently announced it is funding 21 research teams to explore what it terms climate cooling. The money involved (£56 million) isn’t much in the grand scheme of things. But experts on both sides of the debate (and this issue divides climate academics more than almost any other) agree it’s likely to be a precursor to more significant investment in future.


    This roundup of The Conversation’s climate coverage comes from our award-winning weekly climate action newsletter. Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. Join the 45,000+ readers who’ve subscribed.


    To refresh, “geoengineering” refers to any large-scale moves to deliberately alter the climate to combat global warming. This could involve removing carbon dioxide from the atmosphere, perhaps with huge vacuum-like machines (that still don’t really exist) or, more prosaically, by growing more trees. Some experts would consider planting a forest or restoring a wetland as a form of geoengineering.

    But today we’re focusing on the other main category of geoengineering, known as “solar radiation management”, or SRM. The idea here is to ensure that more sunlight is reflected back into space before it can heat up the planet.

    What makes the new UK investment so important, says Robert Chris, is that it’s the first time a state has put significant public money into researching solar radiation management. Chris, who researches geoengineering at The Open University, highlighted five projects (of the 21 total) which are likely to involve small-scale experiments:

    “Three … concern brightening clouds over the ocean, one explores a method of refreezing the Arctic and the fifth looks at a specific detail of the potential cooling effect of placing certain compounds in the stratosphere.”




    Read more:
    Five geoengineering trials the UK is funding to combat global warming


    Marine brightening

    Let’s start with the brighter clouds.

    “We’re using water cannons to spray seawater into the sky. This causes brighter, whiter clouds to form. These low marine clouds reflect sunlight away from the ocean’s surface.”

    That’s Daniel Harrison of Southern Cross University in Australia, writing in late 2023 about his research. He’s now been awarded UK government money to continue his work, looking specifically at whether brightening clouds directly over the Great Barrier Reef for a few months could reduce coral bleaching during a marine heat wave.

    “Modelling studies are encouraging and suggest it could delay the expected decline in coral cover. This could buy valuable time for the reef while the world transitions away from fossil fuels.”

    The UK funding will enable Harrison to extend his work and assess if it can be safe and effective, albeit only as a temporary measure specifically targeted at the Great Barrier Reef.




    Read more:
    Could ‘marine cloud brightening’ reduce coral bleaching on the Great Barrier Reef?


    The other two cloud brightening projects, run from the universities of Manchester and Nottingham, are looking at developing better ways to seed clouds in the first place.

    Arctic refreezing

    The Arctic refreezing project is run by Shaun Fitzgerald of the University of Cambridge, and focuses on sea ice. The idea is to pump seawater from below the ice onto its surface in the winter, where it freezes. More ice would then accumulate ahead of the summer melting season, so more of the sun’s energy is reflected back into space (ice is more reflective than open ocean).

    Losing Arctic sea ice creates a feedback loop – the warmer the water, the less sea ice is formed; the less sea ice there is, the warmer it gets.
    Ondrej Prosicky / shutterstock

    Fitzgerald recently returned from fieldwork in northern Canada and wrote about his work for The Conversation. “Crucially,” he said, “the research is focused on developing our understanding of these potential ideas. The research could show that they are impractical, unfeasible or would potentially make things worse.” For instance, he points out that thicker ice “may not be much use” if it is so much saltier that it melts more quickly. He describes initial results – before the government funding – as “inconclusive but encouraging”.




    Read more:
    Arctic ice is vanishing – our bold experiment is trying to protect it


    Blocking out the sun

    The final project Chris highlights looks at one aspect of proposals to inject tiny particles high in the atmosphere where they would help reflect sunlight back into space. This is probably the most likely to happen, eventually, as it’s relatively cheap and well-studied.

    One risk concerns the health and environmental impact of these particles as they fall back to the surface. Hugh Hunt, also from Cambridge, has been awarded funds to examine alternative compounds that may be less toxic than those usually proposed.

    Chris writes: “The plan is to send tiny samples into the stratosphere in specially designed gondolas attached to balloons. The gondolas will later be recovered, so that the effect of the stratosphere on the samples can be examined. Nothing will be released into the atmosphere.”

    Researchers in this field are generally quick to point out the risks involved. Chris cautions: “Deliberately altering the atmosphere, a shared global resource, is fraught with ethical, geopolitical and practical problems.” That’s the case whether geoengineering is carried out by states or private interests.

    Is there public support, for instance? Democratic oversight? What if something goes wrong – who is to blame and who is responsible for fixing the mess? Should all countries agree on an action plan, since geoengineering will affect everyone?

    These are concerns shared by Cambridge’s Albert Van Wijngaarden, UCL’s Chloe Colomer and Adrian Hindes of the Australian National University. Writing last year on the risk of critical voices being excluded from geoengineering research, they worry that if “geoengineering is essentially allowed to self-regulate, with no effective global governance, future research could easily take us down a dangerous path”.

    They outline an “unproductive” polarisation between advocates and critics, and argue that “upcoming research projects must factor in the concerns of opponents, and not represent only supporters of geoengineering or those who have not been explicitly against it”.

    Perhaps the UK government was indeed listening: in the recent Aria funding announcement, Van Wijngaarden and Colomer were awarded a grant to design “engagement programmes” for people in the Arctic who are “among the most impacted” by climate change and geoengineering, but who are often ignored “because of ongoing and historical power imbalances”.




    Read more:
    Plans to cool the Earth by blocking sunlight are gaining momentum but critical voices risk being excluded


    People such as Fitzgerald (the Arctic ice freezer) do tend to recognise these issues. Fitzgerald, together with his colleague Elil Hoole, says that plans to dim the sun must be led by those most affected by climate change.

    Robert Chris calls solar geoengineering a “crazy idea”. But he says the alternative – not doing it – may be worse. “Perhaps solar geoengineering is the price we must pay for our wholly inadequate climate change response to date.”

    ref. UK funds controversial climate-cooling research – https://theconversation.com/uk-funds-controversial-climate-cooling-research-258210

    MIL OSI – Global Reports

  • MIL-OSI Global: How to design landscapes that enhance natural sounds and minimise noise pollution

    Source: The Conversation – UK – By Carlos Abrahams, Senior Lecturer in Environmental Assessment – Director of Ecoacoustics, Nottingham Trent University

    Superblocks in Barcelona, Spain, keep traffic noise to the periphery of residential areas. David Alf/Shutterstock

    Sounds are integral parts of any landscape. Think of the calls of grouse and curlew on the Pennine Moors. Wind sieving through reed beds in the Norfolk Broads. Church bells chiming out over the hustle and bustle of central London. Every locale across the Earth, beneath our oceans, lakes and rivers, and even underground, has its own distinctive “soundscape”.

    Soundscapes are created by a combination of biological sounds – the voices of birds, bats and insects – alongside environmental sounds from rainfall, waves crashing on the shore and low-frequency seismic rumbles. Layered over these natural sound sources are human-made noises from planes, trains, traffic and other elements of 21st-century life.

    This human-made noise can be so loud and so pervasive in some areas that it blocks the natural sounds that would otherwise be audible. This affects the behaviour and life cycles of wildlife, because many species rely on sound for breeding activity, social communication and predator detection. Masking these important signals can reduce breeding success and drive populations away from the disturbed habitats.

    Noise pollution also harms our own health and wellbeing. Chronic noise exposure is linked to elevated stress levels, impaired cognitive function and an increased risk of cardiovascular disease. The damaging soundscapes of European urban areas contribute to 12,000 premature deaths and cost €40 billion (£34 billion) every year.




    As soundscape researchers, we are trying both to understand the effects of noise on wild nature and humans, and to learn how to minimise them. Part of the solution involves adapting landscape design to build towns and cities that not only limit harmful noise pollution but also produce beneficial soundscapes. These can help people and wildlife engage with their surroundings and navigate more easily through them.

    For example, people might be drawn to vibrant chatter from a nearby street or use the sound of a river to place themselves within the mental map of their neighbourhood. Paying attention to soundscapes within the landscape design process can create a stronger sense of place, linking us more closely to our surroundings.

    Many cities tackle noise at its source through urban design. In Barcelona, 57% of people are regularly exposed to excessive noise levels. The “superblocks” initiative – where motorised traffic is limited to peripheral roads around groups of buildings in the city – has allowed the pedestrianised inner streets to be opened up for people, planting and wildlife. This has created tranquil and rich local soundscapes and improved the population’s health in these areas.

    Landscape interventions, such as tree buffers, earth banks and noise walls, can limit noise propagation through the environment. At Buitenschot Park in the Netherlands, landscape architects have designed ridges or earth banks that absorb and disperse ground-level noise from the nearby Schiphol airport. These sculptural landforms were inspired by local observations that noise reduced with the ploughing of fields near the airport. The similar use of noise reduction surfaces, such as the low-noise asphalt currently being tested in Paris, also help to limit the spread of unwanted sound.

    Changes to the landscape also alter the perception of noise by the listener. Adding favourable sounds, such as flowing water, can draw attention away from traffic noise. Soundscape projects that include green spaces help increase biodiversity and engage citizens at the heart of the city. Some UK initiatives such as Bristol soundwalks and London’s Sounder City strategy involve the mapping of such quiet spaces to explain their purpose and encourage their use.

    Noise beyond cities

    Noise is not just an urban issue. Rural landscapes are adversely affected by agriculture, quarrying and tourism. Historically, rural landscapes have been afforded greater protection from noise than their urban counterparts. The UK national parks were originally designated to allow for the “quiet enjoyment” of countryside areas, while the tranquillity maps published two decades ago by the countryside charity Campaign to Protect Rural England sought to protect peaceful areas across the country.

    Today, rewilding and habitat restoration can play an important role in returning more natural soundscapes with a better balance of non-human and human soundmakers. Restoring wetlands, woodlands and grasslands increases vocalising species, like birds. This benefits both wildlife and people, enabling nature connection and improving environmental quality. By considering sound as a key element of sustainability and resilience, spaces can support biodiversity while enhancing the wellbeing and quality of life of the people in these communities.




    Carlos Abrahams works for the ecological consultancy Baker Consultants Ltd and owns shares in Soil Acoustics Ltd. He has received research funding from Innovate UK in relation to soil ecoacoustics.

    Usue Ruiz-Arana does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How to design landscapes that enhance natural sounds and minimise noise pollution – https://theconversation.com/how-to-design-landscapes-that-enhance-natural-sounds-and-minimise-noise-pollution-252859

    MIL OSI – Global Reports

  • MIL-OSI Global: A neuroscientist explains why it’s impossible for AI to ‘understand’ language

    Source: The Conversation – Canada – By Veena D. Dwivedi, Director – Centre for Neuroscience; Professor – Psychology | Neuroscience, Brock University

    Language that refers to neural networks in AI is misleading. (Shutterstock)

    As meaning-makers, we use spoken or signed language to understand our experiences in the world around us. The emergence of generative artificial intelligence such as ChatGPT (using large language models) calls into question the very notion of how to define “meaning.”

    One popular characterization of AI tools is that they “understand” what they are doing. Nobel laureate and AI pioneer Geoffrey Hinton said: “What’s really surprised me is how good neural networks are at understanding natural language — that happened much faster than I thought…. And I’m still amazed that they really do understand what they’re saying.”

    Hinton repeated this claim in an interview with Adam Smith, chief scientific officer for Nobel Prize Outreach. In it, Hinton stated that “neural nets are much better at processing language than anything ever produced by the Chomskyan school of linguistics.”

    Chomskyan linguistics refers to American linguist Noam Chomsky’s theories about the nature of human language and its development. Chomsky proposes that there is a universal grammar innate in humans, which allows for the acquisition of any language from birth.

    I’ve been researching how humans understand language since the 1990s, including more than 20 years of studies on the neuroscience of language. This has included measuring brainwave activity as people read or listen to sentences. Given my experience, I have to respectfully disagree with the idea that AI can “understand” — despite the growing popularity of this belief.

    Geoffrey Hinton’s response to receiving the Nobel prize in physics for his work in AI.

    Generating text

    First, it’s unfortunate that most people conflate text on a screen with natural language. Written text is related to — but not the same thing as — language.

    For example, the same language can be represented by vastly different visual symbols. Look at Hindi and Urdu, for instance. At conversational levels, these are mutually intelligible and therefore considered the same language by linguists. However, they use entirely different writing scripts. The same is true for Serbian and Croatian. Written text is not the same thing as “language.”

    Next, let’s take a look at the claim that machine learning algorithms “understand” natural language. Linguistic communication mostly happens face-to-face, in a particular environmental context shared between the speaker and listener, alongside cues such as spoken tone and pitch, eye contact and facial and emotional expressions.

    The importance of context

    There is a lot more to understanding what a person is saying than merely being able to comprehend their words. Even babies, who are not experts in language yet, can comprehend context cues.

    Take, for example, the simple sentence: “I’m pregnant,” and its interpretations in different contexts. If uttered by me, at my age, it’s likely my husband would drop dead with disbelief. Compare that level of understanding and response to a teenager telling her boyfriend about an unplanned pregnancy, or a wife telling her husband the news after years of fertility treatments.

    In each case, the message recipient ascribes a different sort of meaning — and understanding — to the very same sentence.

    In my own recent research, I have shown that even an individual’s emotional state can alter brainwave patterns when processing the meaning of a sentence. Our brains (and thus our thoughts and mental processes) are never without emotional context, as other neuroscientists have also pointed out.

    So, while some computer code can respond to human language in the form of text, it does not come close to capturing what humans — and their brains — accomplish in their understanding.

    It’s worth remembering that when workers in AI talk about neural networks, they mean computer algorithms, not the actual, biological brain networks that characterize brain structure and function. Imagine constantly confusing the word “flight” (as in birds migrating) with “flight” (as in airline routes) – this could lead to some serious misunderstandings!

    Finally, let’s examine the claim about neural networks processing language better than theories produced by Chomskyan linguistics. This field assumes that all human languages can be understood via grammatical systems (in addition to context), and that these systems are related to some universal grammar.

    Chomsky conducted research on syntactic theory as a paper-and-pencil theoretician. He did not conduct experiments on the psychological or neural bases of language comprehension. His ideas in linguistics are absolutely silent on the mechanisms underlying sentence processing and understanding.

    What the Chomskyan school of linguistics does do, however, is ask questions about how human infants and toddlers can learn language with such ease, barring any neurobiological deficits or physical trauma.

    There are at least 7,000 languages on the planet, and no one gets to pick where they are born. That means the human brain must be ready to comprehend and learn the language of their community at birth.

    Regardless of where a child is born, the human brain is capable of acquiring any language.
    (Unsplash/tommao wang), CC BY

    From this fact about language development, Chomsky posited an (abstract) innate module for language learning — not processing. From a neurobiological standpoint, the brain has to be ready to understand language from birth.

    While there are plenty of examples of language specialization in infants, the precise neural mechanisms are still unknown, though not unknowable. Objects of study become unknowable, however, when scientific terms are misused or misapplied. And this is precisely the danger: conflating AI with human understanding can lead to dangerous consequences.

    Veena D. Dwivedi receives funding from the Canada Foundation for Innovation, the Social Sciences and Humanities Research Council of Canada, and Brock University.

    ref. A neuroscientist explains why it’s impossible for AI to ‘understand’ language – https://theconversation.com/a-neuroscientist-explains-why-its-impossible-for-ai-to-understand-language-246540

    MIL OSI – Global Reports

  • MIL-OSI Global: Stop the ‘good’ vs ‘bad’ snap judgments and watch your world become more interesting

    Source: The Conversation – USA – By Lorraine Besser, Professor of Philosophy, Middlebury

    Sticking to just thumbs-up or thumbs-down limits how you engage with the world. PM Images/Photodisc via Getty Images

    How many times have you used the words “good” or “bad” today?

    From checking your weather app to monitoring the progress you’ve made on your to-do list, to scrolling through social media, opportunities to make snap evaluations abound. And the more you sort things into these categories, the more instinctive making these judgments becomes. You may find yourself filtering everything that comes your way in terms of “good” or “bad.”

    A dark cloud triggers “bad,” a social media post of baby animals triggers “good,” a news story about a political scuffle triggers “bad.” Whether you think something is good or bad, or worthy of a like or not, is an important piece of information. But if that categorization is the only thing that’s on your mind, the only lens through which you interpret the world, you’ll miss out on a lot.

    I’m a philosopher who specializes in happiness, well-being and the good life. I study how one’s state of mind influences one’s experiences of the world.

    In my recent book “The Art of the Interesting,” I explore the ways the evaluative perspective squashes your ability to experience psychological richness and other positive dimensions of life. The more you instinctively react with a “good” or a “bad,” the less of the world you take in. You’ll be less likely to engage your mind, exercise curiosity and have interesting experiences.

    Evaluation narrows your mind

    When you instinctively label something as good or bad, you focus only on the features that make that thing good or bad.

    A storm cloud has so much more to it than a simple ‘good’ or ‘bad’ label allows for.
    Pobytov/E+ via Getty Images

    You look outside, and all you see is the darkness of the clouds, threatening your plans for the day. You don’t notice the cooling shade those clouds create, nor the dramatic ways the wind makes them morph. You don’t notice the flowers unfurling, nor the child walking by who is also looking up at the clouds, but with a wide-eyed look of wonder.

    When snap evaluations reign, you effectively shut yourself off from a wide range of possible experiences. When everything around you is just good or bad, nothing can be perplexing, mysterious or intriguing. Nothing can be simply new, or simply challenging, or simply stimulating. Nothing is interesting, for your mind has filtered out these possible sources of cognitive engagement. It sees what it expects, and nothing else.

    Open your mind for more psychological richness

    Snap evaluations narrow your perspective and limit your mind’s potential to connect and engage with other aspects of your experiences. But you can unlock this potential simply by resisting any instinct to judge and instead viewing the world without trying to evaluate what you see.

    Right away, you’ll start to notice more, and you’ll activate your mind’s internal drives for curiosity and exploration.

    Freed from the dead-end judgments of good/bad, you can explore what is novel, allow yourself to be challenged, and tackle the complexities inherent to human experiences. Traffic jams can become sources of intrigue, rather than just a bad way to start your day. Delicious meals won’t just taste good − they spark your curiosity and stimulate your creativity. You’ll go from seeing a co-worker as difficult and irritating to recognizing them as an individual with human imperfections who’s deserving of your compassion.

    You’ll also feel the pains, struggles and rewards that arise through these mental engagements. You’ll experience rich, intense moments and a greater range of emotions. You’ll find your life chock-full of unusual and unique experiences with very few instances of boredom and monotony.

    Over time, your mind will become more adept at finding connections, exercising creativity and operating from a place of cognitive complexity. You’ll start to view the world more holistically, as full of connections waiting to be discovered.

    All of these are signs that your life has become more psychologically rich.

    Your same old world opens up around you when you stop judging it.
    LeoPatrizi/E+ via Getty Images

    Expand your mind, expand your sense of self

    Psychological richness and, more generally, experiences of novelty and interestingness are valuable on their own. But there’s evidence that they’re also important due to their effects on your sense of self. When you engage in new, interesting activities, you not only broaden your horizons and develop fresh perspectives, but you also become more confident in your ability to do whatever comes next. In these ways, you expand your very sense of self.

    The connection between psychological richness and self-expansion is intuitive. Novel, interesting activities stimulate the mind, challenging it to engage and explore. This process can expand your confidence in your abilities and provide you with a greater sense of control over your environment. As one’s sense of self expands, one’s very presence within the world shifts.

    One recent study explored the influence of psychological richness on pro-environmental behavior. While it’s common to feel sad, anxious, angry, powerless and helpless in the face of climate change, developing psychological richness can transform these negative attitudes.

    Researchers found that people who experience psychological richness were more willing to engage in sustainable activities. They believe this correlation is mediated by self-expansion, which helps people feel more confident that their actions will have an impact on the daunting problem of climate change.

    Cut out good and bad, go for interesting instead

    Everyone has the capacity to develop a sense of presence and agency in the world that enhances the very experience of life. A habit of snap evaluations inhibits this capacity, but you can train your mind to be more apt to engage and explore.

    The easiest way to do this?

    Stop saying, or thinking, “good” and “bad.” When you find yourself inclined to do so, force yourself to say something else. Start right now and begin your journey to engage with the world in a more rewarding way.

    Lorraine Besser does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Stop the ‘good’ vs ‘bad’ snap judgments and watch your world become more interesting – https://theconversation.com/stop-the-good-vs-bad-snap-judgments-and-watch-your-world-become-more-interesting-252690

  • MIL-OSI Global: How illicit markets fueled by data breaches sell your personal information to criminals

    Source: The Conversation – USA – By Thomas Holt, Professor of Criminal Justice, Michigan State University

    Criminals often buy illicit information with cryptocurrencies. Boris Zhitkov via Getty Images

    Every year, massive data breaches harm the public. The targets are email service providers, retailers and government agencies that store information about people. Each breach includes sensitive personal information such as credit and debit card numbers, home addresses and account usernames and passwords from hundreds of thousands – and sometimes millions – of people.

    When National Public Data, a company that does online background checks, was breached in 2024, criminals gained the names, addresses, dates of birth and national identification numbers such as Social Security numbers of 170 million people in the U.S., U.K. and Canada. The same year, hackers who targeted Ticketmaster stole the financial information and personal data of more than 560 million customers.

    As a criminologist who researches cybercrime, I study the ways that hackers and cybercriminals steal and use people’s personal information. Understanding the people involved helps us to better recognize the ways that hacking and data breaches are intertwined. In so-called stolen data markets, hackers sell personal information they illegally obtain to others, who then use the data to engage in fraud and theft for profit.

    The quantity problem

    Every piece of personal data captured in a data breach – a passport number, Social Security number or login for a shopping service – has inherent value. Offenders can use the information in different ways. They can assume someone else’s identity, make a fraudulent purchase or steal services such as streaming media or music.

    The quantity of information, whether Social Security numbers or credit card details, that can be stolen through data breaches is more than any one group of criminals can efficiently process, validate or use in a reasonable amount of time. The same is true for the millions of email account usernames and passwords, or access to streaming services that data breaches can expose.

    This quantity problem has enabled the sale of information, including personal financial data, as part of the larger cybercrime online economy.

    The sale of data, also known as carding, refers to the misuse of stolen credit card numbers or identity details. These illicit data markets began in the mid-1990s through the use of credit card number generators used by hackers. They shared programs that randomly generated credit card numbers and details and then checked to see whether the fake account details matched active cards that could then be used for fraudulent transactions.

    As more financial services were created and banks allowed customers to access their accounts through the internet, it became easier for hackers and cybercriminals to steal personal information through data breaches and phishing. Phishing involves sending convincing emails or SMS text messages to people to trick them into giving up sensitive information such as logins and passwords, often by clicking a false link that seems legitimate.

    One of the first phishing schemes targeted America Online users to get their account information to use their internet service at no charge.

    Selling stolen data online

    The large amount of information criminals were able to steal from such schemes led to more vendors offering stolen data to others through different online platforms.

    In the late 1990s and early 2000s, offenders used Internet Relay Chat, or IRC channels, to sell data. IRC was effectively like modern instant messaging systems, letting people communicate in real time through specialized software. Criminals used these channels to sell data and hacking services in an efficient place.

    In the early 2000s, vendors transitioned to web forums where individuals advertised their services to other users. Forums quickly gained popularity and became successful businesses with vendors selling stolen credit cards, malware and related goods and services to misuse personal information and enable fraud.

    One of the more prominent forums from this time was ShadowCrew, which formed in 2002 and operated until it was taken down by a joint law enforcement operation in 2004. Its members trafficked over 1.7 million credit cards in less than three years.

    Forums continue to be popular, though starting in the early 2010s vendors transitioned to running their own web-based shops on the open internet and the dark web, an encrypted portion of the web that can be accessed only through specialized browsers such as Tor. These shops have their own web addresses and distinct branding to attract customers, and they work in the same way as other e-commerce stores. More recently, vendors of stolen data have also begun to operate on messaging platforms such as Telegram and Signal to quickly connect with customers.

    Cybercriminals and customers

    Many of the people who supply and operate the markets appear to be cybercriminals from Eastern Europe and Russia who steal data and then sell it to others. Markets have also been observed in Vietnam and other parts of the world, though they do not get the same visibility in the global cybersecurity landscape.

    The customers of stolen data markets may reside anywhere in the world, and their demands for specific data or services may drive data breaches and cybercrime to provide the supply.

    The goods

    Stolen data is usually available in individual lots, such as a person’s credit or debit card and all the information associated with the account. These pieces are individually priced, with costs differing depending on the type of card, the victim’s location and the amount of data available related to the affected account.

    Vendors frequently offer discounts and promotions to buyers to attract customers and keep them loyal. This is often done with credit or debit cards that are about to expire.

    Some vendors also offer distinct products such as credit reports, Social Security numbers and login details for different paid services. The price for pieces of information varies. A recent analysis found credit card data sold for US$50 on average, while Walmart logins sold for $9. However, the pricing can vary widely across vendors and markets.

    Illicit payments

    Vendors typically accept payment through cryptocurrencies such as Bitcoin that are difficult for law enforcement to trace.

    Bitcoin is often used as payment for illicit information because it’s difficult to trace.
    AP Photo/Charles Krupa

    Once payment is received, the vendor releases the data to the customer. Customers take on a great deal of the risk in this market because they cannot go to the police or a market regulator to complain about a fraudulent sale.

    Vendors may send customers dead accounts that cannot be used, or no data at all. Such scams are common in a market where buyers can rely only on signals of vendor trust to improve the odds that the data they purchase will be delivered and, if it is, that it will pay off. If the data they buy is functional, they can use it to make fraudulent purchases or financial transactions for profit.

    The rate of return can be exceptional. An offender who buys 100 cards for $500 can recoup costs if only 20 of those cards are active and can be used to make an average purchase of $30. The result is that data breaches are likely to continue as long as there is demand for illicit, profitable data.
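    The arithmetic behind that return can be checked in a few lines. This sketch uses only the illustrative figures from the example above (100 cards, $500 cost, 20 active cards, $30 average purchase):

    ```python
    # Back-of-the-envelope check of the rate of return described above,
    # using the article's hypothetical figures.
    cards_bought = 100
    total_cost = 500      # dollars paid for the batch
    active_rate = 0.20    # only 20 of the 100 cards turn out to be usable
    avg_purchase = 30     # average fraudulent purchase per active card

    active_cards = round(cards_bought * active_rate)  # 20 usable cards
    revenue = active_cards * avg_purchase             # value extracted
    profit = revenue - total_cost                     # net gain despite dead cards

    print(active_cards, revenue, profit)  # 20 600 100
    ```

    Even with 80% of the batch worthless, the buyer clears a $100 profit, which is why scam-ridden markets for stolen data remain viable.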

    This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it.

    Thomas Holt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How illicit markets fueled by data breaches sell your personal information to criminals – https://theconversation.com/how-illicit-markets-fueled-by-data-breaches-sell-your-personal-information-to-criminals-251586

  • MIL-OSI Global: Cuts to school lunch and food bank funding mean less fresh produce for children and families

    Source: The Conversation – USA – By Marlene B. Schwartz, Professor of Human Development and Family Sciences, University of Connecticut

    For many American children, school lunches are their most nutritious meal of the day. SDI Productions/iStock via Getty Images Plus

    The U.S. government recently cut more than US$1 billion in funding to two long-running programs that helped schools and food banks feed children and families in need. The U.S. Department of Agriculture says the reductions are a “return to long-term, fiscally responsible initiatives.” But advocacy groups say the cuts will hurt millions of Americans.

    The reductions came just days before the release of the Trump administration’s Make America Healthy Again report, an analysis of the factors causing chronic disease in children. One of those factors, the report says, is poor diet.

    Dr. Marlene Schwartz, a professor of human development and family sciences and director of the Rudd Center for Food Policy & Health at the University of Connecticut, discusses why cutting the Local Food for Schools and the Local Food Purchase Assistance programs means less fresh food will be available to children and families – and could hurt local farmers and ranchers too.

    Dr. Marlene Schwartz discusses why these programs were cut.

    The Conversation has collaborated with SciLine to bring you highlights from the discussion, edited here for brevity and clarity.

    Could you explain the two programs that were cut?

    Marlene Schwartz: Most schools were eligible for Local Food for Schools, a $660 million program, which has now been cut. The funds for Local Food for Schools were on top of the reimbursement that schools get for meals and would have allowed them to buy more local, fresh food.

    The Local Food Purchase Assistance program was designed primarily for food banks. Again, the idea was to provide federal money, about $500 million, so food banks could buy from local farmers and support local agriculture. But that too was cut.

    How will these cuts affect families and schoolchildren?

    Schwartz: Many children eat two of their meals, five days a week, at school. During the 2022-2023 school year, about 28 million kids ate lunch at school. More than 14 million had breakfast there.

    Having fresh, local produce in the school cafeteria provides the opportunity to introduce children to more fruits and vegetables and teach them about the food grown in their own communities. Think about how powerful a lesson about nutrition and local agriculture can be when you not only hear and read about it but can taste it too.

    How will these cuts affect farmers and ranchers?

    Schwartz: When the funding was there, the farmers and ranchers knew they had guaranteed buyers for their products. So the loss of these funds, especially so quickly, will have a very negative effect on them. Suddenly, the buyers they counted on don’t have the money to buy from them.

    Food banks provide fresh foods as well as canned.
    RyanJLane/E+ via Getty Images

    How does nutritious food in schools impact kids?

    Schwartz: Both the National School Lunch Program and the School Breakfast Program are required to comply with the Dietary Guidelines for Americans, so they’ve always had nutrition standards. These guidelines are updated every five years to reflect the most recent science and public health needs.

    The regulations on school meal nutrition were strengthened significantly with the 2010 Healthy, Hunger-Free Kids Act. We’ve done a number of studies showing that because of these changes, healthier meals are available at schools, and children eat better. The U.S. Department of Agriculture also did a large national study that reported much the same.

    Another study looked at the nutritional quality of the food at school, from home and at restaurants. It found that school food was the healthiest of all. Many people were surprised by this, but when you think about it, schools are the only setting required to follow federal and state nutrition regulations – restaurants and grocery stores don’t have to do that.

    But getting kids to eat nutritious food can be a challenge.

    Schwartz: We’ve known for decades that American children are not eating enough fruits and vegetables. We know they’re eating too much added sugar, saturated fat and sodium.

    This is due in part to the millions of dollars food companies spend to entice children to eat more sugary cereals, sweetened beverages and fast food.

    I think the best nutrition education happens on your plate. By maximizing the quality of food served in schools, policymakers can influence the diets of millions of children every single day.

    How nutritious are the foods at food banks?

    Schwartz: Food banks often measure their success in terms of the pounds of food they distribute into a community. But families relying on the charitable food system often have a higher risk of diet-related illness – like high blood pressure or Type 2 diabetes – and many want healthier foods.

    In response, food banks, which nationwide serve about 50 million Americans, have made a concerted effort to improve the nutritional quality of their food. There’s now a system to help food banks consistently track the nutritional quality of what they provide.

    Watch the full interview to hear more.

    SciLine is a free service based at the American Association for the Advancement of Science, a nonprofit that helps journalists include scientific evidence and experts in their news stories.

    Marlene B. Schwartz receives funding from the USDA, National Institutes of Health, Centers for Disease Control, Robert Wood Johnson Foundation, Partnership for a Healthier America, and the CT State Department of Education.

    ref. Cuts to school lunch and food bank funding mean less fresh produce for children and families – https://theconversation.com/cuts-to-school-lunch-and-food-bank-funding-mean-less-fresh-produce-for-children-and-families-256772

  • MIL-OSI Global: Game theory explains why reasonable parents make vaccine choices that fuel outbreaks

    Source: The Conversation – USA – By Y. Tony Yang, Endowed Professor of Health Policy and Associate Dean, George Washington University

    Vaccination is an example of how people make decisions in an interconnected system. MichelleLWilson via iStock/Getty Images Plus

    When outbreaks of vaccine-preventable diseases such as measles occur despite highly effective vaccines being available, it’s easy to conclude that parents who don’t vaccinate their children are misguided, selfish or have fallen prey to misinformation.

    As professors with expertise in vaccine policy and health economics, we argue that the decision not to vaccinate isn’t simply about misinformation or hesitancy. In our view, it involves game theory, a mathematical framework that helps explain how reasonable people can make choices that collectively lead to outcomes that endanger them.

    Game theory reveals that vaccine hesitancy is not a moral failure, but simply the predictable outcome of a system in which individual and collective incentives aren’t properly aligned.

    Game theory meets vaccines

    Game theory examines how people make decisions when their outcomes depend on what others choose. In his research on the topic, Nobel Prize-winning mathematician John Nash, portrayed in the movie “A Beautiful Mind,” showed that in many situations, individually rational choices don’t automatically create the best outcome for everyone.

    Vaccination decisions perfectly illustrate this principle. When a parent decides whether to vaccinate their child against measles, for instance, they weigh the small risk of vaccine side effects against the risks posed by the disease. But here’s the crucial insight: The risk of disease depends on what other parents decide. If nearly everyone vaccinates, herd immunity – essentially, vaccinating enough people – will stop the disease’s spread. But once herd immunity is achieved, individual parents may decide that not vaccinating is the less risky option for their kid.

    In other words, because of a fundamental tension between individual choice and collective welfare, relying solely on individual choice may not achieve public health goals.

    A 1963 poster featuring Wellbee, the CDC’s national symbol of public health, encouraged people to get the polio vaccine.
    CDC via Wikimedia Commons

    This makes vaccine decisions fundamentally different from most other health decisions. When you decide whether to take medication for high blood pressure, your outcome depends only on your choice. But with vaccines, everyone is connected.

    This interconnectedness has played out dramatically in Texas, where the largest U.S. measles outbreak in a decade originated. As vaccination rates dropped in certain communities, the disease – once declared eliminated in the U.S. – returned. One county’s vaccination rate fell from 96% to 81% over just five years. Considering that about 95% of people in a community must be vaccinated to achieve herd immunity, the decline created perfect conditions for the current outbreak.

    This isn’t coincidence; it’s game theory playing out in real time. When vaccination rates are high, not vaccinating seems rational for each individual family, but when enough families make this choice, collective protection collapses.

    The free rider problem

    This dynamic creates what economists call a free rider problem. When vaccination rates are high, an individual might benefit from herd immunity without accepting even the minimal vaccine risks. Game theory predicts something surprising: Even with a hypothetically perfect vaccine – faultless efficacy, zero side effects – voluntary vaccination programs will never achieve 100% coverage. Once coverage is high enough, some rational individuals will always choose to be free riders, benefiting from the herd immunity provided by others.
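    That prediction can be illustrated numerically. The following is a toy best-response model of my own construction, with assumed risk figures rather than anything from the article: each individual compares the infection risk of staying unvaccinated, which falls as coverage rises, against a tiny vaccine risk, and coverage drifts toward the point where the two balance.

    ```python
    # Toy free-rider model (illustrative assumptions, not the authors' model).
    P_CRIT = 0.95         # assumed herd-immunity threshold, roughly measles-like
    RISK_DISEASE = 0.002  # assumed risk of serious harm if infected, unvaccinated
    RISK_VACCINE = 1e-6   # assumed (tiny) perceived risk from the vaccine itself

    def infection_risk(coverage):
        # Toy assumption: risk to an unvaccinated person falls linearly to zero
        # as coverage approaches the herd-immunity threshold.
        return RISK_DISEASE * max(0.0, 1.0 - coverage / P_CRIT)

    coverage = 0.5  # start well below herd immunity
    for _ in range(10_000):
        if infection_risk(coverage) > RISK_VACCINE:
            coverage = min(1.0, coverage + 0.0001)  # vaccinating looks safer
        else:
            coverage = max(0.0, coverage - 0.0001)  # free riding looks safer

    # Coverage settles just below the herd-immunity threshold, never at 100%.
    print(coverage < P_CRIT)  # True
    ```

    The equilibrium sits where the two risks are equal; shrinking the assumed vaccine risk toward zero pushes it closer to the threshold, but voluntary coverage never reaches 100%.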

    And when rates drop – as they have, dramatically, over the past five years – disease models predict exactly what we’re seeing: the return of outbreaks.

    Game theory reveals another pattern: For highly contagious diseases, vaccination rates tend to decline rapidly following safety concerns, while recovery occurs much more slowly. This, too, is a mathematical property of the system because decline and recovery have different incentive structures. When safety concerns arise, many parents get worried at the same time and stop vaccinating, causing vaccination rates to drop quickly.

    But recovery is slower because it requires both rebuilding trust and overcoming the free rider problem – each parent waits for others to vaccinate first. Small changes in perception can cause large shifts in behavior. Media coverage, social networks and health messaging all influence these perceptions, potentially moving communities toward or away from these critical thresholds.

    Mathematics also predicts how people’s decisions about vaccination can cluster. As parents observe others’ choices, local norms develop – so the more parents skip the vaccine in a community, the more others are likely to follow suit.

    Game theorists refer to the resulting pockets of low vaccine uptake as susceptibility clusters. These clusters allow diseases to persist even when overall vaccination rates appear adequate. A 95% statewide or national average could mean uniform vaccine coverage, which would prevent outbreaks. Alternatively, it could mean some areas with near-100% coverage and others with dangerously low rates that enable local outbreaks.
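    The point about averages concealing clusters can be shown with made-up coverage numbers. In this sketch (illustrative figures, not data from the article), two regions share the same 95% average but carry very different outbreak risk:

    ```python
    # Same 95% average vaccination coverage, very different outbreak risk.
    THRESHOLD = 0.95  # approximate herd-immunity threshold for measles

    uniform   = [0.95, 0.95, 0.95, 0.95]  # every community right at the threshold
    clustered = [1.00, 1.00, 1.00, 0.80]  # same average, one susceptible pocket

    def mean(xs):
        return round(sum(xs) / len(xs), 2)

    def communities_at_risk(xs):
        # Count communities below the herd-immunity threshold.
        return sum(1 for c in xs if c < THRESHOLD)

    print(mean(uniform), communities_at_risk(uniform))      # 0.95 0
    print(mean(clustered), communities_at_risk(clustered))  # 0.95 1
    ```

    Both regions report a 95% average, but only the clustered one contains a community where a single imported case can seed a local outbreak.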

    Not a moral failure

    All this means that the dramatic fall in vaccination rates was predicted by game theory – and therefore more a reflection of system vulnerability than of a moral failure of individuals. What’s more, blaming parents for making selfish choices can also backfire by making them more defensive and less likely to reconsider their views.

    Much more helpful would be approaches that acknowledge the tensions between individual and collective interests and that work with, rather than against, the mental calculations informing how people make decisions in interconnected systems.

    People make decisions by balancing individual and collective interests – a calculation that’s crucial for how infectious diseases spread.

    Research shows that communities experiencing outbreaks respond differently to messaging that frames vaccination as a community problem versus messaging that implies moral failure. In a 2021 study of a community with falling vaccination rates, approaches that acknowledged parents’ genuine concerns while emphasizing the need for community protection made parents 24% more likely to consider vaccinating, while approaches that emphasized personal responsibility or implied selfishness actually decreased their willingness to consider it.

    This confirms what game theory predicts: When people feel their decision-making is under moral attack, they often become more entrenched in their positions rather than more open to change.

    Better communication strategies

    Understanding how people weigh vaccine risks and benefits points to better approaches to communication. For example, clearly conveying risks can help: The 1-in-500 death rate from measles far outweighs the extraordinarily rare serious vaccine side effects. That may sound obvious, but it’s often missing from public discussion. Also, different communities need different approaches – high-vaccination areas need help staying on track, while low-vaccination areas need trust rebuilt.

    Consistency matters tremendously. Research shows that when health experts give conflicting information or change their message, people become more suspicious and decide to hold off on vaccines. And dramatic scare tactics about disease can backfire by pushing people toward extreme positions.

    Making vaccination decisions visible within communities – through community discussions and school-level reporting, where possible – can help establish positive social norms. When parents understand that vaccination protects vulnerable community members, like infants too young for vaccines or people with medical conditions, it helps bridge the gap between individual and collective interests.

    Health care providers remain the most trusted source of vaccine information. When providers understand game theory dynamics, they can address parents’ concerns more effectively, recognizing that for most people, hesitancy comes from weighing risks rather than opposing vaccines outright.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Game theory explains why reasonable parents make vaccine choices that fuel outbreaks – https://theconversation.com/game-theory-explains-why-reasonable-parents-make-vaccine-choices-that-fuel-outbreaks-256975

  • MIL-OSI Global: Detroit voters have an opportunity to pick a mayor who will ease zoning, improve transit and protect long-term residents

    Source: The Conversation – USA – By Brian J. Connolly, Assistant Professor of Business Law, University of Michigan

    Five of Detroit’s mayoral candidates discuss their ideas for the future of the city. Detroit PBS

    Five of the nine candidates in Detroit’s mayoral contest debated on May 29, 2025, during the annual Mackinac Policy Conference.

    When asked about outgoing Mayor Mike Duggan’s 11-year tenure, many of the candidates praised him for skillfully steering Detroit through bankruptcy and attracting new business investment.

    But the candidates also saw an opportunity to do more.

    “Without a doubt, we have to ensure that more investment comes back into our neighborhoods and that we’re activating our commercial corridors,” the race’s front-runner, Detroit City Council President Mary Sheffield, said.

    Helping Detroit residents improve their neighborhoods will be an important task for the city’s next mayor. I do not live in Detroit, but my family lived there for generations before my grandparents joined the white flight from the city in the 1970s. And my research on housing, infrastructure and land use law offers some ideas for how the next mayor could encourage investment while at the same time improving social equity.

    Duggan’s legacy

    By most accounts, the Motor City under Duggan has been an urban revitalization success story.

    Once the nation’s murder capital, Detroit has seen its crime rate fall dramatically.

    And after experiencing the largest-ever municipal bankruptcy, the city boasts an investment-grade credit rating. For the past two years, the city has gained population after decades of losses. But many of the city’s neighborhoods, from Brightmoor to Jefferson-Chalmers, have not experienced the same economic surge as its booming downtown.

    Detroit’s Brightmoor neighborhood has an artsy vibe – and a high crime rate.
    Patrick Gorski/NurPhoto via Getty Images

    In the city center, offices are being converted to apartments, Michigan’s second-tallest building is rising along with other new developments, and the city has hosted major national events such as the NFL draft. Yet some of Detroit’s outlying areas still suffer from disinvestment and abandonment, poor infrastructure, underperforming schools and crime.

    Many Detroiters are concerned the city’s boom might displace longtime residents if it causes housing prices to increase dramatically or removes affordable homes from the market.

    Detroit’s voters will narrow the field to two candidates on Aug. 5. To help voters evaluate the candidates’ positions between now and then, here are some research-backed ideas for improving life in the city.

    Make it easy to build

    Detroit’s next mayor can make it easier to build new homes and businesses in the city’s neighborhoods.

    Repopulating neighborhoods reduces visual blight, brings life to vacant areas and improves the city’s fiscal health by bringing in new tax revenue. Population growth also supports neighborhood businesses that create jobs and serve the community. And it would mitigate the city’s recent, steep rise in housing prices by adding new supply to the market.

    Easing zoning and building rules is a good place to start. U.S. cities such as Minneapolis and Portland have recently reformed zoning laws to simplify housing construction. They’ve also modified single-family zoning citywide to allow multiplexes and accessory dwelling units. Those interventions have resulted in a small increase in new housing. Even more construction has taken place in cities such as Denver that have allowed higher-density development along major corridors – projects that can be more easily scaled and financed due to their larger size and attractiveness to investors.

    To date, Detroit has not adopted any of these reforms.

    Another way to spur building is to offer developers a predictable approval process. Even if cities maintain building height restrictions, setbacks and design requirements – as Detroit has – predictable procedures reduce development costs and assure investors that projects can be completed on time. For example, cities can shorten the time it takes to review a project. They can also avoid city council or planning commission public hearings with subjective review criteria, which Detroit currently allows under its zoning laws.

    Detroit’s initial efforts to update its zoning in 2018 stalled. Yet the city has an opportunity to become the nation’s easiest place to build, and doing so will ensure that it remains affordable while attracting investment.

    Improve transit service

    Detroit’s next mayor can aid its neighborhoods by improving transit service.

    Without a regional transit system, southeast Michigan remains heavily car-dependent. Yet a 2017 study showed that fewer than half of low-income Detroiters own cars. And of those who don’t own a car, 43% missed work, an appointment or something else due to a lack of transportation. Although the study is several years old, these figures likely haven’t improved, given the rising costs of housing and car ownership.

    Today, nearly one-third of Detroiters live in poverty – meaning, for a family of four, they earn less than US$32,000 per year – yet the national average annual cost of car ownership exceeds $12,000. Giving lower-income Detroiters a low-cost, reliable means to get to work would benefit the city’s neighborhoods, residents and businesses.

    Expanding transit service has other benefits, too. Transit reduces traffic, encourages the healthy habit of walking to and from stops and improves air quality. Transit investments also increase land values around stations and bring new businesses to these neighborhoods. In addition to serving the needs of working Detroiters, more frequent and reliable bus service would increase neighborhood property values, according to research.

    Make property taxes fairer

    Since the city’s emergence from bankruptcy 11 years ago, housing wealth in Detroit has grown by $4.6 billion.

    Although a rise in land values signals investor confidence in the city and benefits its homeowners, high prices limit Detroiters’ ability to afford housing, the wealth is not shared with everyone, and there is heightened risk of displacing low-income residents.

    And, as candidates frequently mentioned during the debate, after more than 40 years of tax increases to make up for sliding property values, the city has one of the highest effective property tax rates in Michigan, over 2.8%, making housing even less affordable. Nevertheless, Detroit routinely abates taxes for major commercial developments such as Hudson’s Detroit and several downtown hotels, which some residents view as unfair.

    Detroit’s next mayor has an opportunity to reduce the property tax burden for residents and businesses, improve the system’s fairness, and use increasing land prices and new development for public benefit.

    Duggan proposed a land-value tax to replace the city’s property tax in 2023. Unlike property taxes, land-value taxes place a levy on the value of land, not structures on the land. These taxes create an incentive for owners to develop their properties for productive use rather than speculate on underutilized land.

    In a city like Detroit, with thousands of vacant properties, a land-value tax would encourage development by limiting the benefits of long-term land speculation. For lower-income homeowners and renters, the city could avoid displacement through exemptions and other mechanisms.

    Duggan’s proposal failed in the Michigan Legislature, which needs to approve changes to the property tax. But Detroit’s next mayor could revive this push.

    The next mayor could also press the Legislature for other tools, such as the authority to levy development impact fees to build parks and schools or provide social services in neighborhoods affected by new development.

    Michigan law allows the formation of special assessment districts, business improvement zones and other special taxing entities to provide public infrastructure. Expanding these tools may allow Detroit to leverage rising property values to provide public benefits such as streets or parks.

    Importantly, the city can gain better public services and infrastructure while encouraging development. Tools such as the city’s community benefits ordinance, which requires developers of large projects to negotiate with neighbors for services and amenities, look good on paper but can delay projects or mistake individuals’ interests for community needs. Similarly, affordable housing mandates often lead to counterproductive results such as discouraging new development or raising costs on market-rate housing.

    Brian J. Connolly does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Detroit voters have an opportunity to pick a mayor who will ease zoning, improve transit and protect long-term residents – https://theconversation.com/detroit-voters-have-an-opportunity-to-pick-a-mayor-who-will-ease-zoning-improve-transit-and-protect-long-term-residents-254540

    MIL OSI – Global Reports

  • MIL-OSI Global: In pardoning reality TV stars Todd and Julie Chrisley, Trump taps into a sense of persecution felt by his conservative Christian base

    Source: The Conversation – USA – By Diane Winston, Professor and Knight Center Chair in Media & Religion, USC Annenberg School for Communication and Journalism

    Savannah Chrisley, left, spearheaded a campaign to pardon her mother, Julie, and father, Todd, right. Noel Vasquez/Getty Images

    President Donald Trump has never met Todd Chrisley, the reality TV star whom he pardoned on May 27, 2025, along with Chrisley’s wife, Julie.

    But the pair have much in common.

    Both are admired by their fans for their brash personas and salty ripostes. Both enjoy lavish lifestyles: Trump is known for his real estate deals and rococo White House redecoration, and Chrisley for his entrepreneurial skill and acquisitions of sprawling properties.

    Quick-tempered tycoons, they live large and keep score – especially when people cross them.

    And maybe most importantly, both have run into legal trouble with Georgia prosecutors. In 2019, the U.S. Attorney’s Office for the Northern District of Georgia indicted the Chrisleys for fraud and tax evasion, and the Fulton County district attorney filed charges against Trump in 2023.

    In 2022, Todd and Julie Chrisley were tried in federal court in Atlanta, found guilty and sentenced to 12 and seven years in prison, respectively. A year later, a Fulton County grand jury indicted Trump as part of an alleged conspiracy to overturn the 2020 presidential election results in Georgia, a case that’s currently in limbo.

    After the Chrisleys went to prison, their daughter Savannah began campaigning for their release. Her efforts to win over prominent conservatives – including her outspoken support for Trump – led to a prime-time appearance at the 2024 Republican National Convention.

    “My family has been persecuted by rogue prosecutors due to our public profile and conservative beliefs,” she told the delegates and a television audience of 15 million viewers.

    Turning an insult into an accolade, she claimed prosecutors had called them the “Trumps of the South.”

    Her framing of her parents’ imprisonment aligns with Trump’s broader campaign narrative of victimization, redemption and retribution, which critics say he has continued to promote and carry out during his second term.

    Preaching perfection

    Like Trump, who starred on “The Apprentice” for 11 years, the Chrisleys had their own reality television show.

    “Chrisley Knows Best” aired on USA Network from 2014 to 2023. I’m familiar with the Chrisleys because I wrote about Todd in a 2018 book I co-edited on religion and reality television. The show was particularly popular among viewers in their 30s, who were fascinated by the Chrisleys’ extravagant lifestyle and Todd’s over-the-top personality.

    The self-proclaimed “patriarch of perfection,” Todd flew twice a month to Los Angeles from Atlanta, and later Nashville, to have his hair cut and highlighted. He spoke freely about using Botox and invited viewers into his room-size closet where his clothes were organized by color. No matter the time of day, Todd was camera-ready: buffed, manicured and dressed in designer clothes.

    The family enjoyed all the trappings of success: fancy cars, a palatial home and expensive vacations. Yet, in almost every episode, Todd made clear that his life, and theirs by extension, centered on family, religion and responsibility. In fact, many episodes revolved around Todd’s efforts to promote these values through his parenting lessons.

    On the one hand, Todd tried to teach responsibility and the value of hard work to his five children. On the other hand, he bribed and cajoled them into doing what he wanted. Todd seemed to have it both ways: His strictness and traditional values appealed to Christian viewers, but his sass and cussing won over secular audiences.

    But sometimes his words rang hollow. Todd talked a lot about work, but viewers rarely saw him at a job. He frequently quoted the Bible, but audiences seldom saw him in church. He extolled family, but a few years into the series, his two older children, Lindsie and Kyle, disappeared from the show.

    In 2023, the series disappeared, too. By then, the Chrisleys were in prison.

    Trump knows best

    On the day of his inauguration, when Trump pardoned or commuted the sentences of the roughly 1,500 people involved in the Jan. 6, 2021, insurrection, he vowed to “take appropriate action to correct past misconduct by the Federal Government related to the weaponization of law enforcement.”

    According to the president, the Chrisleys’ imprisonment was exactly that kind of misconduct – and his pardon the correction.

    “Your parents are going to be free and clean and I hope that we can do it by tomorrow,” Trump told Savannah Chrisley in a recorded phone conversation. “They’ve been given a pretty harsh treatment based on what I’m hearing.”

    Trump’s pardons, which have freed a number of conservatives convicted of fraud, may stem from his belief that he and many others have been falsely accused and persecuted by the elite, liberal establishment.

    But the pardons also strike home for his right-wing religious supporters, many of whom think that Democrats will do anything to quash their faith, including using the justice system to specifically target Christians.

    “We live in a nation founded on freedom, liberty and justice for all. Justice is supposed to be blind. But today, we have a two-faced justice system,” Savannah Chrisley said during her RNC speech. “Look at what they are doing to countless Christians and conservatives that the government has labeled them extremists or even worse.”

    While those claims have been disputed, eradicating anti-Christian bias, at home and abroad, has nevertheless become a centerpiece of Trump’s policies during his second term.

    The lawyers who prosecuted the Chrisleys had a different perspective. They called Todd and Julie “career swindlers who have made a living by jumping from one fraud scheme to another, lying to banks, stiffing vendors and evading taxes at every corner,” and whose reputations were “based on the lie that their wealth came from dedication and hard work.”

    The couple were ultimately found guilty of defrauding Atlanta-area banks of US$36 million by using falsified papers to apply for mortgages, obtaining false loans to repay older loans, and not repaying those loans. They also were convicted of hiding their true income from the IRS while owing $500,000 in back taxes.

    At his sentencing, Todd said that he intended to pay it all back. At a press conference after his pardon, he said he was convicted for something he did not do.

    Todd Chrisley holds a press conference on May 31, 2025, after his release from prison.

    In the days since their release, the Chrisleys announced they were filming a new reality show, which will air on Lifetime. The series will focus on the couple’s legal struggles, imprisonment, pardon and reunification.

    Thanks to the constitutional protections of the presidency, Trump’s reelection has shielded him from ongoing federal criminal prosecution. And now, thanks to the stroke of Trump’s pen, the “Trumps of the South” are back in business, too.

    Diane Winston does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. In pardoning reality TV stars Todd and Julie Chrisley, Trump taps into a sense of persecution felt by his conservative Christian base – https://theconversation.com/in-pardoning-reality-tv-stars-todd-and-julie-chrisley-trump-taps-into-a-sense-of-persecution-felt-by-his-conservative-christian-base-257932


  • MIL-OSI Global: Storm damage costs are often a mystery – that’s a problem for understanding extreme weather risk

    Source: The Conversation – USA – By John Nielsen-Gammon, Regents Professor of Atmospheric Sciences, Texas A&M University

    Hail can be destructive, yet the cost of the damage often isn’t publicly tracked. NOAA/NSSL

    On Jan. 5, 2025, at about 2:35 in the afternoon, the first severe hailstorm of the season dropped quarter-size hail in Chatham, Mississippi. According to the federal storm events database, there were no injuries, but it caused $10,000 in property damage.

    How do we know the storm caused $10,000 in damage? We don’t.

    That estimate is probably a best guess from someone whose primary job is weather forecasting. Yet these guesses, and thousands like them, form the foundation for publicly available tallies of the costs of severe weather.

    If the damage estimates from hailstorms are consistently lower in one county than the next, potential property buyers might think it’s because there’s less risk of hailstorms. Instead, it might just be because different people are making the estimates.

    Hail damage in Dallas in June 2012.
    Rondo Estrello/Flickr, CC BY-SA

    We are atmospheric scientists at Texas A&M University who lead the Office of the Texas State Climatologist. Through our involvement in state-level planning for weather-related disasters, we have seen county-scale patterns of storm damage over the past 20 years that just didn’t make sense. So, we decided to dig deeper.

    We looked at storm event reports for a mix of seven urban and rural counties in southeast Texas, with populations ranging from 50,000 to 5 million. We included all reported types of extreme weather. We also talked with people from the two National Weather Service offices that cover the area.

    Storm damage investigations vary widely

    Typically, two specific types of extreme weather receive special attention.

    After a tornado, the National Weather Service conducts an on-site damage survey, examining its track and destruction. That survey forms the basis for the official estimate of a tornado’s strength on the enhanced Fujita scale. Weather Service staff are able to make decent damage cost estimates from knowledge of home values in the area.

    They also investigate flash flood damage in detail, and loss information is available from the National Flood Insurance Program, the main source of flood insurance for U.S. homes.

    Tornadoes in May 2025 destroyed homes in communities in several states, including London, Ky.
    AP Photo/Timothy D. Easley

    Most other losses from extreme weather are privately insured, if they’re insured at all.

    Insured loss information is collected by reinsurance companies – the companies that insure the insurance companies – and gets tabulated for major events. Insurance companies use their own detailed information to try to make better decisions on rates than their competitors do, so event-based loss data by county from insurance companies isn’t readily available.

    Losing billion-dollar disaster data

    There’s one big window into how disaster damage has changed over the years in the U.S.

    The National Oceanic and Atmospheric Administration, or NOAA, compiled information for major disasters, including insured losses by state. Bulk data won’t tell communities or counties about their specific risk, but it enabled NOAA to calculate overall damage estimates, which it released as its billion-dollar disasters list.

    From that program, we know that the number and cost of billion-dollar disasters in the United States has increased dramatically in recent years. News articles and even scientific papers often point to climate change as the primary culprit, but a much larger driver has been the increasing number and value of buildings and other types of infrastructure, particularly along hurricane-prone coasts.

    Critics in the past year called for more transparency and vetting of the procedures used to estimate billion-dollar disasters. But that’s not going to happen, because NOAA in May 2025 stopped making billion-dollar disaster estimates and retired its user interface.

    Previous estimates can still be retrieved from NOAA’s online data archive, but with that program shut down, the window into current and future disaster losses and insurance claims is now closed.

    Emergency managers at the county level also make local damage estimates, but the resources they have available vary widely. They may estimate damages only when the total might be large enough to trigger a disaster declaration that makes relief funds available from the federal government.

    Patching together very rough estimates

    Without insurance data or county estimates, the local offices of the National Weather Service are on their own to estimate losses.

    There is no standard operating procedure that every office must follow. One office might choose to simply not provide damage estimates for any hailstorms because the staff doesn’t see how it could come up with accurate values. Others may make estimates, but with varying methods.

    The result is a patchwork of damage estimates. Accurate values are more likely for rare events that cause extensive damage. Loss estimates from more frequent events that don’t reach a high damage threshold are generally far less reliable.

    The number of severe hail reports in southeast Texas listed in the National Centers for Environmental Information’s storm events database is strongly correlated with population. The county with the most reports and greatest detail in those reports is home to Houston. Hailstorms in the three easternmost counties are rarely associated with damage estimates.
    John Nielsen-Gammon and B.J. Baule

    Do you want to look at local damage trends? Forget about it. For most extreme weather events, estimation methods vary over time and are not documented.

    Do you want to direct funding to help communities improve resilience to natural disasters where the need is greatest? Forget about it. The places experiencing the largest per capita damages depend not just on actual damages but on the different practices of local National Weather Service offices.

    Are you moving to a location that might be vulnerable to extreme weather? Companies are starting to provide localized risk estimates through real estate websites, but the algorithms tend to be proprietary, and there’s no independent validation.

    4 steps to improve disaster data

    We believe a few fixes could make NOAA’s storm events database and the corresponding values in the larger SHELDUS database, managed by Arizona State University, more reliable. Both databases include county-level disasters and loss estimates for some of those disasters.

    First, the National Weather Service could develop standard procedures for local offices for estimating disaster damages.

    Second, additional state support could encourage local emergency managers to make concrete damage estimates from individual events and share them with the National Weather Service. The local emergency manager generally knows the extent of damage much better than a forecaster sitting in an office a few counties away.

    Third, state or federal governments and insurance companies could agree to make public aggregate loss information at the county level or another scale that doesn’t jeopardize the privacy of policyholders. If all companies provide this data, there is no competitive disadvantage in doing so.

    Fourth, NOAA could create a small “tiger team” of damage specialists to make well-informed, consistent damage estimates of larger events and train local offices on how to handle the smaller stuff.

    With these processes in place, the U.S. wouldn’t need a billion-dollar disasters program anymore. We’d have reliable information on all the disasters.

    John Nielsen-Gammon receives funding from the National Oceanic and Atmospheric Administration and the State of Texas.

    William Baule receives funding from NOAA, the State of Texas, and the Austin Community Foundation.

    ref. Storm damage costs are often a mystery – that’s a problem for understanding extreme weather risk – https://theconversation.com/storm-damage-costs-are-often-a-mystery-thats-a-problem-for-understanding-extreme-weather-risk-257105


  • MIL-OSI Global: Reproducibility may be the key idea students need to balance trust in evidence with healthy skepticism

    Source: The Conversation – USA – By Sarah R. Supp, Associate Professor of Data Analytics, Denison University

    Reproducing results can increase trust in scientific studies. Huntstock via Getty Images

    Many people have been there.

    The dinner party is going well until someone decides to introduce a controversial topic. In today’s world, that could be anything from vaccines to government budget cuts to immigration policy. Conversation starts to get heated. Finally, someone announces with great authority that a scientific study supports their position. This causes the discussion to come to an abrupt halt because the dinner guests disagree on their belief in scientific evidence. Some may believe science always speaks the truth, some may think science can never be trusted, and others may disagree on which studies with contradicting claims are “right.”

    How can the dinner party – or society – move beyond this kind of impasse? In today’s world of misinformation and disinformation, healthy skepticism is essential. At the same time, much scientific work is rigorous and trustworthy. How do you reach a healthy balance between trust and skepticism? How can researchers increase the transparency of their work to make it possible to evaluate how much confidence the public should have in any particular study?

    As teachers and scholars, we see these problems in our own classrooms and in our students – and they are mirrored in society.

    The concept of reproducibility may offer important answers to these questions.

    Reproducibility is what it sounds like: reproducing results. In some ways, reproducibility is like a well-written recipe, such as a recipe for an award-winning cake at the county fair. To help others reproduce their cake, the proud prizewinner must clearly document the ingredients used and then describe each step of the process by which the ingredients were transformed into a cake. If others can follow the directions and come up with a cake of the same quality, then the recipe is reproducible.

    Think of the English scholar who claims that Shakespeare did not author a play that has historically been attributed to him. A critical reader will want to know exactly how they arrived at that conclusion. What is the evidence? How was it chosen and interpreted? By parsing the analysis step by step, reproducibility allows a critical reader to gauge the strength of any kind of argument.

    We are a group of researchers and professors from a wide range of disciplines who came together to discuss how we use reproducibility in our teaching and research.

    Based on our expertise and the students we encounter, we collectively see a need for higher-education students to learn about reproducibility in their classes, across all majors. It has the potential to benefit students and, ultimately, to enhance the quality of public discourse.

    The foundation of credibility

    Reproducibility has always been a foundation of good science because it allows researchers to scrutinize each other’s studies for rigor and credibility and expand upon prior work to make new discoveries. Researchers are increasingly paying attention to reproducibility in the natural sciences, such as physics and medicine, and in the social sciences, such as economics and environmental studies. Even researchers in the humanities, such as history and philosophy, are concerned with reproducibility in studies involving analysis of texts and evidence, especially with digital and computational methods. Increased interest in transparency and accessibility has followed the rising importance of computer algorithms and numerical analysis in research. This work should be reproducible, but it often remains opaque.

    Broadly, research is reproducible if it answers the question: “How do you know?” – such that another researcher could theoretically repeat the study and produce consistent results.

    Reproducible research is explicit about the materials and methods that were used in a study to make discoveries and come to conclusions. Materials include everything from scientific instruments such as a tensiometer measuring soil moisture to surveys asking people about their daily diet. They also include digital data such as spreadsheets, digitized historic texts, satellite images and more. Methods include how researchers make observations and analyze data.

    To reproduce a social science study, for example, we would ask: What is the central question or hypothesis? Who was in the study? How many individuals were included? What were they asked? After data was collected, how was it cleaned and prepared for analysis? How exactly was the analysis run?

    Proper documentation of all these steps, plus making available the original data from the study, allows other scientists to redo the research, evaluate the decisions made during the process of gathering and analyzing information, and assess the credibility of the findings.

    This short video, made by the National Academies, explains the key concepts in reproducing scientific findings and notes ways the process can be improved.

    Over the past 20 years, the need for reproducibility has become increasingly important. Scientists have discovered that some published studies are too poorly documented for others to repeat, lack verified data sources, are questionably designed, or are even fraudulent.

    Putting reproducibility to work: An example

    A highly contentious, retracted study from 1998 linked the measles, mumps and rubella (MMR) vaccine and autism. Scientists and journalists used their understanding of reproducibility to discover the flaws in the study.

    The study’s central question was not about vaccines; rather, it aimed to explore a possible relationship between colitis – an inflammation of the large intestine – and developmental disorders. The authors explicitly wrote, “We did not prove an association between measles, mumps, and rubella vaccine and the syndrome described.”

    The study observed just 12 patients who were referred to the authors’ gastroenterology clinic and had histories of recent behavioral disorders, including autism. This sample of children is simply too small and too selective to support definitive conclusions.

    In this study, the researchers translated children’s medical charts into summary tables for comparison. When a journalist attempted to reproduce the published data tables from the children’s medical histories, they found pervasive inconsistencies.

    Reproducibility allows for corrections in research. The article was published in a respected journal, but it lacked transparency with regard to patient recruitment, data analysis and conflicts of interest. Whereas traditional peer review involves critical evaluation of a manuscript, reproducibility also opens the door to evaluating the underlying data and methods. When independent researchers attempted to reproduce this study, they found deep flaws. The article was retracted by the journal and by most of its authors. Independent research teams conducted more robust studies, finding no relationship between vaccines and autism.

    Each research discipline has its own set of best practices for achieving reproducibility. Disciplines in which researchers use computational or statistical analysis require sharing the data and software code for reproducing studies. In other disciplines, researchers interpret nonnumerical qualities of data sources such as interviews, historical texts, social media content and more. These disciplines are working to develop standards for sharing their data and research designs for reproducibility. Across disciplines, the core principles are the same: transparency of the evidence and arguments by which researchers arrived at their conclusions.

    Reproducibility in the classroom

    Colleges and universities are uniquely situated to promote reproducibility in research and public conversations. Critical thinking, effective communication and intellectual integrity, staples of higher-education mission statements, are all served by reproducibility.

    Teaching faculty at colleges and universities have started taking some important steps toward incorporating reproducibility into a wide range of undergraduate and graduate courses. These include assignments to replicate existing studies, training in reproducible methods to conduct and document original research, preregistration of hypotheses and analysis plans, and tools to facilitate open collaboration among peers. A number of initiatives to develop and disseminate resources for teaching reproducibility have been launched.

    Despite some progress, reproducibility still needs a central place in higher education. It can be integrated into any course in which students weigh evidence, read published literature to make claims, or learn to conduct their own research. This change is urgently needed to train the next generation of researchers, but that is not the only reason.

    Reproducibility is fundamental to constructing and communicating claims based on evidence. Through a reproducibility lens, students evaluate claims in published studies as contingent on the transparency and soundness of the evidence and analysis on which the claims are based. When faculty teach reproducibility as a core expectation from the beginning of a curriculum, they encourage students to internalize its principles in how they conduct their own research and engage with the research published by others.

    Institutions of higher education already prioritize cultivating engaged, literate and critical citizens capable of solving the world’s most challenging contemporary problems. Teaching reproducibility equips students, and members of the public, with the skills they need to critically analyze claims in published research, in the media and even at dinner parties.

    Also contributing to this article are participants in the 2024 Reproducibility and Replicability in the Liberal Arts workshop, funded by the Alliance to Advance Liberal Arts Colleges (AALAC) [in alphabetical order]: Ben Gebre-Medhin (Department of Sociology and Anthropology, Mount Holyoke College), Xavier Haro-Carrión (Department of Geography, Macalester College), Emmanuel Kaparakis (Quantitative Analysis Center, Wesleyan University), Scott LaCombe (Statistical and Data Sciences, Smith College), Matthew Lavin (Data Analytics Program, Denison University), Joseph J. Merry (Sociology Department, Furman University), Laurie Tupper (Department of Mathematics and Statistics, Mount Holyoke College).

    Sarah Supp receives funding from the National Science Foundation, awards #1915913, #2120609, and #2227298.

    Joseph Holler receives funding from the National Science Foundation, award #2049837.

    Peter Kedron receives funding from the National Science Foundation, award #2049837 and from Esri.

    Richard Ball has received funding from the Alfred P. Sloan Foundation and the United Kingdom Reproducibility Network.

    Anne M. Nurse and Nicholas J. Horton do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Reproducibility may be the key idea students need to balance trust in evidence with healthy skepticism – https://theconversation.com/reproducibility-may-be-the-key-idea-students-need-to-balance-trust-in-evidence-with-healthy-skepticism-251771

    MIL OSI – Global Reports

  • MIL-OSI Global: How your electric bill may be paying for big data centers’ energy use

    Source: The Conversation – USA – By Ari Peskoe, Lecturer on Law, Harvard University

    Your power bill may be hiding something. photoschmidt/iStock/Getty Images Plus

    In the race to develop artificial intelligence, large technology companies such as Google and Meta are trying to secure massive amounts of electricity to power new data centers. Electric utilities see the prospect of earning large profits by providing electricity to these power-hungry facilities and are competing for their business by offering discounts not available to average consumers.

    In our paper Extracting Profits from the Public, we explain how utilities are forcing regular ratepayers to pay for the discounts enjoyed by some of the nation’s largest companies and identify ways policymakers can limit the costs to the public.

    Shifting costs

    In much of the U.S., utilities are monopolists. Within their service territories, they are the only companies allowed to deliver electricity to consumers. To fund their operations, utilities split the costs of maintaining and expanding their systems among all ratepayers – homeowners, businesses, warehouses, factories and anyone else who uses electricity.

    Historically, a utility expanded its system to meet growing demand for electricity from new factories, businesses and homes. To pay for its expansion – new power plants, new transmission lines and other equipment – the utility would propose to raise electricity rates by different amounts for various types of consumers.

    Public utility commissions are state agencies charged with ensuring that the public gets a fair deal. These commissions monitor how much money the utility spends to provide electric service and how its costs are shared among various types of ratepayers, including residential, commercial and industrial consumers. Ultimately, the public utility commission is supposed to approve any rate increases based on its assessment of what’s fair to consumers.

    Splitting the utility’s costs among all consumers made perfect sense when population growth and economic development across the economy stimulated the need for new infrastructure. But today, in many utility service territories, most of the projected growth in electricity demand is due to new data centers.

    Here’s the problem for consumers: To meet data center demand, utilities are building new power plants and power lines that are needed only because of data center growth. If state regulators allow utilities to follow the standard approach of splitting the costs of new infrastructure among all consumers, the public will end up paying to supply data centers with all that power.

    An artist’s rendering of a proposed Meta data center in Richland Parish, La.
    Meta via Facebook

    A big price tag

    One particularly acute example is in Louisiana. A Meta data center under development in the northeastern corner of the state is projected to use, by our calculations, twice as much energy as the city of New Orleans.

    Entergy, the regional monopoly utility, is proposing to build more than US$3 billion worth of new gas-fired power plants and delivery infrastructure to meet the data center’s energy demand. Rather than billing Meta directly for these costs, Entergy is proposing to include the costs in rates paid by all customers.

    Entergy claims its contract with Meta will cover some portion of the $3 billion price tag and that will mitigate any increases in consumers’ bills. But Entergy has asked state regulators to keep key terms of the contract secret, and only a redacted version of its application is available online.

    The public has no idea how much it might pay if the commission approves the contract. And if the Meta data center ends up using much less power than the company anticipates, the public does not know whether it would be on the hook to pay higher electricity rates for longer periods to guarantee Entergy a profit.

    The electronics in data centers consume large amounts of electricity.
    RJ Sangosti/MediaNews Group/The Denver Post via Getty Images

    Secret agreements

    Our research, reviewing nearly 50 public utility commission proceedings about data centers’ power needs across 10 states, uncovered dozens of secretive contracts between utilities and data centers. Unlike Louisiana, most states require utilities to submit to the public utility commission their one-off deals with data centers, but they allow utilities to conceal the pricing terms from the public.

    In normal rate-review cases, numerous parties advocate for their interests in a public proceeding, including members of the public, industry groups and the utility itself. But as our paper finds, utility commission reviews of data center contracts are based on confidential utility filings that are inaccessible to the general public. Few, if any, outsiders participate, and as a result the commission often hears only the utility’s version of the deal.

    Because the pricing terms are secret, it is impossible to know whether the price a utility is offering a data center is too low to cover the utility’s costs of providing power to that data center, which would mean that the public is subsidizing the deal. Utilities, moreover, have a long history of exploiting their monopolies to shift costs to the public, including through secret contracts.

    Electric utilities also charge customers for the costs of building and maintaining transmission networks.
    Jay L. Clendenin/Getty Images

    Other public costs

    Our paper also explores other ways that the public pays for data center energy costs. For instance, many high-voltage interstate transmission projects, which connect large power plants to local delivery systems, are developed through regional planning processes run by numerous utilities. These alliances have complex rules for splitting the costs of new transmission lines and equipment among their utility members.

    Once a utility is charged its share, it spreads the costs of new transmission projects among its local ratepayers. Because some regions are building new transmission capacity to accommodate data centers, our analysis finds that the public has been forced to pay billions of dollars for data center growth.

    Data center energy costs can also be shifted when data centers connect directly to existing power plants. Under what are called “co-location” deals, the power plant stops selling energy to the wider public and just sells to the data center. With less supply in the overall market, prices go up and the public faces higher bills as a result.

    Many state legislatures are noticing these problems and working to figure out how to address them. Several recent bills would set new terms and conditions for future data center deals that could help protect the public from data center energy costs.

    Ari Peskoe is the Director of the Electricity Law Initiative at the Harvard Law School Environmental and Energy Law Program (EELP). EELP receives funding from philanthropic foundations that support the clean energy transition.

    Eliza Martin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How your electric bill may be paying for big data centers’ energy use – https://theconversation.com/how-your-electric-bill-may-be-paying-for-big-data-centers-energy-use-257794


  • MIL-OSI Global: 100 years ago, the Supreme Court made a landmark ruling on parents’ rights in education – today, another case raises new questions

    Source: The Conversation – USA – By Charles J. Russo, Joseph Panzer Chair in Education and Research Professor of Law, University of Dayton

    A selection of books that are part of the Supreme Court case Mahmoud v. Taylor are pictured on April 15, 2025, in Washington. AP Photo/Pablo Martinez Monsivais

    A century ago, the Supreme Court handed down one of its most important cases about education. On June 1, 1925, the court struck down an Oregon statute requiring all students to attend public school – a law critics argued was meant to limit faith-based schools, at a time when anti-Catholic bias was still common in parts of the United States.

    The majority opinion in Pierce v. Society of Sisters of the Holy Name of Jesus and Mary included a now-famous dictum about parents’ rights to shape their children’s upbringing. According to the court, “the child is not the mere creature of the state; those who nurture him and direct his destiny have the right, coupled with the high duty, to recognize and prepare him for additional obligations.”

    Soon, the Supreme Court is expected to release another decision around parental beliefs and education: Mahmoud v. Taylor. The plaintiffs are parents who want to excuse their children from public school lessons involving storybooks with LGBTQ+ characters – lessons they assert contradict their religious beliefs.

    As someone who teaches education law, I believe this is perhaps the court’s most significant case on parental rights since Pierce. Mahmoud raises questions not only about religious freedom, but also about educators’ ability to determine curricula, and public education in a pluralistic society.

    Picture-book debate

    Controversy arose during the 2022-23 school year in Montgomery County, Maryland’s largest school district, when officials approved various storybooks with LGBTQ+-inclusive themes to be incorporated into the English language-arts curriculum for preschool and elementary students.

    Some parents challenged the materials, including “Pride Puppy!”, a picture book the board later removed from use. Originally approved for preschool and pre-K, the story portrays a family whose puppy gets lost at an LGBTQ+ Pride parade, devoting a page to each letter of the alphabet. At the end of the book, a long “search and find” list of words for children to go back and look for in the pictures of the parade includes “[drag] queen” and “king,” “leather” and “lip ring.”

    Other materials for older children included stories about same-sex marriage, a transgender child and nonbinary bathroom signs.

    Parents who objected to the use of these materials on religious grounds sought to excuse their children from lessons using them. The parents basically argued that requiring their children to participate compelled or coerced them to go against their families’ religious beliefs.

    A group of parents protest in Rockville, Md., on June 27, 2023, in an effort to opt out of books that feature LGBTQ+ characters in Montgomery County schools.
    Sarah L. Voisin/The Washington Post via Getty Images

    Initially, officials agreed to allow opt-outs for elementary schoolers whose parents objected to the materials. However, a day later they changed their minds. Since then, school officials have cited concerns about absenteeism, the feasibility of accommodating opt-out requests, and a desire to avoid stigmatizing LGBTQ+ students or families as reasons for their policy.

    A group of Muslim, Orthodox Christian and Catholic families challenged the board’s refusal to excuse their children from lessons using the disputed materials.

    The federal trial court, however, rejected the parents’ claim that having no opt-outs violated their right to due process.

    Parents appealed, and the 4th Circuit affirmed in favor of the school board 2-1. The court added that officials had not violated the parents’ First Amendment rights to freely exercise their faith. “There’s no evidence at present that the Board’s decision not to permit opt-outs compels the Parents or their children to change their religious beliefs or conduct, either at school or elsewhere,” the panel concluded.

    The dissenting judge stridently countered. Officials violated the parents’ free exercise rights by forcing them “to make a choice,” he wrote, between “either adher[ing] to their faith, or receiv[ing] a free public education for their children.” He also noted that the board’s opt-out policy was not neutral toward religion, because under Maryland regulations, children may be excused from sex-ed lessons.

    In January 2025, the Supreme Court agreed to hear the parents’ appeal, addressing whether the schools are burdening parents’ free-exercise rights.

    Court record

    In their brief to the Supreme Court and oral arguments, the parents cited Wisconsin v. Yoder, a Supreme Court ruling from 1972. The court found that Amish parents did not have to send their children to school after the eighth grade, which the families argued would violate their religious beliefs. Amish communities descend from Anabaptist Christians who fled persecution in Europe and emphasize living simply, eschewing many modern technologies.

    In Yoder, the justices agreed with the parents that their children received all the education they needed in their home communities. Under the First Amendment, parents have the right “to guide the religious future and education of their children,” the majority wrote, a matter “established beyond debate.”

    During oral arguments for Mahmoud in April 2025, some justices briefly discussed another precedent: the Supreme Court’s 1943 judgment in West Virginia State Board of Education v. Barnette, resolved at the height of U.S. involvement in World War II. Here, three parents who were Jehovah’s Witnesses refused to have their children participate in public schools’ flag salute and Pledge of Allegiance because they viewed it as a form of idolatry contrary to their religious beliefs. Others objected to the salute as “being too much like Hitler’s.”

    The court reasoned that educators could not compel students to participate, because forcing children – or anyone – to engage in activities inconsistent with their beliefs is contrary to their First Amendment rights to the free exercise of religion and freedom of speech.

    Viewed together, these cases highlight how the court has granted parents significant leeway to exempt their children from educational activities inconsistent with their religious beliefs.

    Questions at court

    During oral arguments, a majority of justices appeared to support the parents’ request to excuse children from lessons involving the books about LGBTQ+ characters.

    The board’s attorney argued that students did not have to agree with the books’ messages, simply to participate in the lesson. Being exposed to an idea “does not burden free exercise,” he said.

    Protesters in support of LGBTQ+ rights and against book bans outside the U.S. Supreme Court building on April 22, 2025, the day the court heard arguments in Mahmoud v. Taylor.
    Anna Moneymaker/Getty Images

    Chief Justice John Roberts, however, queried whether it is realistic for 5-year-olds to understand that distinction. He asked, “Do you want to say you don’t have to follow the teacher’s instructions, you don’t have to agree with the teacher? I mean, that may be a more dangerous message than some of the other things.”

    Other conservative justices also appeared skeptical of the idea that the lessons were merely exposing young children to ideas, but not instilling moral lessons. The storybooks do not simply explain that some people believe something and others do not, Justice Amy Coney Barrett suggested; they inform students that “this is the right view of the world.” Similarly, Justice Neil Gorsuch remarked that telling students that “some people think X, and X is wrong and hurtful and negative” is “more than exposure.”

    “What is the big deal about allowing them to opt out of this?” Justice Samuel Alito asked.

    Conversely, Justice Elena Kagan acknowledged that parents’ concerns were “serious,” but wondered how to draw limits on opt-out policies. Did the parents’ argument suggest that anytime “a religious person confronts anything in a classroom that conflicts with her religious beliefs or her parents’ that – that the parent can then demand an opt-out?”

    Justice Sonia Sotomayor pressed the plaintiffs’ attorney on whether “the mere exposure to things that you object to” really counts as coercion. And Justice Ketanji Brown Jackson questioned why, even if opt-outs are not allowed, public schools teaching “something that the parent disagrees with” is coercive, given that homeschooling and private schools are legal.

    Mahmoud raises challenging questions about curricular content, parental control and free exercise of religion – questions the court will hopefully resolve. A ruling is expected in June or early July 2025.

    Charles J. Russo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. 100 years ago, the Supreme Court made a landmark ruling on parents’ rights in education – today, another case raises new questions – https://theconversation.com/100-years-ago-the-supreme-court-made-a-landmark-ruling-on-parents-rights-in-education-today-another-case-raises-new-questions-257876


  • MIL-OSI Global: Why the global tax system needs fixing – podcast

    Source: The Conversation – UK – By Mend Mariwany, Producer, The Conversation Weekly Podcast, The Conversation

    Cagkan Sayin/Shutterstock

    For decades, multinational corporations have used sophisticated strategies to shift profits away from where they do business. As a result, countries around the world lose an estimated US$500 billion annually in unpaid taxes, with developing nations hit particularly hard.

    In the first of two episodes for The Conversation Weekly podcast called The 15% solution, we explore how companies have exploited loopholes in the global tax system. The episode features insights from Annette Alstadsæter, director of the Centre for Tax Research at the Norwegian University of Life Sciences, and Tarcisio Diniz Magalhaes, a professor of tax law at the University of Antwerp in Belgium.

    The problem goes beyond clever accounting. Our international tax rules were built for an industrial age where companies were physically present where they operated. But today’s tech giants can generate billions in revenue from users around the world, without having a single employee or office there, leaving those nations unable to tax those profits at all.

    In 2021, after years of international negotiations, the Organisation for Economic Co-operation and Development unveiled a global tax deal designed to address tax avoidance through a minimum corporate tax rate of 15%. But will this new framework actually work? And what happens when major economies refuse to participate?

    Across two episodes, The 15% solution explores why a new global tax regime is needed, whether it can fix a broken system, and what’s at stake if it fails. Part two will be published on June 6.


    This episode of The Conversation Weekly was written and produced by Mend Mariwany. Gemma Ware is the executive producer. Mixing and sound design by Eloise Stevens and theme music by Neeta Sarl.

    Newsclips in this episode from NBC News, France24, BBC News, DW News and TRT World.

    Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here. A transcript of this episode is available on Apple Podcasts.

    Tarcísio Diniz Magalhães has received funding from the University of Antwerp Research Fund, Flanders Research Foundation, Social Sciences and Humanities Research Council in Canada and the Ford Foundation. He is a member of the Antwerp Tax Academy and DigiTax Centre of Excellence and is lead professor on International Taxation, Working Group on Tax Reform, ACMinas – Commercial and Business Association of Minas Gerais. Annette Alstadsæter is the Director of Skatteforsk – Centre for Tax Research which collaborates with the EU Tax Observatory on the Atlas of the Offshore World.

    ref. Why the global tax system needs fixing – podcast – https://theconversation.com/why-the-global-tax-system-needs-fixing-podcast-257672


  • MIL-OSI Global: The secret to Ukraine’s battlefield successes against Russia – it knows wars are never won in the past

    Source: The Conversation – Global Perspectives – By Matthew Sussex, Associate Professor (Adj), Griffith Asia Institute; and Fellow, Strategic and Defence Studies Centre, Australian National University

    The iconoclastic American general Douglas MacArthur once said that “wars are never won in the past”.

    That sentiment certainly seemed to ring true following Ukraine’s recent audacious attack on Russia’s strategic bomber fleet, using small, cheap drones housed in wooden pods and transported near Russian airfields in trucks.

    The synchronised operation targeted Russian Air Force planes as far away as Irkutsk – more than 5,000 kilometres from Ukraine. Early reports suggest around a third of Russia’s long-range bombers were either destroyed or badly damaged. Russian military bloggers have put the estimated losses lower, but agree the attack was catastrophic for the Russian Air Force, which has struggled to adapt to Ukrainian tactics.

    This particular attack was reportedly 18 months in the making. Keeping it secret was an extraordinary feat. Notably, Kyiv did not inform the United States that the attack was in the offing. The Ukrainians judged – perhaps understandably – that sharing intelligence on their plans risked word reaching the Kremlin in relatively short order.

    Ukraine’s success once again demonstrates that its armed forces and intelligence services are the modern masters of battlefield innovation and operational security.

    Finding new solutions

    Western military planners have been carefully studying Ukraine’s successes ever since its forces managed to blunt Russia’s initial onslaught deep into its territory in early 2022, and then launched a stunning counteroffensive that drove the Russian invaders back towards their original starting positions.

    There have been other lessons, too, about how the apparently weak can stand up to the strong. These include:

    • attacks on Russian President Vladimir Putin’s vanity project, the Kerch Bridge, linking the Russian mainland to occupied Crimea (the last assault occurred just days ago)

    • the relentless targeting of Russia’s oil and gas infrastructure with drones

    • attacks against targets in Moscow to remind the Russian populace about the war, and

    • its incursion into the Kursk region, which saw Ukrainian forces capture around 1,000 square kilometres of Russian territory.

    On each occasion, Western defence analysts have questioned the wisdom of Kyiv’s moves.

    Why invade Russia using your best troops when Moscow’s forces continue laying waste to cities in Ukraine?

    Why hit Russia’s energy infrastructure if it doesn’t markedly impede the battlefield mobility of Russian forces?

    And why attack symbolic targets like bridges when it could provoke Putin into dangerous “escalation”?

    The answer lies at the heart of effective innovation during wartime. Ukraine’s defence and security planners have read their missions – and their best achievable outcomes – far more accurately than conventional wisdom allowed.

    Above all, they have focused on winning the war they are in, rather than those of the past. This means:

    • using technological advancements to force the Russians to change their tactics

    • shaping the information environment to promote their narratives and keep vital Western aid flowing, and

    • deploying surprise attacks not just as ways to boost public morale, but also to impose disproportionate costs on the Russian state.

    The impact of Ukraine’s drone attack

    In doing so, Ukraine has had an eye for strategic effects. As the smaller nation reliant on international support, this has been the only logical choice.

    Putin has been prepared to commit a virtually inexhaustible supply of expendable cannon fodder to continue his country’s war ad infinitum. Russia has typically won its wars this way – by attrition – albeit at a tremendous human and material cost.

    That said, Ukraine’s most recent surprise attack does not change the overall contours of the war. The only person with the ability to end it is Putin himself.

    That’s why Ukraine is putting as much pressure as possible on his regime, as well as domestic and international perceptions of it. It is key to Ukraine’s theory of victory.

    This is also why the latest drone attack is so significant. Russia needs its long-range bomber fleet, not just to fire conventional cruise missiles at Ukrainian civilian and infrastructure targets, but as aerial delivery systems for its strategic nuclear arsenal.

    The destruction of even a small portion of Russia’s deterrence capability has the potential to affect its nuclear strategy – a strategy Moscow has increasingly relied on to threaten the West.

    A second impact of the attack is psychological. The drone attacks are more likely to enrage Putin than bring him to the bargaining table. However, they reinforce to the Russian military that there are few places – even on its own soil – where its air force can operate with impunity.

    The surprise attacks also provide a shot in the arm domestically, reminding Ukrainians they remain very much in the fight.

    Finally, the drone attacks send a signal to Western leaders. US President Donald Trump and Vice President JD Vance, for instance, have gone to great lengths to tell the world that Ukraine is weak and has “no cards”. This action shows Kyiv does indeed have some powerful cards to play.

    That may, of course, backfire: after all, Trump is acutely sensitive to being made to look a fool. He may look unkindly on resuming military aid to Ukraine after being shown up for claiming Ukrainian President Volodymyr Zelensky would be forced to capitulate without US support.

    But Trump’s own hubris has already made him look foolish. His regular claims that a peace deal is just weeks away have gone beyond wishful thinking and are now monotonous.

    Unsurprisingly, Trump’s reluctance to put anything approaching serious pressure on Putin has merely incentivised the Russian leader to string the process along.

    Indeed, Putin’s insistence on a maximalist victory, requiring Ukrainian demobilisation and disarmament without any security guarantees for Kyiv, is not diplomacy at all. It is merely the reiteration of the same unworkable demands he has made since even before Russia’s full-scale invasion in February 2022.

    However, Ukraine’s ability to smuggle drones undetected onto an opponent’s territory, and then unleash them all together, will pose headaches for Ukraine’s friends, as well as its enemies.

    That’s because it makes domestic intelligence and policing part of any effective defence posture. It is a contingency democracies will have to plan for just as much as authoritarian regimes, which are also learning from Ukraine’s lessons.

    In other words, while the attack has shown up Russia’s domestic security services for failing to uncover the plan, Western security elites, as well as authoritarian ones, will now be wondering whether their own security apparatuses would be up to the job.

    The drone strikes will also likely lead to questions about how useful it is to invest in high-end and extraordinarily expensive weapons systems when they are vulnerable to such cheap attacks. The Security Service of Ukraine estimates the damage cost Russia US$7 billion (A$10.9 billion). Ukraine’s drones, by comparison, cost a couple of thousand dollars each.

    At the very least, coming up with a suitable response to those challenges will require significant thought and effort. But as Ukraine has repeatedly shown us, you can’t win wars in the past.

    Matthew Sussex has received funding from the Australian Research Council, the Atlantic Council, the Fulbright Foundation, the Carnegie Foundation, the Lowy Institute and various Australian government departments and agencies.

    ref. The secret to Ukraine’s battlefield successes against Russia – it knows wars are never won in the past – https://theconversation.com/the-secret-to-ukraines-battlefield-successes-against-russia-it-knows-wars-are-never-won-in-the-past-258172


  • MIL-OSI Global: Unprecedented heat in the North Atlantic Ocean kickstarted Europe’s hellish 2023 summer. Now we know what caused it

    Source: The Conversation – Global Perspectives – By Matthew England, Scientia Professor and Deputy Director of the ARC Australian Centre for Excellence in Antarctic Science, UNSW Sydney

    Westend61/Getty Images

    In June 2023, a record-breaking marine heatwave swept across the North Atlantic Ocean, smashing previous temperature records.

    Soon after, deadly heatwaves broke out across large areas of Europe, and torrential rains and flash flooding devastated parts of Spain and Eastern Europe. That year Switzerland lost more than 4% of its total glacier volume, and severe bushfires broke out around the Mediterranean.

    It wasn’t just Europe that was impacted. The coral reefs of the Caribbean were bleaching under severe heat stress. And hurricanes, fuelled by ocean heat, intensified into disasters. For example, Hurricane Idalia hit Florida in August 2023 – causing 12 deaths and an estimated US$3.6 billion in damages.

    Today, in a paper published in Nature, we uncover what drove this unprecedented marine heatwave.

    A strange discovery

    In a strange twist to the global warming story, there is a region of the North Atlantic Ocean to the southeast of Greenland that has been cooling over the last 50 to 100 years.

    This so-called “cold blob” or “warming hole” has been linked to the weakening of what’s known as the Atlantic Meridional Overturning Circulation – a system of ocean currents that conveys warm water from the equator towards the poles.

    During July 2023 we met as a team to analyse this cold blob – how deep it reaches and how robust it is as a measure of the strength of the Atlantic overturning circulation – when it became clear there was a strong reversal of the historical cooling trend. The cold blob had warmed to 2°C above average.

    But was that a sign the overturning circulation had been reinvigorated? Or was something else going on?

    A layered story

    It soon became clear the anomalous warm temperatures southeast of Greenland were part of an unprecedented marine heatwave that had developed across much of the North Atlantic Ocean. By July, basin-averaged warming in the North Atlantic reached 1.4°C above normal, almost double the previous record set in 2010.

    To uncover what was behind these record-breaking temperatures, we combined estimates of the atmospheric conditions that prevailed during the heatwave, such as winds and cloud cover, with ocean observations and model simulations.

    We were especially interested in understanding what was happening in the mixed upper layer of water of the ocean, which is strongly affected by the atmosphere.

    Distinct from the deeper layer of cold water, the ocean’s surface mixed layer warms as it’s exposed to more sunlight during spring and summer. But the rate at which this warming happens depends on its thickness. If it’s thick, it will warm more gradually; if it’s thin, rapid warming can ensue.

    During summer the thickness of this surface mixed layer is largely set by winds. Winds churn up the surface ocean, and the stronger they are, the deeper the mixing penetrates: strong winds create a thick upper layer, while weak winds generate a shallower one.
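    The link between layer thickness and warming rate can be sketched with a back-of-envelope heat budget (this is my own illustration, not a calculation from the study; the flux value and seawater properties are assumed, typical figures):

```python
# Back-of-envelope sketch of mixed-layer warming (an illustration, not the
# study's calculation). For a well-mixed surface layer of depth h, the
# temperature rise under a constant net heat flux Q over time t is
#   dT = Q * t / (rho * c_p * h)
# so a thinner layer warms proportionally faster.

RHO = 1025.0  # typical seawater density, kg/m^3 (assumed value)
CP = 3990.0   # typical seawater specific heat, J/(kg K) (assumed value)

def warming(q_wm2: float, days: float, depth_m: float) -> float:
    """Temperature increase (K) of a mixed layer of the given depth."""
    return q_wm2 * days * 86400 / (RHO * CP * depth_m)

# One month of a 100 W/m^2 net surface flux (an assumed, plausible figure):
print(round(warming(100, 30, 30), 2))  # usual ~30 m layer: ~2.11 K
print(round(warming(100, 30, 10), 2))  # thin ~10 m layer: ~6.34 K
```

    Under the same heat input, the ten-metre layer warms three times as fast as the 30-metre layer, which is the mechanism the paragraph above describes.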

    Sea surface temperature anomaly (°C) for the month of June 2023, relative to the 1991–2020 reference period.
    Copernicus Climate Change Service/ECMWF

    Thinning at the surface

    Our new research indicates that the primary driver of the marine heatwave was record-breaking weak winds across much of the basin. The winds were at their weakest measured levels during June and July, possibly linked to a developing El Niño in the east Pacific Ocean.

    This led to by far the shallowest upper layer on record. Data from the Argo Program – a global array of nearly 4,000 robotic floats that measure temperature and salinity in the upper 2,000 metres of the ocean – showed that in some areas this layer was only ten metres deep, compared to the usual 20 to 40 metres.

    This caused the sun to heat the thin surface layer far more rapidly than usual.

    In addition to these short term changes in 2023, previous research has shown long-term warming associated with anthropogenic climate change is reducing the ability of winds to mix the upper ocean, causing it to gradually thin.

    We also identified a possible secondary driver of more localised warming during the 2023 marine heatwave: above-average solar radiation hitting the ocean. This could be linked in part with the introduction of new international rules in 2020 to reduce sulfate emissions from ships.

    The aim of these rules was to reduce air pollution from ships’ exhaust systems. But sulfate aerosols also reflect solar radiation and can promote cloud formation. The resultant clearer skies can then lead to more ocean warming.

    Early warning signs

    The extreme 2023 heatwave provides a preview of the future. Marine heatwaves are expected to worsen as Earth continues to warm due to greenhouse gas emissions, with devastating impacts on marine ecosystems such as coral reefs and fisheries. This also means more intense hurricanes – and more intense land-based heatwaves.

    Right now, although the “cold blob” to the southeast of Greenland has returned, parts of the North Atlantic remain significantly warmer than the average. There is a particularly warm patch of water off the coast of the United Kingdom, with temperatures up to 4°C above normal. And this is likely priming Europe for extreme land-based heatwaves this summer.

    Global ocean temperatures on June 2 2025. A patch of abnormally warm water is visible off the southern coast of the United Kingdom.
    National Oceanic and Atmospheric Administration

    To better understand, forecast and plan for the impacts of marine heatwaves, long-term ocean and atmospheric data and models, including those provided by the National Oceanic and Atmospheric Administration (NOAA) in the United States, are crucial. In fact, without these data and models, our new study would not have been possible.

    Despite this, NOAA faces an uncertain future. A proposed budget for the 2026 fiscal year released by the White House last month could mean devastating funding cuts of more than US$1.5 billion – mostly targeting climate-based research and data collection.

    This would be a disaster for monitoring our oceans and climate system, right at a time when change is severe, unprecedented, and proving very costly.

    Matthew England receives funding from the Australian Research Council.

    Alex Sen Gupta receives funding from the Australian Research Council.

    Andrew Kiss receives funding from the Australian Research Council.

    Zhi Li receives funding from the Australian Research Council.

    ref. Unprecedented heat in the North Atlantic Ocean kickstarted Europe’s hellish 2023 summer. Now we know what caused it – https://theconversation.com/unprecedented-heat-in-the-north-atlantic-ocean-kickstarted-europes-hellish-2023-summer-now-we-know-what-caused-it-258061

    MIL OSI – Global Reports

  • MIL-OSI Global: Getting away with it … sort of. How a dictator and a fugitive Nazi advanced international human rights law

    Source: The Conversation – Global Perspectives – By Olivera Simic, Associate Professor in Law, Griffith University

    Pinochet and Rauff? They were alike. Each had two faces. One gentle, the other hard. They were joined.

    And they both got away with it … Sort of.

    Philippe Sands loves to tell stories. A master of historical non-fiction, he has become known for his unique blend of deeply personal, legal and historical narratives, which weave together incredible coincidences with moving stories of human courage in the face of mass atrocities and horror.

    Sands is a leading practitioner of international law, a professor at University College London, an author, a playwright, and the recipient of numerous literary awards. He is also someone whose family was murdered in the vortex of the Holocaust in Ukraine.

    With his previous two books, East West Street: On the Origins of Genocide and Crimes Against Humanity (2016) and The Ratline: Love, Lies and Justice on the Trail of a Nazi Fugitive (2020), he demonstrated his unique skill in presenting complex legal cases to avid readers.

    His latest book, 38 Londres Street: On Impunity, Pinochet in England and a Nazi in Patagonia, rounds out the trilogy.

    If it weren’t based on facts, one might think it was a brilliantly crafted thriller.


    Review: 38 Londres Street: On Impunity, Pinochet in England and a Nazi in Patagonia – Philippe Sands (Weidenfeld & Nicolson)


    38 Londres Street weaves together several narratives, but at its heart is the story of the legal attempts to end impunity for two accused criminals. One is Chilean dictator Augusto Pinochet. The other is Walther Rauff, a former SS officer who fled to South America and allegedly worked with Pinochet’s Secret Intelligence Service.

    Sands brings these two men into a single narrative to highlight the legal struggle against impunity for mass atrocities, though he never loses sight of the victims and their human stories of suffering, courage and persistence.

    These were people whose lives were abruptly and violently taken. Sands includes many of their names and tragic fates in his book. He informs his readers that the Cementerio Sara Braun in Punta Arenas, Chile, has a memorial bearing the names of Pinochet’s many victims. He clearly wants these individuals never to be forgotten.

    Universal jurisdiction and the Pinochet precedent

    The building at 38 Londres Street in Santiago was once a site of pain. At this secret interrogation centre, one of many across Santiago and the rest of Chile, Pinochet’s agents imprisoned, tortured, executed and disappeared tens of thousands of people deemed leftists, socialists, communists or “other undesirables”.

    Pinochet came to power on September 11, 1973, overthrowing the democratically elected socialist government of President Salvador Allende in a military coup. He would rule Chile with an iron fist until 1990.

    Chile’s youth became the targets of his murderous regime. Sands notes that most victims were between 21 and 30 years old. The majority of them were workers; the rest mainly comprised academics, professionals and students. The atrocities were committed with impunity.

    Like all dictators, Pinochet believed himself untouchable. But in October 1998, while visiting the UK, he was arrested in London. Spanish judge Baltasar Garzón was seeking Pinochet’s extradition to Spain in order to try him for human rights abuses.

    Garzón was acting under the then-controversial legal principle of universal jurisdiction, which allows courts in one country to prosecute grave human rights violations committed outside its borders, regardless of the nationality of the accused.

    Never before had a former head of state of one country been arrested in, and by, another country for committing international crimes.

    Sands would become involved in one of the most famous cases in international law since the Nuremberg trials more than 50 years earlier. Pinochet’s lawyers offered Sands an opportunity to participate in the case, arguing for the former dictator’s immunity as a former head of state. Sands’ wife threatened to divorce him if he accepted.

    He declined the offer. Instead, Sands represented Human Rights Watch when the Pinochet case was considered by the Law Lords.

    Pinochet had been indicted for crimes against humanity and genocide. At issue was the question of whether Pinochet, as a former head of state, had immunity before the English courts for acts committed in another country while he was in office. Should there be a legal protection for former dictators?

    The proceedings in London were novel and remarkable, writes Sands, because this was an open legal question when Pinochet was arrested. His arrest raised an unprecedented issue: was there an exception to the rule of immunity for a former head of state when a crime in international law was involved? And did the exception apply before a national court, rather than an international one?


    Many believed Pinochet’s immunity should be lifted and extradition proceedings should go ahead, so that he could answer for the deaths of Spanish nationals and others. If that did not happen, it was argued, the travesty of justice would signal that any dictator could get away with genocide. As Sands writes, immunity and impunity often go hand in hand.

    In this landmark case, Pinochet was stripped of the immunity from prosecution he had enjoyed as a former president. He was ordered to stand trial on charges of human rights abuses.

    For the next 16 months, he remained in the UK, awaiting extradition to Spain. But it never happened. The initial judgement on immunity was quashed, due to concerns about possible bias of one of the judges. The case returned to square one. New hearings took place.

    In January 2000, the UK eventually decided not to proceed with extradition, claiming that Pinochet was too ill to stand trial and that “it would not be fair”. He was allowed to return to Chile as a free man, thanks to medical doctors rather than lawyers.

    Political leaders in Europe generally welcomed the ruling. Margaret Thatcher, former British prime minister and Pinochet’s longstanding ally, was adamant that the lengthy legal wrangle had been a waste of public money. Seemingly agitated, she said in front of the cameras:

    Senator Pinochet was a staunch friend of Britain throughout the Falklands War. His reward from this government was to be held prisoner for 16 months. In the meantime, his health has been broken, his reputation tarnished, and vast funds of public money have been squandered on a political vendetta.

    Subsequent attempts to prosecute Pinochet in Chile were unsuccessful. He died in 2006 at the age of 91, without ever being tried for the human rights abuses that occurred while he was in power. Retributive justice, in the end, was not served. But Pinochet’s case opened the gates for efforts to bring other former and serving heads of state to justice.

    Today, 38 Londres Street serves as a place of national memory, where visitors can walk through its halls and learn about its dark past.

    The Nazi who invented the gas chambers

    Running parallel with Pinochet’s story is that of Nazi fugitive Walther Rauff.

    Rauff invented the mobile gas chambers that were precursors to the gas chambers in Nazi concentration camps. At the end of the second world war, he escaped to South America, settling in Chile. Germany made numerous attempts to have Rauff extradited to face charges, but the Chilean government refused these demands. He spent his days in the backwaters of Patagonia, running a king-crab cannery business.

    Sands travels to Patagonia and meets people who remember Rauff, whose identity seems to have been common knowledge among his neighbours and co-workers: “everyone knew rumours and stories of his past”; they knew about “the gas vans” and that he “once killed many people”. But no one seemed to be bothered. They describe Rauff as “cultivated and kind”. To many of Sands’ interlocutors, the stories about Rauff “were long ago and far away”.

    While dealing with the failed attempts for his extradition, Rauff put his energies into “harvesting crabs, making sure the tins were packed tight, [and] managing the workers”. He continued to do so, enjoying the company of his dog Bobby, when Pinochet became Chile’s new leader.

    Pinochet was an old friend. Sands records that the two men met in the 1950s in Quito, Ecuador, where Rauff was staying, having fled an Italian prison camp at the end of the war. The men shared a contempt for communism and an affinity for German culture. Pinochet encouraged Rauff to move to Chile.

    Rauff delighted in Pinochet’s murderous regime. Sands tells us that Pinochet used Rauff’s “expertise” to help with the murder and disappearance of thousands of people. But the controversy over whether Rauff worked for the Chilean military, becoming “chief advisor” to its intelligence services, or perhaps even its “head”, remains unresolved. Definitive and provable evidence of the assistance Rauff may have given to Pinochet was never obtained.

    Holding dictators to account

    One of the many coincidences Sands stumbles upon is that Rauff lived in Punta Arenas in southern Chile on a street called “Jugoslavija”, named after the country where I was born, which disintegrated in the 1990s in a brutal civil war marked by mass atrocities and genocide.

    Former Yugoslavian and Serbian president Slobodan Milošević would become the first-ever serving head of state to be charged with international crimes and extradited to an international court.

    Milošević was extradited to The Hague in 2001, on the orders of the Serbian government, after being indicted for war crimes committed in Kosovo and Croatia and for genocide in Bosnia and Herzegovina. His trial is widely hailed as a landmark moment in the development of international criminal law, though he died in his cell before it ended, dying “innocent” like his counterparts Pinochet and Rauff.

    Slobodan Milošević in The Hague, July 2001.
    Robert Goddyn, via Wikimedia Commons, CC BY

    In 38 Londres Street, Sands brings to light the behind-the-scenes struggles to hold Pinochet and Rauff accountable. The book explores the intricacies and politics of international law. Despite its bitter ending, Pinochet’s case remains one of the most far-reaching and important in the field of human rights. It caused other countries to reflect on their own legal immunities.

    As a researcher and academic, I found the book significant because it also offers insight into what it takes to conduct such expansive archival and qualitative research. Over several years, “in between work and life”, Sands travels to different corners of the globe and speaks to informants from all walks of life, including descendants of the perpetrators. He visits the sites of the events he recounts, most of them places marked by pain. He seeks to see and feel a past that still lingers.

    His method requires stamina, passion and unwavering diligence. His strong commitment to neutrality, decency and impartiality makes him stand out not only as a highly skilled writer, but a survivor who continues to unpack and share the legacy of the Holocaust. There is much to respect and learn from in Sands’ account, not least about the intricacies of writing a compelling story.

    Holding dictators to account is hard. Pinochet and Rauff deprived victims of the retributive justice they needed and deserved. Yet justice and reparations have many different meanings. They can be symbolic too, and still profoundly meaningful to victims. As one of the survivors of Pinochet’s regime replied to Sands when asked whether he believed his case was one of total impunity: “Not quite total […] Dawson [an island detention camp] has been recognised as a site of national memory, a protected monument, and that means something.”

    Pinochet and Rauff were never convicted, but they were not free. Pinochet spent years under house arrest, bitter and devastated, unable to walk the streets. Rauff lived in constant fear of being arrested and extradited. They were both haunted. This, after all, may have brought some satisfaction to the victims.

    Sands was once asked: “Do you believe in justice?” He replied: “Sort of.” Sands comes to understand that justice is “uneven in its delivery”. He has learned “to temper expectations”. Maybe we all need to learn that skill from him too. Ultimately, justice remains a work in progress, just like the process of learning from a dark past.

    Olivera Simic does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Getting away with it … sort of. How a dictator and a fugitive Nazi advanced international human rights law – https://theconversation.com/getting-away-with-it-sort-of-how-a-dictator-and-a-fugitive-nazi-advanced-international-human-rights-law-257241

    MIL OSI – Global Reports

  • MIL-OSI Global: Taylor Swift now owns all the music she has ever made: a copyright expert breaks it down

    Source: The Conversation – Global Perspectives – By Wellett Potter, Lecturer in Law, University of New England

    On Friday, Taylor Swift announced she now owns all the music she has ever made. This reported US$360 million acquisition includes all the master recordings to her first six albums, music videos, concert films, album art, photos and unreleased material.

    The purchase of this catalogue from private equity firm Shamrock Capital is a profoundly happy event for Swift. She has expressed how personal and difficult it was not to own these works.

    In her announcement, Swift acknowledged that it was due to her fans purchasing her rerecorded music (known as “Taylor’s Version”) and the financial success of the record-breaking Eras Tour which enabled this purchase.

    The story behind “Taylor’s Version” and why she didn’t own the catalogue to her original six albums is due to copyright, music industry practices and contractual terms. Let’s break it down.

    What’s in a music catalogue?

    When it comes to valuing a music catalogue, it largely comes down to two types of rights: master rights and publishing rights.

    Master rights are rights pertaining to the ownership of the actual sound recordings – the final recorded version. These are called “masters” because they’re the original source from which all copies are made.

    Under traditional music industry contracts, record labels usually hold ownership of masters and associated materials. This can be music videos, tour videos, unreleased works, photographs and album covers.

    Through licensing, the label controls the use of this material and retains the majority of the royalties. In return, the label provides the artist with financial backing, recording resources and marketing.

    Publishing rights, on the other hand, relate to the underlying composition – the music and lyrics. The rights to music publishing usually belong to the songwriter, regardless of who performs the song.

    Publishing rights govern how a song can be used and who earns royalties from that use. For example, a song may be played on a streaming platform, covered in a live performance or licensed for a commercial or film.

    Swift’s contracts

    Swift was 15 years old when she was signed to Scott Borchetta’s Big Machine record label.

    The agreed contractual terms were typical of the music industry. In exchange for the financial support to make, record and promote her subsequent albums and tours, Big Machine held the rights to Swift’s master recordings and associated materials in her first six albums. Her relationship with the label lasted 13 years.

    As a songwriter, Swift retained separate publishing rights to her songs (the music and lyrics) from her first six albums, which she licensed through Sony/ATV Music Publishing.

    In 2018, Swift was reportedly offered a deal to re-sign with Big Machine, under which she would “earn” back the rights to one original album for each new one she produced.

    Swift did not renew her contract and moved to Republic Records (Universal Music Group), which allows her to own her masters. She also moved to Universal Music Publishing Group for her music publishing.

    Subsequent sales

    In June 2019, Big Machine’s catalogue was sold to Scooter Braun’s Ithaca Holdings, for a reported US$330 million, with US$140 million representing Swift’s catalogue.

    Swift described this as her “worst case scenario”, as she had a tumultuous history of alleged bullying from Braun. She also alleged she found out about the acquisition at the time it was announced to the world, without being given the opportunity to purchase her catalogue.

    Throughout 2019 and 2020 it was reported she attempted to regain ownership, but negotiations fell through.

    In October 2020, Swift’s catalogue was sold to Shamrock Capital, a private equity firm, for an estimated US$300+ million. In recent years, private equity firms have been purchasing music catalogues as profitable long-term financial assets, rather than for artistic or cultural reasons.

    These events led Swift to rerecord her first six albums, branding them “Taylor’s Version”. Four have been released.

    Swift rerecorded her albums, branding them ‘Taylor’s Version’.
    melissamn/Shutterstock

    She was able to create new versions of her songs, with their own intellectual property rights attached.

    As owner of these new masters, she has control over where these songs are used, and she receives a greater portion of the income from the streams, downloads and licensing.

    The decision was enormously successful. Mobilised through social media, Swift’s fans prioritised purchasing “Taylor’s Version” over the original masters, diluting the value of the originals.

    Successful futures

    Swift has repeatedly emphasised the need for artists to retain control over their work and to receive fair compensation. In a 2020 interview she said she believes artists should always own their master records and licence them back to the label for a limited period.

    This would mean the label could monetise, control and manage the recordings for a certain time, but the artist retains the ownership. They eventually gain back full control, rather than handing over permanent rights to the label.

    Swift’s experience has sparked conversations within the industry, prompting emerging artists to approach record labels with caution and to advocate for fairer deals and ownership rights. Olivia Rodrigo, for example, negotiated her contract with Swift’s saga in mind as a cautionary tale.

    Purchasing her catalogue and masters gives Swift autonomy over how the rights to all of her music are used. Her fans are likely to continue to support her and purchase both the originals and “Taylor’s Version”, so the value of her original albums may rise.

    And, in the long-run, her new acquisition will likely make her much wealthier.

    Wellett Potter does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Taylor Swift now owns all the music she has ever made: a copyright expert breaks it down – https://theconversation.com/taylor-swift-now-owns-all-the-music-she-has-ever-made-a-copyright-expert-breaks-it-down-257965

    MIL OSI – Global Reports

  • MIL-OSI Global: How did humans evolve such rotten genetics?

    Source: The Conversation – UK – By Laurence D. Hurst, Professor of Evolutionary Genetics at The Milner Centre for Evolution, University of Bath

    MaksEvs/Shutterstock

    To Shakespeare’s Hamlet we humans are “the paragon of animals”. But recent advances in genetics are suggesting that humans are far from being evolution’s greatest achievement.

    For example, humans have an exceptionally high proportion of fertilised eggs that have the wrong number of chromosomes and one of the highest rates of harmful genetic mutation.

    In my new book The Evolution of Imperfection I suggest that two features of our biology explain why our genetics are in such a poor state. First, we evolved a lot of our human features when our populations were small and second, we feed our young across a placenta.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Our reproduction is notoriously risky for both mother and embryo. For every child born, another two fertilised eggs never make it.

    Most human early embryos have chromosomal problems. For older mothers, these embryos tend to have too many or too few chromosomes due to problems in the process of making eggs with just one copy of each chromosome. Most chromosomally abnormal embryos don’t make it to week six so are never a recognised pregnancy.

    About 15% of recognised pregnancies spontaneously miscarry, usually before week 12, rising to 65% in women over 40. About half of miscarriages are because of chromosomal issues.

    Other mammals have similar chromosome-number problems but with an error rate of about 1% per chromosome. Cows should have 30 chromosomes in sperm or egg but about 30% of their fertilised eggs have odd chromosome numbers.

    Humans with 23 chromosomes should have about 23% of fertilised eggs with the wrong number of chromosomes but our rate is higher in part because we presently reproduce late and chromosomal errors escalate with maternal age.
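    The per-chromosome arithmetic above can be sketched numerically (a back-of-envelope illustration of my own, assuming each chromosome mis-segregates independently with roughly 1% probability; the article’s figures use the simpler n × 1% approximation):

```python
# Rough illustration of the chromosome-error arithmetic (an assumed
# independent-errors model, not the article's exact calculation):
# each chromosome ends up with the wrong copy number with probability p.

def error_fraction(n_chromosomes: int, p: float = 0.01) -> float:
    """Chance a fertilised egg has at least one chromosome-number error."""
    return 1 - (1 - p) ** n_chromosomes

# Humans: 23 chromosomes per egg or sperm
print(f"human: {error_fraction(23):.1%}")  # ~20.6%, near the quoted ~23%
# Cows: 30 chromosomes per egg or sperm
print(f"cow:   {error_fraction(30):.1%}")  # ~26.0%, near the quoted ~30%
```

    For small p, 1 − (1 − p)ⁿ is approximately n × p, which recovers the simpler ~23% and ~30% figures quoted above; the observed human rate is higher still because errors rise steeply with maternal age.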

    Survive that, and gestational diabetes and high blood pressure issues await, most notably pre-eclampsia, which is potentially lethal to mother and child and affects about 5% of pregnancies. Pre-eclampsia is unique to humans.

    Historically, up until about 1800, childbirth was remarkably dangerous with about 1% maternal mortality risk, largely owing to pre-eclampsia, bleeding and infection. In Japanese macaques by contrast, despite offspring also having a large head, maternal mortality isn’t seen. Advances in maternal care have seen current UK maternal mortality rates plummet to 0.01%.

    Many of these problems are contingent on the placenta. Compare us to a kiwi bird that loads its large egg with resources and sits on it, even if it is dead: time and energy wasted. In mammals, if the embryo is not viable, the mother may not even know she had conceived.

    The high rate of chromosomal issues in our early embryos is a mammalian trait, connected to the fact that early termination of a pregnancy lessens the costs: less time is wasted holding onto a dead embryo, and the mother does not give up resources that a viable embryo needs to grow into a baby.

    But reduced costs are not enough to explain why chromosomal problems are so common in mammals.

    During the process of making a fertilisable egg with one copy of each chromosome, a sister cell called the polar body is produced. Its job is to discard half of the chromosomes. It can “pay”, in evolutionary terms, for a chromosome to stay behind in the soon-to-be-fertilised egg when it should go to the polar body.

    The resulting error forces redirection of resources to viable offspring. This can explain why chromosomal errors are mostly maternal, and why other vertebrates, which cannot redirect the saved energy, don’t seem to have embryonic chromosome problems.

    Our problems with gestational diabetes are a consequence of foetuses releasing chemicals from the placenta into the mother’s blood to keep glucose available. The problems with pre-eclampsia are associated with malfunctioning placentas, in part owing to maternal immune rejection of the foetus.

    Regular unprotected sex can protect women against pre-eclampsia by helping the mother become used to paternal proteins. The fact that pre-eclampsia is human-specific may be related to our exceptionally invasive placenta that burrows deep into the uterine lining, possibly required to build our unusually large brains.

    Our other peculiarities are predicted by the most influential evolutionary theory of the last 50 years, the nearly-neutral theory. It states that natural selection is less efficient when a species has few individuals.

    A slightly harmful mutation can be removed from a population if that population is large, but can increase in frequency, by chance, if the population is small. Most human-specific features evolved when our population size was around 10,000, in Africa, prior to its expansion over the last 20,000 years – minuscule compared to, for example, bacterial populations.

    This explains why we have such a bloated genome. The main job of DNA is to give instructions to our cells about how to make the proteins vital for life.

    That is done by just 1% of our DNA but by 85% of that of the gut-dwelling bacterium Escherichia coli. Some of our DNA is required for other reasons, such as controlling which genes get activated and when. Yet only about 10% of our DNA shows any signs of being useful.

    If you have a small population size, you also have more problems stopping genetic errors like mutations. Although DNA mutations can be beneficial, they are more commonly a curse. They are the basis of genetic diseases, be they complex (such as Crohn’s disease and predispositions to cancer) or owing to single gene effects (like cystic fibrosis and Huntington’s disease).

    We have one of the highest mutation rates of all species. Other species with massive populations have mutation rates over three orders of magnitude lower, another prediction of the nearly-neutral theory.

    A consequence of our high mutation rate is that around 5% of us suffer a “rare” genetic disease.

    Modern medicine may help cure our many ailments, but if we can’t do anything about our mutation rate, we will still get ill.

    Laurence D. Hurst is the author of The Evolution of Imperfection, published by Princeton University Press. This was enabled by funding from The Humboldt Foundation and the European Research Council.

    ref. How did humans evolve such rotten genetics? – https://theconversation.com/how-did-humans-evolve-such-rotten-genetics-255473

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump’s Middle East pivot aims to counter China’s rising influence

    Source: The Conversation – UK – By Maria Papageorgiou, Leverhulme Early Career Researcher, School of Geography, Politics, and Sociology, Newcastle University

    The US president, Donald Trump, claimed he was able to secure deals totalling more than US$2 trillion (£1.5 trillion) for the US on his tour of the Gulf states in May. Trump said “there has never been anything like” the amount of jobs and money these agreements will bring to the US.

    However, providing a lift for the US economy wasn’t the only thing on Trump’s mind. China’s influence in the wider Middle East region is growing fast – so much so that it was even able to mediate a detente between bitter regional rivals Saudi Arabia and Iran in 2023.

    Trump’s attempt to strengthen ties with countries in the Middle East is probably also a deliberate attempt to contain China’s growing regional ambitions.

    China has spent the past two decades building up its economic and political relations with the Middle East. In 2020, it replaced the EU as the largest trading partner to the Gulf Cooperation Council, which includes Bahrain, Kuwait, Oman, Qatar, Saudi Arabia and the United Arab Emirates (UAE). Bilateral trade between them was valued at over US$161 billion (£119 billion).


    The Middle East has also become an important partner to China’s sprawling Belt and Road Initiative (BRI). Massive infrastructure projects in the region, such as high-speed railway lines in Saudi Arabia, have provided lucrative opportunities for Chinese companies.

    The total value of Chinese construction and investment deals in the Middle East reached US$39 billion in 2024, the most of any region in the world. That year, the three countries with the highest volume of BRI-related construction contracts and investment were all in the Middle East: Saudi Arabia, Iraq and the UAE.

    China has also strengthened its financial cooperation with Middle Eastern countries, particularly the UAE and Saudi Arabia. As part of China’s efforts to reduce global reliance on the US dollar for trade, it has arranged cross-border trade settlements, currency swap agreements, and is engaging in digital currency collaboration initiatives with these countries.

    American security guarantees have historically fostered an alignment between the Gulf states and the west. The string of agreements Trump signed with countries there reflects an attempt to draw them away from China and back towards Washington’s orbit.

    Countering China

    One of the more significant developments from Trump’s trip was an agreement to deepen US technological cooperation with the UAE, Saudi Arabia and Qatar. The US and UAE announced they would work together to construct the largest AI data centre outside of the US in Abu Dhabi.

    Technology is one of the key areas where China has been trying to assert its influence in the region. Through Beijing’s so-called “Digital Silk Road” initiative, which aims to develop a global digital ecosystem with China at its centre, Chinese firms have secured deals with Middle Eastern countries to provide 5G mobile network technology.

    Chinese tech giants Huawei and Alibaba are also in the process of signing partnerships with telecommunications providers in the region for collaboration and research in cloud computing. These companies have gained traction by aligning closely with national government priorities, such as Saudi Arabia’s initiative to diversify its economy through tech development.

    American companies, including Amazon, Microsoft and Google, have spent years building regional tech ecosystems across the Gulf. Trump is looking to recover this momentum. He was joined in the Middle East by more than 30 leaders of top American companies, who also secured commercial deals with their peers from the Gulf.

    US quantum computing company Quantinuum and Qatari investment firm Al Rabban Capital finalised a joint venture worth up to US$1 billion. The agreement will see investment in quantum technologies and workforce development in the US and Qatar.

    There are two other areas where Trump is trying to cut China off. American companies and Abu Dhabi’s state-run oil firm agreed a US$60 billion energy partnership. China is heavily dependent on the Middle East for energy, with almost half of the oil it uses coming from the region. Greater alignment with the US could hamper Beijing’s ability to secure the resources it needs.

    Trump also signed a raft of defence deals with Qatar and Saudi Arabia. These included a US$1 billion deal for Qatar to acquire drone defence technology from American aerospace conglomerate Raytheon RTX, and a US$142 billion agreement for the Saudis to buy military equipment from US firms.

    These moves underscore Washington’s intention to limit China’s influence in key defence sectors. China is a key player in the global market for commercial and military drones, providing Saudi Arabia and the UAE with a large share of their combat drones.

    One final aspect of Trump’s trip was his brief meeting with Syria’s interim president Ahmed al-Sharaa. Trump signalled possible sanctions relief, which has since come into effect. This constituted more than a diplomatic thaw.

    With China positioning itself as a regional mediator and Russia struggling with a diminished role following the fall of Bashar al-Assad in Syria, the US is looking to reassert itself as the primary power broker in the region.

    Dr Maria (Mary) Papageorgiou receives funding from the Leverhulme Trust.

    ref. Trump’s Middle East pivot aims to counter China’s rising influence – https://theconversation.com/trumps-middle-east-pivot-aims-to-counter-chinas-rising-influence-257366

    MIL OSI – Global Reports

  • MIL-OSI Global: Gen Z and the sustainability paradox: Why ideals and shopping habits don’t always align

    Source: The Conversation – Canada – By Melise Panetta, Lecturer of Marketing in the Lazaridis School of Business and Economics, Wilfrid Laurier University

    Often praised as the ‘sustainability generation,’ Gen Z has been at the forefront of calls for ethical production, environmental accountability and climate-conscious living. (Shutterstock)

    As the summer shopping season kicks off, all eyes are on Gen Z – those born between 1997 and 2012, whose purchasing power wields significant influence over market trends.

    Often lauded as the “sustainability generation,” Gen Z nonetheless shows a complex internal struggle on closer inspection: despite their strong desire for eco-conscious living, many Gen Z consumers find themselves drawn to the allure of fast, affordable, trend-driven consumption.

    This discrepancy between belief and action, known as the “attitude-behaviour gap,” is a defining characteristic of Gen Z consumerism. While it’s not unique to Gen Z, it’s particularly pronounced due to their vocal environmentalism and their immersion in a hyper-consumerist digital world.

    Understanding consumer behaviour at a deeper level means looking past stated preferences and focusing instead on the economic, technological and cultural forces that shape real-world decisions.

    The rise of the eco-conscious Gen Z consumer

    There’s no denying Gen Z’s pronounced environmental awareness compared to other generations.

    Raised in the era of climate crisis and corporate responsibility, they gravitate toward brands that reflect their values. Over 75 per cent say sustainability matters more than brand name, and 81 per cent are willing to pay more for eco-friendly products.

    This isn’t merely performative — Gen Z actively integrates sustainability into their lives. They’re more likely than any other generation to research a brand’s ethics and environmental impact before buying, often using social media to guide decisions.

    More than 70 per cent discover sustainable products via platforms like Instagram and TikTok, fuelling social movements like Who Made My Clothes and supporting businesses like LastObject, a company that uses digital crowdfunding to engage environmentally conscious consumers.

    They’re also behind the rise of the second-hand market, which is expected to hit US$329 billion globally by 2029. With 40 per cent of Gen Z — the highest rate of any age group — shopping resale, platforms like Depop and ThredUp have seen explosive growth.

    Gen Z’s consumer behaviour is also influencing the spending habits of older generations. According to the World Economic Forum, increased spending on sustainable brands by groups like Generation X is being driven, in part, by Gen Z’s values, behaviours and expectations.

    Gen Z’s push for sustainable consumption is shifting the market and everyone in it.

    When values clash with spending habits

    Fast fashion, frictionless e-commerce and the constant churn of social media trends have created a marketplace where sustainable intentions are easily sidelined.

    Viral phenomena like Shein hauls — videos where social media influencers flaunt dozens of ultra-cheap outfits — spotlight the contradiction.

    In the first 19 weeks of 2025 alone, Shein’s app amassed over 54 million downloads, a staggering number that underscores how affordability and instant gratification often win out over sustainability. Built on rapid production and ultra-low prices, Shein’s model encourages frequent, high-volume purchases — the antithesis of the “buy less, buy better” ethos that underpins sustainable consumption.

    And this pattern extends far beyond fashion. The wider consumer landscape rewards speed and low cost at every turn. Gen Z came of age with one-click ordering and next-day delivery — conveniences that are now baseline expectations for shoppers. These days, nearly half of Gen Z consumers prioritize fast shipping, despite its high environmental cost.

    Meanwhile, the social media platforms where they discover new eco-conscious brands are the same ones pushing relentless trend cycles that encourage over-consumption, from gadgets to clothing and lifestyle products.

    Sustainability often comes with a steep price tag, one many young Gen Z consumers simply can’t afford. Brands like Patagonia or Allbirds are aspirational, but in the context of the cost-of-living crisis, fast-fashion giants like Zara, H&M and TJX Companies offer more budget-friendly options.

    Navigating the ‘attitude-behaviour’ gap

    The disconnect between Gen Z’s values and their consumption patterns isn’t about hypocrisy. Rather, it’s about navigating a system where sustainable choices are harder, more expensive and often less visible.

    Gen Z’s struggle shows that living sustainably in a world designed for speed, savings and social validation is an uphill battle — even for the generation most determined to make a difference.

    Bridging this gap demands action on several fronts. For businesses, it means innovating to make sustainable options more affordable and accessible. Transparency in supply chain practices and clear communication about environmental impact are also key to building trust with consumers.

    For Gen Z themselves, transparency about the true cost of consumption is vital. Fostering critical thinking about marketing messages and the impact of social media trends can empower them to make choices that more consistently align with their values.

    As the summer unfolds and consumer spending rises, the choices made by Gen Z will be a significant indicator of our collective path towards a more sustainable economy. Their ideals are a powerful force for change, but translating those ideals into consistent action remains the critical challenge.

    Melise Panetta does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Gen Z and the sustainability paradox: Why ideals and shopping habits don’t always align – https://theconversation.com/gen-z-and-the-sustainability-paradox-why-ideals-and-shopping-habits-dont-always-align-257601

    MIL OSI – Global Reports

  • MIL-OSI Global: A First Nations power authority could transform electricity generation for Indigenous nations

    Source: The Conversation – Canada – By Christina E. Hoicka, Canada Research Chair in Urban Planning for Climate Change, Associate Professor of Geography and Civil Engineering, University of Victoria

    First Nations across British Columbia have developed renewable electricity projects for decades. Yet they’ve experienced significant barriers to implementing, owning and managing their own electricity supply. That’s because there have been few procurement policies in place that require their involvement.

    While municipalities are allowed to own and operate electricity utilities in B.C., First Nations are not. The Declaration on the Rights of Indigenous Peoples Act (DRIPA) in B.C. requires that First Nations are provided with opportunities for economic development without discrimination.

    Many First Nations in B.C. view the development of renewable electricity projects on their lands (like hydro power, solar panels, wind turbines and transmission lines) as a way to achieve social, environmental and economic goals that are important to their community.

    These goals may include powering buildings in the community, creating economic development and local jobs, earning revenue, improving access to affordable and reliable electricity or using less diesel.

    Our new study shares the story of a coalition of First Nations and organizations that advocated for changes to electricity regulations and laws to give Indigenous communities more control to develop renewable electricity projects. Our interviews with knowledge holders from 14 First Nations offer insight into motivations behind their calls for regulatory changes.

    The coalition includes the Clean Energy Association of B.C., New Relationship Trust, Pembina Institute, First Nations Power Authority, Nuu-Chah-Nulth Tribal Council, and the First Nations Clean Energy Working Group.

    Models for a First Nations power authority

    Almost all electricity customers in B.C. are served by BC Hydro, the electric utility owned by the provincial government.

    The coalition argues that applying DRIPA to the electricity sector should allow First Nations to form a First Nations power authority. Such an organization would provide them with control over the development of electricity infrastructure that aligns with their values and would also help B.C. meet its greenhouse gas reduction targets.

    In the Re-Imagining Social Energy Transitions CoLaboratory (ReSET CoLab) at the University of Victoria, we analyzed regulatory documents from the B.C. Utilities Commission, and advocacy documents and presentations for discussion developed by the coalition.

    We identified six proposed First Nations power authority (Indigenous Utility) models:

    A capacity building point-of-contact model streamlines the development of renewable electricity projects to sell power to the provincial utility. For example, the First Nations Power Authority in Saskatchewan was formed for this purpose by SaskPower.

    This would be the most conformative model. It would provide vital networks and connections to First Nations while allowing BC Hydro and the British Columbia Utilities Commission to maintain full control over the electricity sector.

    In the second model, called a “put” contract, a B.C. First Nations Power Authority represents First Nations wishing to develop renewable electricity projects. Whenever the province needs to build new electricity generation projects to meet growing electricity demand, a portion of the new generation is developed by the First Nations authority.

    In the third model, First Nations build and operate electricity transmission and distribution lines to allow remote industrial facilities and communities to connect to the electricity grid. This is called “Industrial Interconnection.”

    For example, the Wataynikaneyap Power Transmission line in Ontario is a 1,800-kilometre line that provides an electricity grid connection for 17 previously remote nations. Twenty-four First Nations own 51 per cent of the line, while private investors own 49 per cent.

    In the fourth model, the B.C. First Nation Power Authority acts as the designated body for various opportunities in the electricity sector, such as the development of electricity transmission, distribution, generation or customer services. This model is referred to as “local or regional ‘ticket’ opportunities.”

    Fifth, the First Nation Power Authority develops renewable electricity projects and distributes electricity from these projects to customers as a retailer, or under an agreement through the BC Hydro electricity grid. For example, Nova Scotia Power’s Green Choice program procures renewable electricity from independent power producers to supply to electricity customers.

    Sixth, a new utility is formed in B.C., owned by First Nations, that owns and operates electricity generation, transmission and distribution services and offers standard customer services in a specific region of B.C. (called a “Regional Vertically-Integrated Power Authority”).

    Most of these models would require changes to regulations. The sixth and most transformative model would provide First Nations with full decision-making control over electricity generation, transmission and distribution. It would also give them the ability to sell to customers and require extensive changes in electricity regulation.

    Improving living standards

    First Nations knowledge-holders told us that a lack of reliable power, high electricity rates, lack of control over projects on their traditional lands and the need for resilience in the face of climate events were motivations for taking electricity planning into their own hands.

    They also expressed that varied factors motivate community interest in renewable energy: improving the quality of life for community members; financial independence; mitigating climate change; protecting the environment; reducing diesel use and providing stable and safe power for current and future generations.

    First Nations are already seeking to capitalize on the benefits of renewable energy by developing their own projects within the current regulatory system.

    Most of those we spoke to see a First Nations power authority in B.C. as a means to provide opportunities for economic development without discrimination — and to achieve self-determination, self-reliance and reconciliation by addressing the root causes of some of the colonial injustices they face by obtaining control over the electricity sector on their lands.

    This article was co-authored by David Benton, an adopted member and Clean Energy Project Lead of Gitga’at First Nation and Kayla Klym, a BSc student in Geography at the University of Victoria.

    For this research project, Dr. Christina E. Hoicka received funding from the Natural Resources Canada Clean Energy for Rural and Remote Communities Program (CERRC), Capacity Building Stream. The research was conducted in partnership with the Clean Energy Association of British Columbia and the New Relationship Trust. This work was also supported by the New Frontiers in Research Fund Global NFRFG-2020-00339 and the Canada Research Chair Secretariat CRC-2020-00055.

    Anna Berka is affiliated with Community Power Agency, a not-for-profit workers co-operative working to ensure a fair and accessible energy transition for all.

    Adam J. Regier and Sara Chitsaz do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. A First Nations power authority could transform electricity generation for Indigenous nations – https://theconversation.com/a-first-nations-power-authority-could-transform-electricity-generation-for-indigenous-nations-254982

    MIL OSI – Global Reports

  • MIL-OSI Global: We mapped 18,000 children’s playgrounds and revealed inequality across England

    Source: The Conversation – UK – By Paul Brindley, Senior Lecturer, Department of Landscape Architecture, University of Sheffield

    Daxiao Productions / shutterstock

    Outside the home, public playgrounds are the most common places for children to play, and every child’s fundamental right to play is even recognised in a UN convention. Despite this, there has been very limited research exploring inequality in the provision of playgrounds.

    To help address this, we have analysed data from almost 34,000 playgrounds in England – the largest national dataset on playgrounds yet. In particular, we looked at England’s largest 534 settlements with populations over 15,000 and mapped patterns from the 18,077 children’s playgrounds within them.

    We found substantial inequalities. For example, of two places broadly comparable in population size, one might have five times as many children per playground as the other.


    With the exception of London, deprived settlements in England tend to have fewer, smaller and more distant playgrounds – a serious social justice issue. In London, however, the relationship was the opposite, with deprived areas tending to have more playgrounds in close proximity.

    There are many different ways to measure the provision of playgrounds, but we used 21 indicators across three domains: the number of playgrounds per child, the size of playgrounds, and their closeness to where children live.

    This ensured our results were not heavily influenced by a single variable, since some settlements excelled in one domain but were lacking in others.

    Winners and losers

    The graph below shows children’s playground provision for major settlements in England:

    More deprived settlements tend to have fewer, smaller playgrounds.
    Brindley & Martin (2025)

    Places on the left of the graph have smaller playgrounds, while in places towards the bottom of the graph kids have to travel further to a playground. Circle size indicates how many playgrounds there are per child.

    Here’s the same graph for boroughs of London, where the relationship is reversed:

    In London, kids in more deprived inner city boroughs have better access to playgrounds.
    Brindley & Martin (2025)

    These are the top settlements in each category:


    Brindley & Martin (2025), CC BY-SA

    And these are the bottom:


    Brindley & Martin (2025), CC BY-SA

    Comparing major settlements, Liverpool has nearly five times more children under 16 per playground than Norwich (1,104 compared to 236). In London, the difference is even greater: the borough of Redbridge has nearly eight times more children per playground than Islington (1,567 v 204).

    In terms of playground size, Leicester dedicates four times more of its urban area to playgrounds than Leeds (0.30% v 0.07%), while Norwich offers seven times more playground space per child than Birmingham (4.2 square metres v 0.7). In London, Islington has five times the playground area of Barnet (0.64% of total urban area v 0.13%), and three times more space per child than Redbridge (2.8 square metres v 0.9).

    Liverpool has the lowest percentage of children within 100, 300 and 500 metres of playgrounds, with Coventry having the lowest percentage at 800 metres. In contrast, Southampton, Plymouth and Reading have the highest percentages of children living close to playgrounds.

    In London, Redbridge and Kingston upon Thames had the lowest percentages of children living close to playgrounds, while Islington, Tower Hamlets and Hackney had the highest levels of provision. These distance measures will be heavily influenced by population density, especially in London (Redbridge is suburban; Islington is inner city). However, patterns outside of London appear more complex.

    Different solutions for different places

    Places like Norwich, Islington and Milton Keynes fared well across all three domains, while places like Liverpool, Leeds or Stockton-on-Tees did comparably poorly in all three. But most areas fell somewhere in between.

    For example, places such as Portsmouth or Nottingham have good scores for distance but have poor provision in terms of size. They would, therefore, benefit most from expanding existing playgrounds.

    In contrast, playgrounds in Brighton and Lincoln are bigger but tend to be further away. Places like these would benefit from a few new strategically positioned playgrounds to fill in the gaps.

    As with any dataset, there are constraints. In future, we want to incorporate additional data on accessibility for disabled children, and we recognise that playgrounds are just one element across the wider spectrum of places where children play. For instance, children in outer London boroughs with few playgrounds might live nearer to woods or sports fields.

    We also acknowledge that we have no data to monitor the quality of playgrounds. Is a 100 square metre playground filled with interesting and safe features, or does it hold a single worn-out slide surrounded by fencing? Ultimately, playground use rather than provision is the most important measure. After all, a bad playground will not make children more active.

    Following the launch of the first all-party parliamentary group on play in May 2025, our work is helping campaigners lobby for a “play sufficiency duty” in England (similar to Scotland and Wales) and a new national play strategy.

    Our hope is that, as people become more aware of the problem, we’ll see new policies and better placemaking for children. Already we are working with Play England (England’s national charity for play) on a “digital dashboard” capable of supporting councils to plan more strategically for play in their local areas.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. We mapped 18,000 children’s playgrounds and revealed inequality across England – https://theconversation.com/we-mapped-18-000-childrens-playgrounds-and-revealed-inequality-across-england-252239

    MIL OSI – Global Reports

  • MIL-OSI Global: Dry mouth, bad breath and tooth damage: the effects Ozempic and Wegovy can have on your mouth

    Source: The Conversation – UK – By Adam Taylor, Professor of Anatomy, Lancaster University

    Bad breath is a commonly reported side-effect of weight loss drugs. antoniodiaz/ Shutterstock

    Ozempic and Wegovy have been hailed as wonder drugs when it comes to weight loss. But as these drugs have become more widely used, a number of unintended side-effects have become apparent – affecting the appearance of everything from your butt to your feet.

    “Ozempic face” is another commonly reported consequence of using these popular weight loss drugs: a sunken or hollowed-out appearance the face can take on in people taking them. It can also increase signs of ageing – including lines, wrinkles and sagging skin.

    This happens because the action of semaglutide (the active ingredient in both Ozempic and Wegovy) isn’t localised to act just on the fat in places we don’t want it. Instead, it acts on fat across the whole body – including in the face.

    But it isn’t just the appearance of your face that semaglutide affects. These drugs may also affect the mouth and teeth, too. And these side-effects could potentially lead to lasting damage.


    Dry mouth

    Semaglutide affects the salivary glands in the mouth by reducing saliva production (hyposalivation), which can in turn lead to dry mouth (xerostomia). This means there isn’t enough saliva to keep the mouth wet.

    It isn’t exactly clear why semaglutide has this effect on the salivary glands. But in animal studies of the drug, it appears the drug makes saliva stickier. This means there’s less fluid to moisten the mouth, causing it to dry out.

    GLP-1 receptor agonist drugs (such as semaglutide) can also reduce water intake by affecting areas in the brain responsible for thirst. Low fluid intake further reduces saliva production, and may even cause the saliva to become thick and frothy and the tongue to become sticky.

    Bad breath

    Another unwanted effect commonly reported by semaglutide users is bad breath (halitosis).

    When there’s less saliva flowing through the mouth, this encourages bacteria that contribute to bad breath and the formation of cavities to thrive. These bacterial species include Streptococcus mutans and some strains of Lactobacillus.

    Another species that has been shown to thrive in conditions where saliva is reduced is Porphyromonas gingivalis. This bacterium is a significant contributor to the production of volatile sulphur compounds, which cause the foul odours characteristic of halitosis.

    Another factor that might explain why semaglutide causes bad breath is that reduced saliva production means the tongue isn’t cleaned. This is the same reason your “morning breath” is so bad: we naturally produce less saliva at night, allowing bacteria to grow and produce odours. Case report images show some people taking semaglutide have a “furry”-like or coated appearance to their tongue, indicating a build-up of the bacteria that contribute to bad breath.

    Some people taking the weight loss drug experience a bacterial build-up on their tongue.
    sruilk/ Shutterstock

    Tooth damage

    One of the major side-effects of Ozempic is vomiting. Semaglutide slows how quickly the stomach empties, delaying digestion which can lead to bloating, nausea and vomiting.

    Repeated vomiting can damage the teeth. This is because stomach acid, composed primarily of hydrochloric acid, erodes the enamel of the teeth. The longer vomiting continues, over months and years, the more damage will occur. The back surface of the teeth (palatal surface), closest to the tongue, is most likely to be damaged – and this damage may not be obvious to the sufferer.

    Vomiting also reduces the amount of fluid in the body. When combined with reduced saliva production, this puts the teeth at even greater risk of damage. This is because saliva helps neutralise the acid that causes dental damage.

    Saliva also contributes to the dental pellicle – a thin, protective layer that saliva forms on the surface of the teeth. It’s thickest on the tongue-facing surface of the bottom row of teeth. In people who produce less saliva, the dental pellicle contains fewer mucins – the mucus proteins that help saliva stick to the teeth.

    Reducing the risk of damage

    If you’re taking semaglutide, there are many things you can do to keep your mouth healthy.

    Drinking water regularly during the day can help to keep the oral surfaces from drying out. This helps maintain your natural oral microbiome, which can reduce the risk of an overgrowth of the bacteria that cause bad breath and tooth damage.

    Drinking plenty of water – ideally the recommended daily amount of six to eight glasses – also enables the body to produce the saliva needed to prevent dry mouth. Chewing sugar-free gum is a sensible option too, as it encourages saliva production, and swallowing this saliva keeps the valuable fluid within the body. Gums containing eucalyptus may also help to prevent halitosis.

    There’s some evidence that probiotics may help to alleviate bad breath, at least in the short term. Taking a probiotic supplement or consuming probiotic-rich foods (such as yoghurt or kefir) may be a good idea.

    Practising good basic oral hygiene – regular tooth brushing, cutting down on acidic foods and sugary drinks, and using a mouthwash – also helps to protect your teeth.

    Women are twice as likely to have side-effects when taking GLP-1 receptor agonists – including gastrointestinal symptoms such as vomiting. This may be due to the sex hormones oestrogen and progesterone, which can alter the gut’s sensitivity. To reduce the risk of vomiting, try eating smaller meals, since the stomach stays fuller for longer while taking semaglutide.

    If you are sick, don’t brush your teeth immediately, as this will spread the stomach acid over the surface of the teeth and increase the risk of damage. Instead, rinse your mouth out with water or mouthwash to dilute the acid, and wait at least 30 minutes before brushing.

    It isn’t clear how long these side effects last. They’ll likely disappear when the medication is stopped, but any damage to the teeth is permanent. Gastrointestinal side-effects can last a few weeks but usually resolve on their own, unless a higher dose is taken.

    Adam Taylor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Dry mouth, bad breath and tooth damage: the effects Ozempic and Wegovy can have on your mouth – https://theconversation.com/dry-mouth-bad-breath-and-tooth-damage-the-effects-ozempic-and-wegovy-can-have-on-your-mouth-257859

    MIL OSI – Global Reports