Category: Academic Analysis

  • MIL-OSI Global: Managing forests and other ecosystems under rising threats requires thinking across wide-ranging scenarios

    Source: The Conversation – USA – By Kyra Clark-Wolf, Research Scientist in Ecological Transformation, University of Colorado Boulder

    Thinking through scenarios allows land managers to prepare for many potential outcomes. Benjamin Slyngstad via USGS

    In Sequoia and Kings Canyon National Parks in California, trees that have persisted through rain and shine for thousands of years are now facing multiple threats triggered by a changing climate.

    Scientists and park managers once thought giant sequoia forests nearly impervious to stressors like wildfire, drought and pests. Yet, even very large trees are proving vulnerable, particularly when those stressors are amplified by rising temperatures and increasing weather extremes.

    The rapid pace of climate change – combined with threats like the spread of invasive species and diseases – can affect ecosystems in ways that defy expectations based on past experiences. As a result, Western forests are transitioning to grasslands or shrublands after unprecedented wildfires. Woody plants are expanding into coastal wetlands. Coral reefs are being lost entirely.

    Nate Stephenson, from the U.S. Geological Survey, talks about the fire damage at Redwood Mountain Grove in the Kings Canyon National Park, Calif., in 2021.
    AP Photo/Gary Kazanjian

    To protect these places, which are valued for their natural beauty and the benefits they provide for recreation, clean water and wildlife, forest and land managers increasingly must anticipate risks they have never seen before. And they must prepare for what those risks will mean for stewardship as ecosystems rapidly transform.

    As ecologists and a climate scientist, we’re helping them figure out how to do that.

    Managing changing ecosystems

    Traditional management approaches focus on maintaining or restoring how ecosystems looked and functioned historically.

    However, that doesn’t always work when ecosystems are subjected to new and rapidly shifting conditions.

    Ecosystems have many moving parts – plants, animals, fungi and microbes; and the soil, air and water in which they live – that interact with one another in complex ways.

    When the climate changes, it’s like shifting the ground on which everything rests. The results can undermine the integrity of the system, leading to ecological changes that are hard to predict.

    To plan for an uncertain future, natural resource managers need to consider many different ways changes in climate and ecosystems could affect their landscapes. Essentially, what scenarios are possible?

    Preparing for multiple possibilities

    At Sequoia and Kings Canyon, park managers were aware that climate change posed some big risks to the iconic trees under their care. More than a decade ago, they undertook a major effort to explore different scenarios that could play out in the future.

    It’s a good thing they did, because some of the more extreme possibilities they imagined happened sooner than expected.

    In 2014, drought in California caused the giant sequoias’ foliage to die back, something never documented before. In 2017, sequoia trees began dying from insect damage. And, in 2020 and 2021, fires burned through sequoia groves, killing thousands of ancient trees.

    While these extreme events came as a surprise to many people, thinking through the possibilities ahead of time meant the park managers had already begun to take steps that proved beneficial. One example was prioritizing prescribed burns to remove undergrowth that could fuel hotter, more destructive fires.

    Insulating wraps protected the giant sequoia General Sherman from a fire in 2021.
    Patrick T. Fallon/AFP via Getty Images

    The key to effective planning is a thoughtful consideration of a suite of strategies that are likely to succeed in the face of many different changes in climate and ecosystems. That involves thinking through wide-ranging potential outcomes to see how different strategies might fare under each scenario – including preparing for catastrophic possibilities, even those considered unlikely.

    For example, prescribed burning may reduce risks from both catastrophic wildfire and drought by reducing the density of plant growth, whereas suppressing all fires could increase those risks in the long run.

    Strategies undertaken today have consequences for decades to come. Managers need to have confidence that they are making good investments when they put limited resources toward actions like forest thinning, invasive species control, buying seeds or replanting trees. Scenarios can help inform those investment choices.

    Constructing credible scenarios of ecological change to inform this type of planning requires considering the most important unknowns. Scenarios look not only at how the climate could change, but also at how complex ecosystems could react and what surprises might lie beyond the horizon.

    Scientists at the North Central Climate Adaptation Science Center are collaborating with managers in the Nebraska Sandhills to develop scenarios of future ecological change under different climate conditions, disturbance events like fires and extreme droughts, and land uses like grazing.
    Photos: T. Walz, M. Lavin, C. Helzer, O. Richmond, NPS (top to bottom)., CC BY

    Key ingredients for crafting ecological scenarios

    To provide some guidance to people tasked with managing these landscapes, we brought together a group of experts in ecology, climate science, and natural resource management from across universities and government agencies.

    We identified three key ingredients for constructing credible ecological scenarios:

    1. Embracing ecological uncertainty: Instead of banking on one “most likely” outcome for ecosystems in a changing climate, managers can better prepare by mapping out multiple possibilities. In Nebraska’s Sandhills, we are exploring how this mostly intact native prairie could transform, with outcomes as divergent as woodlands and open dunes.

    2. Thinking in trajectories: It’s helpful to consider not just the outcomes, but also the potential pathways for getting there. Will ecological changes unfold gradually or all at once? By envisioning different pathways through which ecosystems might respond to climate change and other stressors, natural resource managers can identify critical moments where specific actions, such as removing tree seedlings encroaching into grasslands, can steer ecosystems toward a more desirable future.

    3. Preparing for surprises: Planning for rare disasters or sudden species collapses helps managers respond nimbly when the unexpected strikes, such as a severe drought leading to widespread erosion. Being prepared for abrupt changes and having contingency plans can mean the difference between quickly helping an ecosystem recover and losing it entirely.

    Over the past decade, access to climate model projections through easy-to-use websites has revolutionized resource managers’ ability to explore different scenarios of how the local climate might change.

    What managers are missing today is similar access to ecological model projections and tools that can help them anticipate possible changes in ecosystems. To bridge this gap, we believe the scientific community should prioritize developing ecological projections and decision-support tools that can empower managers to plan for ecological uncertainty with greater confidence and foresight.

    Ecological scenarios don’t eliminate uncertainty, but they can help to navigate it more effectively by identifying strategic actions to manage forests and other ecosystems.

    Kyra Clark-Wolf receives funding from USGS, NSF, and National Park Service. She is affiliated with the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder and the North Central Climate Adaptation Science Center.

    Brian W. Miller receives funding from the U.S. Geological Survey North Central Climate Adaptation Science Center. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

    Imtiaz Rangwala receives funding from USGS, USDA, NOAA, US Forest Service and National Park Service. He is affiliated with the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder, North Central Climate Adaptation Science Center, Western Water Assessment and Boundless In Motion.

    ref. Managing forests and other ecosystems under rising threats requires thinking across wide-ranging scenarios – https://theconversation.com/managing-forests-and-other-ecosystems-under-rising-threats-requires-thinking-across-wide-ranging-scenarios-253842

    MIL OSI – Global Reports

  • MIL-OSI Global: Christianity has long revered saints who would be called ‘transgender’ today

    Source: The Conversation – USA – By Sarah Barringer, Ph.D. Candidate in English, University of Iowa

    Several Republican-led states have restricted transgender rights: Iowa has signed a law removing civil rights protection for transgender people; Wyoming has prohibited state agencies from requiring the use of preferred pronouns; and Alabama recently passed a law that only two sexes would be recognized. Hundreds of bills have been introduced in other state legislatures to curtail trans rights.

    Earlier in the year, several White House executive orders pushed to deny trans identity. One of them, “Eradicating Anti-Christian Bias,” claimed that gender-affirming policies of the Biden administration were “anti-Christian.” It accused the Biden Equal Employment Opportunity Commission of forcing “Christians to affirm radical transgender ideology against their faith.”

    To be clear, not all Christians are anti-trans. And in my research on medieval history and literature, I found evidence of a long history in Christianity of what today could be called “transgender” saints. While such a term did not exist in medieval times, the idea of men living as women, or women living as men, was unquestionably present in the medieval period. Many scholars have suggested that using the modern term transgender creates valuable connections for understanding historical parallels.

    There are at least 34 documented stories of transgender saints’ lives from the early centuries of Christianity. Originally appearing in Latin or Greek, several stories of transgender saints made their way into vernacular languages.

    Transgender saints

    Of the 34 original saints, at least three gained widespread popularity in medieval Europe: St. Eugenia, St. Euphrosyne and St. Marinos. All three were born as women but cut their hair and put on men’s clothes to live as men and join monasteries.

    Eugenia, raised pagan, joined a monastery to learn more about Christianity and later became abbot. Euphrosyne joined a monastery to escape an unwanted suitor and spent the rest of his life there. Marinos, born Marina, decided to renounce womanhood and live with his father at the monastery as a man.

    These were well-read stories. Eugenia’s story appeared in two of the most popular manuscripts of their day – Ælfric’s “Lives of Saints” and “The Golden Legend.” Ælfric was an English abbot who translated Latin saints’ lives into Old English in the 10th century, making them widely available to a lay audience. “The Golden Legend” was written in Latin and compiled in the 13th century; it is part of more than a thousand manuscripts.

    Euphrosyne also appears in Ælfric’s saints’ lives, as well as in other texts in Latin, Middle English, and Old French. Marinos’ story is available in over a dozen manuscripts in at least 10 languages. For those who couldn’t read, Ælfric’s saints’ lives and other manuscripts were read aloud in churches during service on the saint’s day.

    Euphrosyne of Alexandria.
    Anonymous via Wikimedia Commons

    A small church in Paris built in the 10th century was dedicated to Marinos, and relics of his body were supposedly kept in Qannoubine monastery in Lebanon.

    This is all to say, a lot of people were talking about these saints.

    Holy transness

    In the medieval period, saints’ lives were less important as history and more important as morality tales. As a morality tale, the audience was not intended to replicate a saint’s life, but learn to emulate Christian values. Transitioning between male and female becomes a metaphor for transitioning from pagan to Christian, affluence to poverty, worldliness to spirituality. The Catholic Church opposed cross-dressing in laws, liturgical meetings and other writings. However, Christianity honored the holiness of these transgender saints.

    In a 2021 collection of essays about transgender and queer saints in the medieval period, scholars Alicia Spencer-Hall and Blake Gutt argue that medieval Christianity saw transness as holy.

    “Transness is not merely compatible with holiness; transness itself is holy,” they write. Transgender saints had to reject convention in order to live their own authentic lives, just as early Christians had to reject convention in order to live as Christians.

    Literature scholar Rhonda McDaniel explains that in 10th-century England, adopting the Christian values of shunning wealth, militarism and sex made it easier for people to go beyond strict ideas about male and female gender. Instead of defining gender by separate male and female values, all individuals could be defined by the same Christian values.

    Historically and even in contemporary times, gender is associated with specific values and roles, such as assuming that homemaking is for women, or that men are stronger. But adopting these Christian values allowed individuals to transcend such distinctions, especially when they entered monasteries and nunneries.

    According to McDaniel, even cisgender saints like St. Agnes, St. Sebastian and St. George exemplified these values, exhibiting how anyone in the audience could push against gender stereotypes without changing their bodies.

    Agnes’ love of God allowed her to give up the role of wife. When offered love and wealth by men, she rejected them in favor of Christianity. Sebastian and George were powerful Roman men who were expected, as men, to engage in violent militarism. However, both rejected their violent Roman masculinity in favor of Christian pacifism.

    A life worth emulating

    Although most saints’ lives were written primarily as morality tales, the story of Joseph of Schönau was told as both very real and worthy of emulation by the audience. His story is told as a historical account of a life that would be attainable for ordinary Christians.

    In the late 12th century, Joseph, born female, joined a Cistercian monastery in Schönau, Germany. During his deathbed confession, Joseph told his life story, including his pilgrimage to Jerusalem as a child and his difficult journey back to Europe after the death of his father. When he finally returned to his birthplace of Cologne, he entered a monastery as a man in gratitude to God for returning him home safely.

    Despite arguing that Joseph’s life was worth emulating, the first author of Joseph’s story, Engelhard of Langheim, had a complicated relationship with Joseph’s gender. He claimed Joseph was a woman, but regularly used masculine pronouns to describe him.

    Marinos the monk.
    Richard de Montbaston via Wikimedia Commons

    Even though Eugenia, Euphrosyne and Marinos’ stories are told as morality tales, their authors had similarly complicated relationships with their gender. In the case of Eugenia, in one manuscript, the author refers to her with entirely female pronouns, but in another, the scribe slips into male pronouns.

    Marinos and Euphrosyne were also frequently referred to as male. The fact that the authors referred to these characters as male suggests that their transition to masculinity was not only a metaphor, but in some ways just as real as Joseph’s.

    Based on these stories, I argue that Christianity has a transgender history to pull from and many opportunities to embrace transness as an essential part of its values.

    Sarah Barringer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Christianity has long revered saints who would be called ‘transgender’ today – https://theconversation.com/christianity-has-long-revered-saints-who-would-be-called-transgender-today-254769


  • MIL-OSI Global: Pope Leo XIV is the first member of the Order of St. Augustine to be elected pope – but who are the Augustinians?

    Source: The Conversation – USA – By Joanne M. Pierce, Professor Emerita of Religious Studies, College of the Holy Cross

    Pope Leo XIV leaves the Augustinian General House in Rome after a visit on May 13, 2025. AP Photo/Domenico Stinellis

    When Pope Leo XIV was elected pope, the assembled crowd reacted with joy but also with surprise: He was the first pope from the United States, and North America more broadly. Moreover, he was the first member of the Order of St. Augustine to be elected to the papacy.

    Out of all 267 popes, only 51 have been members of religious orders. Pope Francis was elected in 2013 as the first member of the Jesuit order, the Society of Jesus; he was also the first member of any religious order to be chosen in over 150 years.

    As a specialist in medieval Christianity, I am familiar with the origins of many Catholic religious orders, and I was intrigued by the choice of a member of the Order of St. Augustine to follow a Jesuit as pope.

    So, who are the Augustinians?

    Early monks and concern for community

    In antiquity, some Christians chose to lead a more perfect religious life by leaving ordinary society and living together in groups, in the wilderness. They would be led by an older, more experienced person – an abbot. As monks, they followed a set of regulations and guidelines called a “monastic rule.”

    The earliest of these rules, composed about the year 400, is attributed to an influential theologian, later a bishop in North Africa, called St. Augustine of Hippo. The Rule of St. Augustine is a short text that offered monks a firm structure for their daily lives of work and prayer, as well as guidelines on how these rules could be implemented by the abbot in different situations. The rule is both firm and flexible.

    The first chapter stresses the importance of “common life”: It instructs monks to love God and one’s neighbor by living “together in oneness of mind and heart, mutually honoring God in yourselves, whose temples you have become.”

    This is the overriding principle that shapes all later instructions in the Augustinian rule.

    For example, Chapter III deals with how the monks should behave when out in public. They should not go alone, but in a group, and not engage in scandalous behavior – specifically, staring at women.

    If one monk starts staring at a woman, one of the other monks with him should “admonish” him. If he does it again, his companion should tell the abbot before notifying any other witnesses, so that the monk can first try to change his behavior on his own and avoid causing disruption in the community.

    Because of this clarity and flexibility, and its concern for both the community and its individual members, many religious communities in the early Middle Ages adopted the Rule of St. Augustine; formal papal approval was not required at this time.

    Mendicant friars in medieval Europe

    By the end of the 12th century, Western Europe had become much more urbanized.

    In response, a new form of religious life emerged: the mendicant friars. Unlike monks who withdrew from ordinary life, mendicants stressed a life of poverty, spent in travel from town to town to preach and help the poor. They would beg for alms along the way to provide for their own needs.

    The first mendicant orders, like the Franciscans and Dominicans, received papal approval in the early 13th century. Others were organized later.

    A few decades later, several hermits living in the Italian region of Tuscany decided to join together to form a new mendicant order. They chose to follow the Rule of St. Augustine under one superior general; Pope Innocent IV approved the new order as the Order of Hermits of St. Augustine in 1244. Later, in 1254, Pope Alexander IV included other groups of hermits in the order, known as the Grand Union.

    The new order grew and eventually expanded across Western Europe, becoming involved in preaching and other kinds of pastoral work in several countries.

    Early missionaries to modern times

    As European countries began to explore the New World, missionary priests took their place on ships sent from Catholic countries, like Spain and Portugal.

    Augustinians were among these early missionaries, quickly establishing themselves in Latin America, several countries in Africa and parts of Southeast Asia and Oceania, arriving in the Philippines in the 16th century.

    There, they not only ministered to the European crews and colonists, but they also evangelized – preached the Christian gospel – to the native inhabitants of the country.

    Augustinian missionaries started the process of setting up Catholic parishes and, eventually, new dioceses. In time, they founded and taught in seminaries to train native-born men who wanted to join their order.

    It wasn’t until the end of the 18th century that Augustinian friars arrived in the United States. Despite many struggles and setbacks in the 19th century, they established Villanova University in Pennsylvania and other ministries in New York and Massachusetts. Except for two 17th-century missionaries, Augustinian friars didn’t arrive in Canada until the 20th century, when they were sent from the German province of the order to escape financial pressure from the economic depression of the 1920s and political pressure from the Nazis.

    Pope Francis meets with members of the Order of Augustinian Recollects at the Vatican on Oct. 20, 2016.
    L’Osservatore Romano/Pool Photo via AP

    Today, there are some 2,800 Augustinian friars in almost 50 countries worldwide. They serve as pastors, teachers and bishops, and have founded schools, colleges and universities on almost every continent. They are also active in promoting social justice in many places – for example, in North America and Australasia, comprising Australia and parts of South Asia.

    Based on his years as a missionary and as provincial of the entire order worldwide, Leo XIV draws on the rich interpersonal tradition of the Order of St. Augustine. I believe his pontificate will be one marked by his experiential awareness of Catholicism as a genuinely global religion, and his deep concern for the suffering of the marginalized and those crushed by political and economic injustice.

    Joanne M. Pierce does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Pope Leo XIV is the first member of the Order of St. Augustine to be elected pope – but who are the Augustinians? – https://theconversation.com/pope-leo-xiv-is-the-first-member-of-the-order-of-st-augustine-to-be-elected-pope-but-who-are-the-augustinians-257175


  • MIL-OSI Global: Europeans are concerned that the US will withdraw support from NATO. They are right to worry − Americans should, too

    Source: The Conversation – USA – By John Deni, Research Professor of Joint, Interagency, Intergovernmental, and Multinational Security Studies, US Army War College

    American soldiers join 3,000 troops from other NATO member countries in a four-week exercise in Hohenfels, Germany, in March 2025. Sean Gallup/Getty Images

    The United States has long played a leadership role in NATO, the most successful military alliance in history.

    The U.S. and 11 other countries in North America and Europe founded NATO in 1949, following World War II. NATO has since grown its membership to include 32 countries in Europe and North America.

    But now, European leaders and politicians fear the United States has become a less reliable ally, posing major challenges for Europe and, by implication, NATO.

    This concern is not unfounded.

    President Donald Trump has repeatedly spoken of a desire to seize Greenland, which is an autonomous territory of Denmark, a NATO member. He has declared that Canada, another NATO member, should become “the 51st state.” Trump has also sided with Russia at the United Nations and said that the European Union, the political and economic group uniting 27 European countries, was designed to “screw” the U.S.

    Still, Trump – as well as other senior U.S. government officials – has said that the U.S. remains committed to staying in and supporting NATO.

    For decades, both liberal and conservative American politicians have recognized that the U.S. strengthens its own military and economic interests by being a leader in NATO – and by keeping thousands of U.S. troops based in Europe to underwrite its commitment.

    President Donald Trump speaks at a NATO Summit in July 2018 during his first term.
    Sean Gallup/Getty Images

    Understanding NATO

    The U.S., Canada and 10 Western European countries formed NATO nearly 80 years ago as a way to help maintain peace and stability in Europe following World War II. NATO helped European and North American countries bind together and defend themselves against the threat once posed by the Soviet Union, a former communist empire that fell in 1991.

    NATO employs about 2,000 people at its headquarters in Brussels. It does not have its own military troops and relies on its 32 member countries to volunteer their own military forces to conduct operations and other tasks under NATO’s leadership.

    NATO does have its own military command structure, led by an American military officer and including officers from other member countries. This team plans and executes all NATO military operations.

    In peacetime, military forces working with NATO conduct training exercises across Eastern Europe and other places to help reassure allies about the strength of the military coalition – and to deter potential aggressors, like Russia.

    NATO has a relatively small annual budget of around US$3.6 billion. The U.S. and Germany are the largest contributors to this budget, each responsible for funding 16% of NATO’s costs each year.

    Separate from NATO’s annual budget, in 2014, NATO members agreed that each participating country should spend the equivalent of 2% of its gross domestic product on its own national defense. Twenty-two of NATO’s 31 members with military forces were expected to meet that 2% threshold as of April 2025.

    Although NATO is chiefly a military alliance, it has roots in the mutual economic interests of both the U.S. and Europe.

    Europe is the United States’ most important economic partner. Roughly one-quarter of all U.S. trade is with Europe – more than the U.S. has with Canada, China or Mexico.

    Over 2.3 million American jobs are directly tied to producing exports that reach European countries that are part of NATO.

    NATO helps safeguard this mutual economic relationship between the U.S. and Europe. If Russia or another country tries to intimidate, dominate or even invade a European country, this could hurt the American economy. In this way, NATO can be seen as the insurance policy that underwrites the strength and vitality of the American economy.

    The heart of that insurance policy is Article 5, a mutual defense pledge that member countries agree to when they join NATO.

    Article 5 says that an armed attack against one NATO member is considered an attack against the entire alliance. If one NATO member is attacked, all other NATO members must help defend the country in question. NATO members have only invoked Article 5 once, following the Sept. 11, 2001, attacks in the U.S., when the alliance deployed aircraft to monitor U.S. skies.

    A wavering commitment to Article 5

    Trump has questioned whether he would enforce Article 5 and help defend a NATO country if it is not spending the agreed 2% of its gross domestic product on defense.

    NBC News also reported in April 2025 that the U.S. is likely going to cut 10,000 or more of the nearly 85,000 American troops stationed in Europe. The U.S. might also relinquish its top military leadership position within NATO, according to NBC.

    Many political analysts expect the U.S. to shift its national security focus away from Europe and toward threats posed by China – specifically, the threat of China invading or attacking Taiwan.

    At the same time, the Trump administration appears eager to reset relations with Russia. This is despite the Russian military’s atrocities committed against Ukrainian military forces and civilians in the war Russia began in 2022, and Russia’s intensifying hybrid war against Europeans in the form of covert spy attacks across Europe. This hybrid warfare allegedly includes Russia conducting cyberattacks and sabotage operations across Europe. It also involves Russia allegedly trying to plant incendiary devices on planes headed to North America, among other things.

    President Joe Biden speaks during a NATO summit in Washington in July 2024.
    Roberto Schmidt/AFP via Getty Images

    A shifting role in Europe

    The available evidence indicates that the U.S. is backing away from its role in Europe. At best – from a European security perspective – the U.S. could still defend European allies with the potential threat of its nuclear arsenal. The U.S. has significantly more nuclear weapons than any Western European country, but it is not clear that this alone would deter Russia without the clear presence of large numbers of American troops in Europe, especially given that Moscow continues to perceive the U.S. as NATO’s most important and most powerful member.

    For this reason, significantly downsizing the number of U.S. troops in Europe, giving up key American military leadership positions in NATO, or backing away from the alliance in other ways appears exceptionally perilous. Such actions could increase Russian aggression across Europe, ultimately threatening not just European security but America’s as well.

    Maintaining America’s leadership position in NATO and sustaining its troop levels in Europe helps reinforce the U.S. commitment to defending its most important allies. This is the best way to protect vital U.S. economic interests in Europe today and ensure Washington will have friends to call on in the future.

    John Deni does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Europeans are concerned that the US will withdraw support from NATO. They are right to worry − Americans should, too – https://theconversation.com/europeans-are-concerned-that-the-us-will-withdraw-support-from-nato-they-are-right-to-worry-americans-should-too-253907


  • MIL-OSI Global: Why some towns lose local news − and others don’t

    Source: The Conversation – USA – By Abby Youran Qin, Ph.D. candidate at School of Journalism & Mass Communication, University of Wisconsin-Madison

    Five elements determine which towns lose their papers and which ones beat the odds. Hans Henning Wenk/Getty Images

    Why did your hometown newspaper vanish while the next town over kept theirs?

    This isn’t bad luck − it’s a systemic pattern. Since 2005, the United States has lost over one-third of its local newspapers, creating “news deserts” where corruption is more likely to spread and communities may become politically polarized.

    My research, published in Journalism & Mass Communication Quarterly, analyzes the factors behind the decline of local newspapers between 2004 and 2018. It identifies five key drivers − ranging from racial disparity to market forces − that determine which towns lose their papers and which ones beat the odds.

    1. Newspapers follow the money, not community needs

    You might expect news media to gravitate toward areas where their work is needed most − communities experiencing population growth or facing systemic challenges. But in reality, newspapers, like any business, tend to thrive where the financial resources are greatest.

    My analyses suggest that local newspapers survive where affluent subscribers and deep-pocketed advertisers cluster. That means wealthy white suburbs keep their watchdogs, while low-income and diverse communities lose theirs.

    When police brutality spikes, when welfare offices deny claims, when local officials divert funds − these are the moments when communities need their journalists the most.

    Bertram de Souza works on a story for The Vindicator newspaper in Youngstown, Ohio, on Aug. 7, 2019. The 150-year-old paper shut down later that month because of financial struggles.
    Tony Dejak, AP Photos

    Poor and racially diverse communities often face the harshest policing and interact more with street-level bureaucrats than wealthier citizens. That makes them more vulnerable to government corruption and misconduct. Yet, these same communities are the first to lose their newspapers, because there are no luxury real estate agencies buying ads, and few residents can afford the monthly subscriptions.

    Without journalistic scrutiny, scholars find that mismanagement flourishes, corruption costs balloon, and the communities most vulnerable to abuse receive the least accountability. This is how news deserts exacerbate inequality.

    2. Newspapers don’t adequately serve diverse communities

    Picture this: A newsroom sends its reporters, most of whom are white, to a Black neighborhood − but only after reports of gunshots or building fires. Residents, still in shock, don’t want to talk. So journalists call the same three community leaders they always quote, run the tragic story and disappear until the next crisis. This approach, often referred to as “parachute journalism,” results in shallow coverage that paints the community in a negative light while overlooking its complexities.

    Year after year, the pattern repeats. The only time residents see their neighborhood in the paper is when something terrible happens. No feature story about the family-owned restaurant celebrating its 20th anniversary, no reporter at the town hall when the new police chief gets grilled about stop-and-frisk − just the constant drumbeat of crime and crisis.

    Is it any wonder racially diverse communities stop trusting and paying for that paper? Not when many working-class families of color can barely afford to add a newspaper subscription to their bills.

    Diverse neighborhoods get hit twice. First, their local papers inadequately represent them. Then, when people understandably turn away, subscriptions drop, advertisers pull back and the outlets shut down, leaving whole communities without a voice.

    Only in recent years have more media outlets begun to make a concerted effort to engage with and reflect the communities they serve. However, such efforts are often led by newer media organizations with fresh ideologies, while many long-standing media outlets remain stuck in traditional reporting practices, as illustrated in Jacob Nelson’s “Imagined Audiences.” Although my analyses of local newspaper decline from 2004 to 2018 paint a frustrating picture, the emerging trend of community-oriented journalism holds promise for positive changes in diverse communities.

    3. Population growth doesn’t always save newspapers

    It’s easy to assume that more people = more readers = healthier news organizations. But my research tells a different story: Counties with larger population growth actually saw greater declines in local newspapers.

    The catch lies in who is moving in: Population growth saves papers only when it comes with wealth. Affluent newcomers bring subscriptions and advertisers’ attention. But growth driven by high birth rates, typically seen in less developed areas with more racial and ethnic minorities, doesn’t translate to revenue. In short, growth alone isn’t enough − it’s the type of growth, and the economic power behind it, that matters.

    This highlights the fragility of market-dependent journalism. The news gap experienced by fast-growing communities may persist where local journalism depends primarily on traditional advertising and subscription revenues rather than diversified revenue sources such as grants and philanthropic donations. The latter, which often focus on community needs rather than profit potential, are more likely to help sustain journalism in areas with significant population growth.

    Local news sources help residents hold their elected officials accountable.
    Jim Mone/AP Photos

    4. Neighbors’ newspapers can save yours

    You’d think that competition between newspapers would be a cutthroat affair. But in an era of decline, my analyses reveal a counterintuitive truth: Your town’s paper actually has better odds when nearby communities keep theirs.

    Rather than competing, neighboring papers often become allies, sharing breaking news, splitting investigative costs and attracting advertisers who want regional reach. While this collaboration can sometimes cause papers to lose their local identity, having some local journalism is still better than none. It ensures some level of accountability, even if the news isn’t as focused on each town’s unique needs.

    Resilient local journalism clusters together. When one paper invests in original reporting, its neighbors often benefit too. When regional businesses support multiple outlets, the entire news ecosystem becomes more sustainable.

    5. Left or right? Local papers die either way

    In this highly polarized era, it turns out that there’s no significant link between a county’s partisan makeup and its ability to keep newspapers.

    Urban hubs such as Chicago keep robust media thanks to dense populations and corporate advertisers, not because they vote for Democrats. Meanwhile, newspapers in conservative rural areas can survive by cultivating loyal readerships within their communities.

    In contrast, communities with lower income and a diverse population lose outlets regardless of whether they are red, blue or purple.

    Partisan battles might dominate national headlines, but local journalism’s survival hinges on practical factors such as money and market size. Saving local news isn’t a left vs. right debate − it’s a community issue that requires nonpartisan solutions.

    Abby Youran Qin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why some towns lose local news − and others don’t – https://theconversation.com/why-some-towns-lose-local-news-and-others-dont-252155

    MIL OSI – Global Reports

  • MIL-OSI Global: Mountain chickadee chatter: Scientists are decoding the songbird’s complex calls

    Source: The Conversation – USA – By Sofia Marie Haley, Ph.D. Student in Cognitive Ecology, University of Nevada, Reno

    Mountain chickadees are unusual in having more complex calls than songs. Vladimir Pravosudov

    I approach a flock of mountain chickadees feasting on pine nuts. A cacophony of sounds, coming from the many different bird species that rely on the Sierra Nevada’s diverse pine cone crop, fills the crisp mountain air.

    The strong “chick-a-dee” call sticks out among the bird vocalizations. The chickadees are communicating to each other about food sources – and my approach.

    Mountain chickadees are members of the family Paridae, which is known for its complex vocal communication systems and cognitive abilities. Along with my advisers, behavioral ecologists Vladimir Pravosudov and Carrie Branch, I’m studying mountain chickadees at our study site in Sagehen Experimental Forest, outside of Truckee, California, for my doctoral research. I am focusing on how these birds convey a variety of information with their calls.

    The chilly autumn air on top of the mountain reminds me that it will soon be winter. It is time for the mountain chickadees to leave the socially monogamous partnerships they had while raising their chicks to form larger flocks. Forming social groups is not always simple; young chickadees are joining new flocks, and social dynamics need to be established before the winter storms arrive.

    I can hear them working this out vocally. There’s an unusual variety of complex calls, with melodic “gargle calls” at the forefront, coming from individuals announcing their dominance over other flock members.

    Examining and decoding bird calls is becoming an increasingly popular field of study, as scientists like me are discovering that many birds – including mountain chickadees – follow systematic rules to share important information, stringing together syllables like words in a sentence.

    Sofia Haley describes how she records chickadee vocalizations in the forest.

    Songs vs. calls

    For social animals, communication is a crucial part of everyday life. Communication can come in the form of visual, chemical, tactile, electrical or vocal signals.

    Birds are highly vocal, often relying on vocal communication to effectively interact with their environments and flock members. Temperate songbirds, including cardinals, bluebirds, wrens and blackbirds, have two main categories of vocalizations: songs and calls.

    Songs are vocalizations that are used primarily in the spring, during breeding season. Males in temperate regions sing to attract females and defend territories.

    Calls are basically any vocalization that is not a song. This category includes a limitless variety of vocalizations that communicate all sorts of essential information.

    Most songbird species have complex songs and fairly simple calls. This is why vocalizations sound most melodic during the spring, when birds are attracting mates and breeding.

    Members of the Pravosudov lab catch and release resident chickadees to attach identifying bands that allow the researchers to track individual birds.
    Sofia Haley

    However, chickadees are unusual in that they sing very simple songs relative to the complexity of their calls. Research suggests this is largely due to their social structure and complex environments. Living in flocks for the majority of the year means they need an elaborate communication system year-round. This is in contrast to many other songbird species that are more solitary during the nonbreeding season.

    Scientists know quite a lot about birdsong: It is highly organized and composed of multiple units that are strung together into “phrases,” like how musical notes are strung together in a song.

    Some species manipulate their song to sound more impressive, by incorporating new elements or performing impressive acoustic feats through note modification – imagine a trill or an impressive high note.

    Some songbirds must learn their songs from their parents and other adult males during a sensitive period in the first several months of their lives, much as human children learn to speak from adults during an early sensitive period of their own.

    In contrast, we know relatively little about the structure and organization of complex calls. Scientists have often regarded calls as unexciting and simple compared with birdsong. However, calls are arguably the most important type of vocalization, at least for highly social bird species.

    Translating mountain chickadee calls

    A focal microphone allows researchers to record the call of one bird at a time.
    Sofia Haley

    I spend my days out at our field site in the beautiful Sierra Nevada, following and recording chickadees as they communicate with each other. I have taken numerous focal recordings, where I stand in the forest with a directional microphone, identifying vocalizations and behaviors in real time.

    I also have hundreds of hours of recordings taken by automated recording devices called AudioMoths. These allow me to record vocalizations in the absence of people.

    The extensive vocal repertoire of mountain chickadees has yet to be fully documented. There are five basic categories of call types:

    • Contact calls: communicate identity, sort of like a name, and location.
    • “Chick-a-dee” calls: coordinate flock movement and communicate a variety of complex information about the environment, from food availability to predator presence and type.
    • Alarm calls: alert others of the presence of a predator.
    • Begging calls: used by chicks or females to elicit feeding behavior from males.
    • Gargle calls: advertise dominance over other individuals in a flock, primarily used by males.

    “Chick-a-dee” calls contain several elements resembling the basic elements of human grammar. Essentially, the various sounds a chickadee utters mean different things, similar to words in human languages. And the way that a chickadee combines these sounds changes the meaning. Word order matters, just like grammar matters in human language. If a chickadee were to phrase its calls in the wrong note order, the call would no longer convey the same meaning, even if composed of the same elements.

    The “chick-a-dee” call of the mountain chickadee contains six elements, known as notes or syllables, that can be combined in hundreds of unique combinations to say many different things. These elements are labeled A, A/B, B, C, D and Dh.

    Although scientists don’t fully know the meaning of each note in different contexts, it is generally believed that A notes typically contain identifying information about how important the topic seems to the caller, while A/B and B notes tend to further inform the listener of the topic of conversation. C notes contain information about the subject of the call, often a food source, and D notes convey information about the excitement and urgency of the message, including level of threat of a spotted predator or size of a food source. The D notes basically function like exclamation points at the end of a sentence, while the other notes convey more specific information.

    Mountain chickadees can use their “chick-a-dee” calls to convey hundreds of different phrases that are relevant to navigating their habitats and social environments. As a hypothetical example, a mountain chickadee call might have the following syntax: A-A-A/B-B-D-D, which could roughly translate to something like, “Listen to me carefully (A-A): there is a predator (A/B) close by (B) and a medium threat level (D-D).”

    If the note order switched to D-A-B-D-A/B-A, the sentence would look more like: “Noteworthy listen close by noteworthy predator listen to me.” Although all the same elements are there, this sentence is now much more difficult to comprehend. Notes that are out of order can confuse chickadees, preventing them from grasping the correct meaning of the call.

    This “translation” is an example based on what we have learned from playback experiments, but the exact meaning will depend on the specific population and surrounding environment.
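    The ordering rule described above − the same notes in the wrong order no longer carry the same meaning − can be sketched as a toy checker. The numeric ranks and the function name below are illustrative assumptions for this sketch, not part of any published call grammar.

    ```python
    # Toy model: assume "chick-a-dee" note types must appear in a fixed
    # positional order (A, then A/B, then B, then C, then D, then Dh).
    # The exact ordering is an assumption for illustration.
    ORDER = {"A": 0, "A/B": 1, "B": 2, "C": 3, "D": 4, "Dh": 5}

    def follows_note_order(call):
        """Return True if the notes in `call` never move backward in rank."""
        ranks = [ORDER[note] for note in call]
        return all(a <= b for a, b in zip(ranks, ranks[1:]))

    # The well-formed example from the text passes the check...
    print(follows_note_order(["A", "A", "A/B", "B", "D", "D"]))   # True
    # ...while the scrambled version fails it.
    print(follows_note_order(["D", "A", "B", "D", "A/B", "A"]))   # False
    ```

    Repetition is allowed (a call can have many A notes or many D notes); only moving backward in the sequence breaks the pattern, mirroring how a scrambled call reads as gibberish to the birds.
    
    
    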

    Analyzing the ‘chick-a-dee’ calls

    Back in the lab, I parse through the endless hours of recordings using a deep-learning algorithm that I have modified to identify the specific calls of our chickadee population.

    A spectrogram visualizes a chickadee call, with frequency on the vertical axis and time on the horizontal axis.
    Sofia Haley

    I then use Raven Pro software, developed by the Cornell Lab of Ornithology, to visually inspect and analyze these calls on a spectrogram: a visual representation of sound, with frequency on the vertical axis, and time on the horizontal axis. This visualization allows me to study the structure of calls in great detail.
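    Raven Pro is commercial software, but the spectrogram it displays is a standard short-time Fourier transform. As a rough open-source analog, here is a minimal NumPy sketch applied to a synthetic two-tone “call”; the window size, hop length, and tone frequencies are illustrative choices, not the lab’s actual settings.

    ```python
    import numpy as np

    def simple_spectrogram(signal, fs, win=256, hop=128):
        """Naive short-time Fourier transform.

        Returns (freqs, spec) where spec has frequency bins on the
        vertical axis and time frames on the horizontal axis, matching
        the spectrogram layout described in the text.
        """
        frames = [signal[i:i + win] * np.hanning(win)
                  for i in range(0, len(signal) - win + 1, hop)]
        spec = np.abs(np.fft.rfft(frames, axis=1)).T  # rows = freq bins
        freqs = np.fft.rfftfreq(win, d=1 / fs)
        return freqs, spec

    # Synthetic "call": a 3 kHz tone followed by a 5 kHz tone.
    fs = 16000
    t = np.linspace(0, 0.1, int(fs * 0.1), endpoint=False)
    sig = np.concatenate([np.sin(2 * np.pi * 3000 * t),
                          np.sin(2 * np.pi * 5000 * t)])
    freqs, spec = simple_spectrogram(sig, fs)
    # Early frames peak near 3 kHz; late frames peak near 5 kHz,
    # which is exactly the kind of structure visible on a spectrogram.
    ```

    Inspecting which frequency bin dominates each time frame recovers the two-note structure, which is the basic operation behind reading note types off a spectrogram.
    
    
    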

    Studying spectrograms can get me only so far. The next step is to experimentally test different “chick-a-dee” calls out in the wild. Using audio editing software, I manipulate the syntax of calls to either follow grammatical rules or violate them. Then, I broadcast these manipulated recordings out in the forest and observe how our chickadees react to grammatically incorrect calls, which would sound like gibberish to them.

    Audio editing software allows researchers to mix up the order of a chickadee’s call in order to see how birds react to the garbled message.
    Sofia Haley

    My hope is that this combination of experimental testing of calls and careful visual analysis will provide a step toward understanding the subtle complexities of chickadee communication. I’m trying to home in on the meaning of different syllables and syntax, the grammatical rules.

    Back in the forest with my directional microphone, watching the chickadees flit about, I hear different versions of the “chick-a-dee” calls. Some feature more D notes, which would indicate a higher level of excitement. Others feature more A, B or C notes, communicating more specific, identifying information. I am also surrounded by melodic gargle calls, harsh scolding calls and barely audible soft calls.

    Next time you find yourself out in the forest, stop and listen to the chickadees as they talk to each other. Maybe you’ll be able to hear the variation in their calls and know that they are talking about different things − and that grammar matters.

    Sofia Marie Haley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Mountain chickadee chatter: Scientists are decoding the songbird’s complex calls – https://theconversation.com/mountain-chickadee-chatter-scientists-are-decoding-the-songbirds-complex-calls-247091

    MIL OSI – Global Reports

  • MIL-OSI Global: IDF firing ‘warning shots’ near diplomats sets an unacceptable precedent in international relations

    Source: The Conversation – UK – By Andrew Forde, Assistant Professor – European Human Rights Law, Dublin City University

    A still from footage of the incident when ‘warning shots’ were fired above visiting diplomats in Jenin on May 21. X (Twitter)

    The Israel Defense Forces (IDF) appears to have “crossed the Rubicon” in the West Bank town of Jenin, when it opened fire in the vicinity of a group of visiting diplomats on May 21 – in flagrant violation of international law. The group of diplomats representing 31 countries – including Ireland, UK, France, Germany, Italy, Egypt, Russia and China – were on an official mission organised by the Palestinian Authority to observe the humanitarian situation there.

    They were giving media interviews when IDF troops fired what they later referred to as “warning shots” over their heads, forcing them to run for cover. The shots came despite the visit having been flagged and coordinated in advance with both the Palestinian Authority and the IDF, which has effective control over the area.

    Jenin has long been a flash point in the Israeli-Palestinian conflict. With much of the population descendants of Palestinian refugees from the 1948 war, Israeli occupation and active Palestinian resistance are observable in the town.

    The international community’s reaction to the warning shots incident – in particular, by those states whose diplomatic officials were directly involved – was one of swift and widespread outrage. The high representative of the European Union for foreign affairs and security policy, Kaja Kallas, called for a full investigation into the incident, and for those responsible to be held accountable. “Any threats on diplomats’ lives are not acceptable,” she said.

    The Palestinian foreign ministry accused Israel of having “deliberately targeted with live fire an accredited diplomatic delegation”.

    Israel acknowledged the incident and triggered an initial investigation, but downplayed its significance. A spokesman for the IDF said it “regrets the inconvenience caused” by the incident. But its statement went on to effectively justify the action, arguing that the diplomats had “deviated from the approved route” by entering a restricted area – leading to IDF soldiers firing warning shots into the air.

    Such a response doesn’t remotely correspond to the seriousness of the situation, and Israel is perfectly aware of this.

    International law and diplomats

    Diplomats carry out functions on behalf of the country they represent. They are the eyes, ears and voice of their country, called upon to pursue legitimate diplomatic activities. The protections afforded to individual diplomats must therefore be seen in the context of broader and longer-term diplomatic relations between states.

    To carry out diplomatic functions effectively, those individuals must be allowed to perform their functions without hindrance, coercion or harassment from any country that hosts their delegations. These customary rules are thousands of years old, and have been codified in international law through the Vienna convention on diplomatic relations – to which Israel is a signatory.

    That convention provides for diplomatic inviolability, immunity from criminal, civil and administrative jurisdiction, and freedom from detention or arrest. It also affords diplomatic staff the right to freedom of movement and free communications.

    Most importantly for this case, article 29 of the convention states that the host state “shall take all appropriate steps to prevent any attack on [their] person, freedom or dignity”.

    Firing warning shots in the vicinity of diplomats, even if done in error or without ill-intent, represents a serious threat to the person and their dignity. As such, it constitutes a flagrant abdication of Israel’s duty to protect them.

    Moreover, the firing of warning shots in Jenin immediately interrupted the diplomatic work there, and as such can be seen as an attempt to intimidate the diplomats or limit the efficient and effective performance of their functions on behalf of their governments.

    Need for accountability

    Any use of force against diplomats, even indirect, is incompatible with the principles of diplomatic immunity enshrined in international law. The onus is on the host state to ensure the safety and inviolability of diplomatic personnel.

    And this duty of care is not diminished in situations of conflict. On the contrary, states have a special duty in times of conflict to protect diplomats and preserve diplomatic channels of communication.

    Israel’s actions in firing above these diplomats may or may not have been deliberate. But they had an intimidatory effect, which undermines the foundational principles of international relations. In a climate where Israel’s courts have effectively endorsed a media blackout in conflict-affected regions, the role of diplomats is indispensable.

    The entire system of diplomatic relations relies on the presumption that diplomats can carry out their functions freely and effectively. Diplomatic protections work effectively when they are reciprocal. Without trust, the system quickly unravels.

    It would be wrong to suggest this act may have tipped the balance of international opinion against Israel, when you consider the 19 months of violence in Gaza. The killing by the IDF of vast numbers of civilians (including thousands of women and children), the seeming use of starvation as a weapon of war, and the destruction of vast swaths of Gaza have rightly attracted growing international condemnation.

    On May 19, Britain, France and Canada – staunch allies of Israel – said they will “not stand by”, and would take “concrete actions” if the military offensive is not halted and humanitarian aid is not delivered to the people of Gaza.

    But threatening diplomats – even if not actively shooting at them – is an egregious breach of trust under the laws of diplomatic relations, which requires a meaningful apology and effective investigation. Those responsible for giving the orders to fire the “warning shots” need to be held accountable for that decision.

    Andrew Forde is affiliated with Dublin City University (Assistant Professor, European Human Rights Law).

    He is also, separately, affiliated with the Irish Human Rights and Equality Commission (Commissioner).

    ref. IDF firing ‘warning shots’ near diplomats sets an unacceptable precedent in international relations – https://theconversation.com/idf-firing-warning-shots-near-diplomats-sets-an-unacceptable-precedent-in-international-relations-257488

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump v Harvard: why this battle will damage the US’s reputation globally

    Source: The Conversation – UK – By Thomas Gift, Associate Professor and Director of the Centre on US Politics, UCL

    Harvard University is suing the Trump administration over its unprecedented attempt to bar international students from its campus. The latest salvo is that the administration has said it is cancelling all federal funds, totalling US$100 million (£73.8 million). Although a federal judge has temporarily blocked the order to ban foreign students, many observers are rightly expressing deep concern about the global ramifications of the battle for the reputation of the US.

    The story hits home for me. Every year for the last decade, I’ve taught a course on globalisation in the Harvard summer school. Although 27% of Harvard’s student body is international, my course – due to its topical focus – draws a disproportionate number of international students, many from emerging economies.

    As I know firsthand, these students contribute enormously to the classroom experience. Their insights, shaped by distinct national contexts, enliven discussion and further understanding for everyone — international and domestic students alike. Without them, the classroom isn’t just quieter; it’s poorer in perspective.

    Yet my concern with Trump’s latest attempt to put a political target on Harvard’s back extends beyond international students. For centuries Harvard and countless other leading US institutions of higher learning have welcomed international students to their campuses. This isn’t purely a selfless act. These students are a boon to the US at home and abroad. Here’s why.

    1. Spreading democracy

    Universities aren’t just a key economic driver for the United States. They’re also a reflection of its democratic values. Students who attend Harvard and similar universities, especially those from outside advanced, Organisation for Economic Co-operation and Development (OECD) democracies, often return to their native countries after they’ve received their diplomas, poised to make a difference in national politics.

    My own research suggests this can help to promote democracy in autocratic parts of the world. Because of how they’re socialised both inside and outside the classroom, students who attend western universities and go on to become national leaders are more likely to embrace democratic values, and highly educated leaders also tend to increase economic growth.

    Personal connections that they’ve forged in the west also bind them into international networks that are pro-democracy.

    An attack on Harvard will also damage its soft power, say some.

    Consider one example: Ellen Johnson Sirleaf, the former president of Liberia, attended the Harvard Kennedy School, then went on to serve as head of her country from 2006 to 2018. As the first female elected head of state in Africa, Sirleaf proceeded to win the Nobel peace prize in 2011 for her “non-violent efforts to promote peace and her struggle for women’s rights”.

    2. Projecting soft power

    The best universities in the US are also a crucial component of what the late Harvard political scientist Joseph Nye called “soft power”. Soft power is about how western nations such as the US influence the world through attraction and persuasion rather than bullets and tanks. It’s an approach to foreign relations that projects US culture by winning “hearts and minds”.

    Harvard isn’t just one of the leading brands in US higher education, it’s one of the US’s leading brands. More generally, American universities dominate the international league tables, such as the QS World Rankings, where ten of the top 25 schools are US-based. That makes universities key ambassadors for the US.

    There’s a reason why Harvard attracts students from more than 140 countries. Its reputation for academic excellence, combined with its world-leading research in areas ranging from curing neurodegenerative diseases to improving economic mobility, makes it a magnet for students angling to test their mettle against the best and brightest.

    3. Driving the US economy

    Many international graduates of top US universities go on to become entrepreneurs or to pursue careers in cutting-edge fields at companies such as Apple, Google and Meta, filling jobs for which there’s a shortage of talent in the US labour market.

    The upper echelons of the executive class in the US are also filled with leaders who were once international students in the US. Tesla CEO Elon Musk, who studied at the University of Pennsylvania, and Microsoft CEO Satya Nadella, who studied at the University of Chicago, are two prominent examples.

    According to a report from the National Foundation for American Policy, approximately 25% of US firms worth at least a billion dollars had a founder who enrolled at a US university as an international student.

    It’s also worth noting that international students typically pay much higher tuition fees than US students. These dollars subsidise academic and student programming for domestic students, enabling places like Harvard to maintain the high standards for which they’re renowned.

    Recent data from the Association of International Educators show that international students at US colleges and universities “contributed US$43.8 billion to the US economy during the 2023-2024 academic year and supported more than 378,000 jobs”.

    But, says the Economist’s US editor John Prideaux, this whole battle is really about power. “If you stand up to the Trump administration they will come after you.”

    Former Harvard president Larry Summers has said that the ban on international students “would be devastating…, not just for the university but for the image of the United States in the world, where our universities in general, and Harvard in particular, have been a beacon”.

    The reputation of US universities around the world is especially vital today as Trump’s “America first” foreign policy signals a descent into belligerent isolationism. As the US retreats from the world and its president attacks multilateral institutions and shows a lack of respect for allies, this latest tussle with Harvard could erode the US’s international image even further.

    Thomas Gift teaches an annual course in the Harvard Summer School, and worked full-time at the Harvard Kennedy School in 2015-16.

    ref. Trump v Harvard: why this battle will damage the US’s reputation globally – https://theconversation.com/trump-v-harvard-why-this-battle-will-damage-the-uss-reputation-globally-257512

    MIL OSI – Global Reports

  • MIL-OSI Global: Why Alberta’s push for independence pales in comparison to Scotland’s in 2014

    Source: The Conversation – Canada – By Piers Eaton, PhD Candidate in Political Science, L’Université d’Ottawa/University of Ottawa

    One day after the Liberal Party secured their fourth consecutive federal election victory, Alberta Premier Danielle Smith tabled legislation to change the signature threshold needed to put citizen-proposed constitutional questions on the ballot. She lowered it from the current 600,000 signatures to 177,000.

    Since the pro-independence Alberta Prosperity Project already claims to have 240,000 pledges in support of an Albertan sovereignty referendum, the change clears a path to a separation referendum.

    In 2014, Scottish voters went to the polls on a question similar to the one proposed by the Alberta Prosperity Project, asking whether Scotland should regain its independence from Britain. Although the Scottish “Yes” campaign was defeated, it garnered 45 per cent of the vote, far exceeding what most thought possible at the start of the campaign.

    The 2014 Scottish referendum injected a huge amount of enthusiasm into the Scottish separatist parties, with the largest, the Scottish National Party (SNP) — which led the fight for the Yes side — soaring from 20,000 members in 2013 to more than 100,000 months after the referendum.

    While the Yes campaign did not achieve its goals and the Scottish historical context is very different from Alberta’s, there are still important lessons about how people can be won over to the cause of independence. Albertan separatists don’t seem to be heading down the same path.

    Timeline

    Smith has suggested that if the necessary signatures were collected, she would aim to hold a referendum in 2026. But the Alberta Prosperity Project’s Jeffrey Rath suggested the group would push Smith to allow a referendum before the end of 2025, giving the referendum a maximum of seven months of official campaigning.

    The broad ground rules of the Scottish referendum were established in the Edinburgh Agreement in October 2012. In March 2013, the SNP-led Scottish government announced the date of the independence referendum — Sept. 18, 2014. The long campaign period allowed a wide variety of grassroots campaign groups to organize in favour of independence.

    While Alberta separatism is less likely to be buoyed by artist collectives and Green Party activists like Scottish independence was, a longer independence campaign would allow a variety of members of Albertan society to make the case for independence.

    Dennis Modry, a co-leader of the Alberta Prosperity Project, recently told CBC News that the initial signature threshold of 600,000 was not all bad, as it would “get (us) closer to the referendum plurality as well.” That remark suggested Modry sees value in having more time to campaign before a referendum is held.

    In this regard, he and Rath seem to be sounding different notes.

    Leadership

    Hints that the Alberta Prosperity Project is already divided raise broader questions of leadership. In 2014, the Scottish Yes side had a clear and undisputed leader — First Minister Alex Salmond, head of the SNP.

    The late Salmond led the SNP to back-to-back electoral victories in Scotland, including the only outright majority ever won in the history of the Scottish parliament in 2011.

    Salmond was able to speak in favour of independence in debates and to answer, with democratic legitimacy, specific questions about what the initial policy of an independent Scotland would be.

    The SNP government published a report, Scotland’s Future, that systematically sought to assuage skeptics. Its “frequently asked questions” (FAQ) section answered 650 potential questions about independence. The Alberta Prosperity Project, on the other hand, only answers 74 questions in its FAQ.

    Whereas Salmond’s rise to the leadership of the Scottish independence movement took place in full public view and according to party rules, the Alberta Prosperity Project’s leadership structure is far murkier.

    The organization claims there “is no prima facie leader of the APP, but there (is) a management team which is featured on the website https://albertaprosperityproject.com/about-us/.” Follow that link, however, and no names or management structures are listed.

    Clarity and democracy

    While independence always involves some unknowns, clear leadership can provide answers about where a newly independent nation might find stability. The Yes Scotland campaign promised independence within Europe, meaning Scotland would retain access to the European Union’s common market.

    By contrast, the Alberta Prosperity Project isn’t clear on the fundamental question of whether a sovereign Alberta should remain independent or attempt to join the United States as its 51st state.

    Despite the claim on its website that “the objective of the Alberta Prosperity Project is for Alberta to become a sovereign nation, not the 51st state of the USA,” the organization backed Rath’s recent trip to Washington, D.C. to gauge support for Albertan integration into the U.S.

    Rath has also said that becoming a U.S. territory is “probably the best way to go.”

    Rath in an interview with Rachel Parker, an Alberta-based independent journalist. (Rachel Parker’s YouTube channel)

    The 2014 referendum in Scotland was called a “festival of democracy”, and even anti-independence forces agreed the referendum had been good for democracy.

    It took time and leadership to put forward a positive case for independence, one that voters could decide upon with confidence.

    Alberta could learn from Scotland and strengthen its democracy by holding a referendum based on legitimate leadership, reasonable timelines, diverse voices and clear aims. Or it could lurch into a rushed campaign, with divided leaders of dubious legitimacy, arguing for unclear outcomes — and end up, no matter which side wins, weakening its democracy in the process.

    Piers Eaton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why Alberta’s push for independence pales in comparison to Scotland’s in 2014 – https://theconversation.com/why-albertas-push-for-independence-pales-in-comparison-to-scotlands-in-2014-256838

    MIL OSI – Global Reports

  • MIL-OSI Global: For opioid addiction, treatment underdosing can lead to fentanyl overdosing – a physician explains

    Source: The Conversation – USA – By Lucinda Grande, Clinical Associate Professor of Family Medicine, University of Washington

    Buprenorphine is most effective when doctors and patients find the right dose together. AP Photo/Ted S. Warren

    Imagine a patient named Rosa tells you she wakes up night after night in a drenching sweat after having very realistic dreams of smoking fentanyl.

    The dreams seem crazy to her. Three months ago, newly pregnant, Rosa began visualizing being a good parent. She realized it was finally time to give up her self-destructive use of street fentanyl. With tremendous effort, she started treatment with buprenorphine for her opioid use disorder.

    As hoped, she was intensely relieved to be free from the distressing withdrawal symptoms – restless legs, anxiety, bone pain, nausea and chills – and from the guilt, shame and hardship of living with addiction. But even so, Rosa found herself musing throughout the day about the rewarding rush of fentanyl, which seemed ever more appealing. And she couldn’t escape those dreams at night.

    Rosa asks you, her doctor, for a higher dose of buprenorphine. You consider her request carefully. Your clinic follows the Food and Drug Administration prescribing guideline that has changed very little in over 20 years. It recommends her current prescription – 16 milligrams – as the “target” dose. You are aware of the prevailing view among medical providers that most patients don’t need a dose higher than that. Many believe that patients or others would use the extra pills to get high.

    But after many visits, you feel that you know Rosa well. You believe in her sincerity. She is a responsible 25-year-old with a full-time job who never misses appointments. She now has stable housing with her parents after years of couch surfing. You reluctantly agree and raise her daily dose by one additional 8-milligram pill, totaling 24 milligrams.

    At her next visit, Rosa tells you that the higher dose solved her daytime fentanyl craving, but the nightmares have continued. She would like to try an even higher dose.

    How should you respond? The FDA guideline clearly states there is no evidence to support any benefit above her new dose. You begin to doubt Rosa’s sincerity and your own judgment.

    Harms of low doses

    This hypothetical scenario has played out countless times in the U.S. since 2002, when buprenorphine was first approved as a treatment for opioid use disorder. As a family physician specializing in addiction medicine, I have frequently encountered patients who still experience withdrawal symptoms at the “target dose” and even at the suggested maximum dose of 24 milligrams.

    People like Rosa, plagued by uncontrolled fentanyl craving – either awake or in dreams – are at high risk of leaving treatment and returning to addiction. Yet from 2019 to 2020, only 2% of buprenorphine prescriptions were written for over 24 milligrams.

    Withdrawal symptoms and cravings make staying in recovery difficult.
    iStock/Getty Images Plus

    I was able to help some of those people in my work as co-founder and medical director of a low-barrier clinic, which is a clinic that makes it easier for people to get started with buprenorphine. I asked our clinicians to offer a higher dose when they believed the current one wasn’t meeting the patient’s needs.

    The dose choice may be a life-or-death decision. Increasing it by one more pill – to 32 milligrams – often makes the difference between a patient staying in or leaving treatment. The risk of leaving treatment is particularly significant for the patients we typically see at low-barrier clinics, many of whom face severe life challenges. While patients do sometimes give away or sell extra pills, research consistently shows that illegally obtained pills are most commonly used for self-treatment – to control withdrawal and help quit opioids when treatment is unavailable.

    Medicaid in my state of Washington began paying for prescriptions up to 32 milligrams in 2019. But clinicians may still encounter constraints from other health insurers and at pharmacies. Some states, such as Tennessee, Kentucky and Ohio, have dose restrictions cemented in law.

    Finding the right dose

    The challenge of finding the right treatment dose became more acute for clinicians and patients as fentanyl swept across the country starting in 2013. Fentanyl now dominates the unregulated opioid supply. Fifty times stronger than heroin, fentanyl overwhelms the ability of low doses of buprenorphine to counter its effects.

    Buprenorphine – also known by the brand name Suboxone, which contains a mix of buprenorphine and naloxone – is an opioid medication with the quirk of both activating the brain’s opioid receptors and partially blocking them. It provides just enough opioid effects to prevent withdrawal symptoms and craving while also blocking the reward of euphoria. It relieves pain like other opioids but doesn’t cause breathing to stop. It can dramatically reduce the risk of overdose death by as much as 70%.

    In medicine, there is a general concern that too high a dose may have toxic effects. However, as many clinicians and researchers have observed, using too low a dose of some treatments can also lead to harm, including death from patients going back to fentanyl.

    After observing so many patients responding well to higher doses, my colleagues and I looked in the medical literature for more information. We discovered over a dozen reports as far back as 1999 providing evidence that buprenorphine’s benefits steadily increase up to at least 32 milligrams.

    At higher doses, patients stay on treatment longer, use illicit opioids less often, have fewer complications such as hepatitis C, have fewer emergency room visits and hospitalizations, and suffer less from chronic pain. Brain scans show that buprenorphine at 32 milligrams occupies more opioid receptors – over 90% of receptors in some brain regions – compared with lower doses. One study even showed that a high enough dose of buprenorphine can directly prevent fentanyl overdose.

    As illicit opioids become more potent, addiction becomes more deadly – and more urgent to treat.

    Patients with some health conditions may especially benefit from higher doses. During pregnancy, as in Rosa’s case, withdrawal symptoms can grow more intense because of metabolism changes that reduce the blood concentration of most medications. A higher dose may be needed to maintain the level of effects they had before pregnancy. Additionally, I found that the patients in my clinic with chronic pain, post-traumatic stress disorder or longtime opioid use were most likely to find relief at a dose above 24 milligrams.

    The American Society of Addiction Medicine recommends
    four goals of treatment: suppressing opioid withdrawal, blocking the effects of illicit opioids, stopping opioid cravings and reducing the use of illicit opioids, and promoting recovery-oriented activities.

    Similarly, patients seek a comfortable and effective dose – that is, one that avoids withdrawal symptoms and craving, and allows them to avoid illicit drug use and the associated worry and stress. Many patients also yearn to feel trusted, accepted and understood by their clinician. Achieving that goal requires shared decision-making.

    A clinician can never be sure a patient is meeting all the goals of treatment. But a patient who reports positive life changes – such as stable housing and improved relationships – and reports low or no craving while awake or dreaming will likely be satisfied with the current dose. For a patient who does not make progress with a dose increase to 32 milligrams, the clinician might consider a different treatment plan, such as a 30-day buprenorphine injection, which can provide an even higher dose, or transition to methadone, the other highly effective FDA-approved medication for opioid use disorder.

    The FDA guideline change

    In August 2022, a team of addiction physicians attempted to move the FDA to change dosing guidelines for buprenorphine. They submitted a petition asking for a modernized guideline that based dosing on how a patient responds to buprenorphine – including symptom relief and reduced illicit drug use – rather than a fixed “target” dose. They asked to remove language that incorrectly denied evidence that patients benefited from doses above 24 milligrams.

    The FDA listened. In December 2023, it convened a public meeting with leading addiction clinicians, researchers and policymakers to review the evidence on buprenorphine dosing. The group came to an overwhelming consensus that there was extensive research showing benefit at doses above 24 milligrams. Moreover, they doubted whether the guideline’s dosing conclusions, made before fentanyl infiltrated the drug supply, applied today.

    Treatment is most effective when patients feel their needs are understood.
    Spencer Platt/Getty Images

    Then, the FDA responded. In December 2024, it announced a new buprenorphine recommendation that would not mention a target dose and would not deny the existence of evidence of benefits above 24 milligrams. Only time will tell whether and when the FDA’s new guideline will meaningfully alter prescribing patterns, insurance and pharmacy restrictions, and state laws.

    To maintain the national trend toward lower overdose deaths, the best possible use of each effective treatment is critical. Yet the Trump administration’s proposed cuts to Medicaid – which covers nearly half of all buprenorphine prescriptions – put access seriously at risk. Most people with untreated addiction would be blocked from accessing treatment altogether, let alone at an effective dose or with the behavioral health, social work and recovery support services needed for the best outcomes. Research shows that a sharp reduction in buprenorphine prescriptions occurred following 2023 Medicaid coverage restrictions.

    Opioid use disorder is treatable. Buprenorphine works well and saves lives when given at the right dose. An inadequate dose can directly harm patients who are simply trying to survive and improve their lives.

    Lucinda Grande is a physician and partner at Pioneer Family Practice in Lacey, Washington.

    ref. For opioid addiction, treatment underdosing can lead to fentanyl overdosing – a physician explains – https://theconversation.com/for-opioid-addiction-treatment-underdosing-can-lead-to-fentanyl-overdosing-a-physician-explains-250588

    MIL OSI – Global Reports

  • MIL-Evening Report: Could a bold anti-poverty experiment from the 1960s inspire a new era in housing justice?

    Source: The Conversation (Au and NZ) – By Deyanira Nevárez Martínez, Assistant Professor of Urban and Regional Planning, Michigan State University

    Model Cities staff in front of a Baltimore field office in 1971. Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    In cities across the U.S., the housing crisis has reached a breaking point. Rents are skyrocketing, homelessness is rising and working-class neighborhoods are threatened by displacement.

    These challenges might feel unprecedented. But they echo a moment more than half a century ago.

    In the 1950s and 1960s, housing and urban inequality were at the center of national politics. American cities were grappling with rapid urban decline, segregated and substandard housing, and the fallout of highway construction and urban renewal projects that displaced hundreds of thousands of disproportionately low-income and Black residents.

    The federal government decided to try to do something about it.

    President Lyndon B. Johnson launched one of the most ambitious experiments in urban policy: the Model Cities Program.

    As a scholar of housing justice and urban planning, I’ve studied how this short-lived initiative aimed to move beyond patchwork fixes to poverty and instead tackle its structural causes by empowering communities to shape their own futures.

    Building a great society

    The Model Cities Program emerged in 1966 as part of Johnson’s Great Society agenda, a sweeping effort to eliminate poverty, reduce racial injustice and expand social welfare programs in the United States.

    Earlier urban renewal programs had been roundly criticized for displacing communities of color. Much of this displacement occurred through federally funded highway and slum clearance projects that demolished entire neighborhoods and often left residents without decent options for new housing.

    So the Johnson administration sought a more holistic approach. The Demonstration Cities and Metropolitan Development Act established a federal framework for cities to coordinate housing, education, employment, health care and social services at the neighborhood level.

    New York City neighborhoods designated for revitalization with funding from the Model Cities Program.
    The City of New York, Community Development Program: A Progress Report, December 1968.

    To qualify for the program, cities had to apply for planning grants by submitting a detailed proposal that included an analysis of neighborhood conditions, long-term goals and strategies for addressing problems.

    Federal funds went directly to city governments, which then distributed them to local agencies and community organizations through contracts. These funds were relatively flexible but had to be tied to locally tailored plans. For example, Kansas City, Missouri, used Model Cities funding to support a loan program that expanded access to capital for local small businesses, helping them secure financing that might otherwise have been out of reach.

    Unlike previous programs, Model Cities emphasized what Johnson described as “comprehensive” and “concentrated” efforts. It wasn’t just about rebuilding streets or erecting public housing. It was about creating new ways for government to work in partnership with the people most affected by poverty and racism.

    A revolutionary approach to poverty

    What made Model Cities unique wasn’t just its scale but its philosophy. At the heart of the program was an insistence on “widespread citizen participation,” which required cities that received funding to include residents in the planning and oversight of local programs.

    The program also drew inspiration from civil rights leaders. One of its early architects, Whitney M. Young Jr., had called for a “Domestic Marshall Plan” – a reference to the federal government’s efforts to rebuild Europe after World War II – to redress centuries of racial inequality.

    Civil rights activist Whitney M. Young Jr. helped shape the vision of the Model Cities Program.
    Bettmann/Getty Images

    Young’s vision helped shape the Model Cities framework, which proposed targeted systemic investments in housing, health, education, employment and civic leadership in minority communities. In Atlanta, for example, the Model Cities Program helped fund neighborhood health clinics and job training programs. But the program also funded leadership councils that for the first time gave local low-income residents a direct voice in how city funds were spent.

    In other words, neighborhood residents weren’t just beneficiaries. They were planners, advisers and, in some cases, staffers.

    This commitment to community participation gave rise to a new kind of public servant – what sociologists Martin and Carolyn Needleman famously called “guerrillas in the bureaucracy.”

    A Model Cities staffer discusses the program to a group of students gathered at Denver’s Metropolitan Youth Education Center in 1970.
    Bill Wunsch/The Denver Post via Getty Images

    These were radical planners – often young, idealistic and deeply embedded in the neighborhoods they served. Many were recruited and hired through new Model Cities funding that allowed local governments to expand their staff with community workers aligned with the program’s goals.

    Working from within city agencies, these new planners used their positions to challenge top-down decision-making and push for community-driven planning.

    Their work was revolutionary not because they dismantled institutions but because they reimagined how institutions could function, prioritizing the voices of residents long excluded from power.

    Strengthening community ties

    In cities across the country, planners fought to redirect public resources toward locally defined priorities.

    A mobile dentist office in Baltimore.
    Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    In some cities, such as Tucson, the program funded education initiatives such as bilingual cultural programming and college scholarships for local students. In Baltimore, it funded mobile health services and youth sports programs.

    In New York City, the program supported new kinds of housing projects called vest-pocket developments, which got their name from their smaller scale: midsize buildings or complexes built on vacant lots or underutilized land. New housing such as the Betances Houses in the South Bronx were designed to add density without major redevelopment taking place – a direct response to midcentury urban renewal projects, which had destroyed and displaced entire neighborhoods populated by the city’s poorest residents. Meanwhile, cities such as Seattle used the funds to renovate older apartment buildings instead of tearing them down, which helped preserve the character of local neighborhoods.

    The goal was to create affordable housing while keeping communities intact.

    An Atlanta neighborhood identified as a candidate for street paving and home rehabilitation as part of the Model Cities Program.
    Georgia State University Special Collections

    What went wrong?

    Despite its ambitious vision, Model Cities faced resistance almost from the start. The program was underfunded and politically fragile. While some officials had hoped for US$2 billion in annual funding, the actual allocation was closer to $500 million to $600 million, spread across more than 60 cities.

    Then the political winds shifted. Though designed during the optimism of the mid-1960s, the program started being implemented under President Richard Nixon in 1969. His administration pivoted away from “people programs” and toward capital investment and physical development. Requirements for resident participation were weakened, and local officials often maintained control over the process, effectively marginalizing the everyday citizens the program was meant to empower.

    In cities such as San Francisco and Chicago, residents clashed with bureaucrats over control, transparency and decision-making. In some places, participation was reduced to token advisory roles. In others, internal conflict and political pressure made sustained community governance nearly impossible.

    Critics, including Black community workers and civil rights activists, warned that the program risked becoming a new form of “neocolonialism,” one that used the language of empowerment while concentrating control in the hands of white elected officials and federal administrators.

    A legacy worth revisiting

    Although the program was phased out by 1974, its legacy lived on.

    In cities across the country, Model Cities trained a generation of Black and brown civic leaders in what community development leaders and policy advocates John A. Sasso and Priscilla Foley called “a little noticed revolution.” In their book of the same name, they describe how those involved in the program went on to serve in local government, start nonprofits and advocate for community development.

    It also left an imprint on later policies. Efforts such as participatory budgeting, community land trusts and neighborhood planning initiatives owe a debt to Model Cities’ insistence that residents should help shape the future of their communities. And even as some criticized the program for failing to meet its lofty goals, others saw its value in creating space for democratic experimentation.

    A housing meeting takes place at a local Model Cities field office in Baltimore in 1972.
    Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    Today’s housing crisis demands structural solutions to structural problems. The affordable housing crisis is deeply connected to other intersecting crises, such as climate change, environmental injustice and health disparities, creating compounding risks for the most vulnerable communities. Addressing these issues through a fragmented social safety net – whether through housing vouchers or narrowly targeted benefit programs – has proven ineffective.

    Today, as policymakers once again debate how to respond to deepening inequality and a lack of affordable housing, the lost promise of Model Cities offers vital lessons.

    Model Cities was far from perfect. But it offered a vision of how democratic, local planning could promote health, security and community.

    Deyanira Nevárez Martínez is a trustee of the Lansing School District Board of Education and is currently a candidate for the Lansing City Council Ward 2.

    ref. Could a bold anti-poverty experiment from the 1960s inspire a new era in housing justice? – https://theconversation.com/could-a-bold-anti-poverty-experiment-from-the-1960s-inspire-a-new-era-in-housing-justice-253706

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: 10 years ago Kenya set out to fix gender gaps in education – what’s working and what still needs to be done

    Source: The Conversation – Africa – By Benta A. Abuya, Research Scientist, African Population and Health Research Center

    The Kenyan government launched a big attempt in 2015 to promote gender equality in and through the education sector. This was guided by principles of equal participation and inclusion of women and men, and girls and boys in national development.

    The Education and Training Sector Gender Policy aligned with national, regional and global commitments. These included the constitution and Sustainable Development Goals 4 on quality education and 5 on gender equality.

    Years later, however, it became clear that the government wasn’t achieving some of the policy’s objectives. Gaps remained in reducing gender inequalities in access, participation and achievement at all levels of education.

    The government decided to review the causes of these challenges and what could be done differently.

    This led to a two-year joint study in partnership with the African Population and Health Research Center. The study began in 2022. Its overall objective was to provide evidence for action on mainstreaming gender issues in basic education in Kenya. Gender mainstreaming generally refers to being sensitive to gender when developing policies and curricula, governing schools, teaching and using learning materials.

    The study specifically aimed to:

    1. examine how the teacher-training curriculum prepares teachers to implement gender mainstreaming strategies within the basic education sector

    2. examine how gender mainstreaming is practised in classrooms during teaching and learning

    3. assess the relationship between teaching practices and students’ attendance, choice of subjects and academic performance

    4. evaluate the availability of institutional policies, practices and guidelines to mainstream gender issues and the extent to which they influence gender mainstreaming in education.

    I’m a gender and education researcher and was part of the team from the African Population and Health Research Center that collected data for the policy review. This data came from 10 counties with high child poverty rates and urban informal settlements. These indicators highlight an inability to access one or more basic needs or services.

    The study involved teacher trainers and trainees. We also spoke to education officials, and learners in primary and secondary schools. We carried out classroom observations, knowledge and attitude surveys, questionnaires, key informant interviews and focus group discussions.

    The data showed gaps in teacher training, as well as institutional and teaching practices at the basic education level. Policy wasn’t being carried through in practice.

    The gaps

    Our study found that Kenya needs to review its teacher education curriculum to make it more gender responsive.

    Teachers also need more training to follow practices that are gender responsive. These practices include extending positive reinforcement to girls and boys, maintaining eye contact and allowing learners to speak without interruption.

    Deliberate steps should be taken to ensure that schools and teacher training colleges are gender inclusive in their practices, guidelines and programmes.

    More specifically, our study found:

    • Teacher trainees had a relatively good understanding of gender-equitable teaching and learning practices. But there was a need to place greater importance on this in lesson planning and in supporting girls in science, technology, engineering and mathematics (STEM).

    • Gender mainstreaming is not built into the teacher training curriculum. It isn’t taught as a standalone unit. Teacher trainees learnt about it mainly from general courses, such as child development and psychology, or private training. And teacher trainees were unaware that they were being tested on this.

    • There were no significant gender differences in how teachers in pre-primary and primary school taught boys and girls. At the secondary level, however, teachers engaged boys more than girls during literacy and STEM lessons.

    • At both primary and secondary levels, gender-equitable practices positively influenced learning outcomes in English and STEM subjects. These practices improved academic performance in English at the primary level. They led to improvements in biology, English, mathematics and physics at the secondary level.

    • The odds of school attendance increased if teachers treated boys and girls in equitable ways.

    • The odds of boys selecting chemistry and physics at the secondary level increased if the teacher of the subject was approachable and if the subject was considered applicable to future careers.

    • More than 40% of primary and secondary schools didn’t have guidelines on sexual harassment and gender-based violence for teachers and students. And most of the schools that said they had these guidelines couldn’t provide them to the research team. These guidelines help mainstream gender issues in schools and communities.

    What next

    To advance gender equality, Kenya must move beyond policy awareness. It must be more responsive to gender in teacher training, classroom practices and institutional leadership.

    Our study recommends:

    • creating a positive and inclusive learning environment where both boys and girls feel valued, capable, and motivated to learn

    • teaching gender mainstreaming as a standalone unit, or integrating it into the teaching methodology

    • coaching, mentorship and modelling of best practices to trainee teachers

    • financial support for gender mainstreaming in all areas of teacher education

    • encouraging girls to pursue STEM subjects and careers at an early age through formal mentorship programmes

    • encouraging and empowering women teachers and parents to take up leadership positions in schools to provide role models for students.






    Our findings offer a critical evidence base for the education ministry and other stakeholders. They should put accountability mechanisms in place.

    Only through sustained, data-driven action can Kenya achieve a truly inclusive and equitable education system.

    Benta A. Abuya does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. 10 years ago Kenya set out to fix gender gaps in education – what’s working and what still needs to be done – https://theconversation.com/10-years-ago-kenya-set-out-to-fix-gender-gaps-in-education-whats-working-and-what-still-needs-to-be-done-255400

    MIL OSI – Global Reports

  • MIL-OSI Global: How vitamin B12 deficiency may disrupt pregnant women’s bodies

    Source: The Conversation – UK – By Adaikala Antonysunil, Senior Lecturer in Biochemistry, School of Science and Technology, Nottingham Trent University

    Just Life/Shutterstock

    Despite living in an age of dietary abundance, vitamin B12 deficiency is on the rise.

    One major culprit? Our growing reliance on ultra-processed foods (UPFs) – those convenient, calorie-dense and nutrient-poor products that dominate supermarket shelves. While they might fill us up, they’re fuelling a global epidemic of “hidden hunger”.

    This refers to a lack of essential micronutrients including B12, folate, iron and zinc, even when people consume enough (or too many) calories. It’s often invisible but can have long-term consequences, particularly for vulnerable groups like pregnant women, children and the elderly.

    B12 deficiency in pregnancy, especially in the context of a diet high in ultra-processed foods, can disturb how fat is processed and increase systemic inflammation. This raises the risk of long-term health problems for both mother and baby.




    A recent study shed light on how B12 deficiency during pregnancy may disrupt two critical systems in the body: fat metabolism and inflammation – both of which are closely linked to chronic diseases like heart disease and type 2 diabetes.

    Researchers studied fat tissue from 115 pregnant women with low B12 levels, focusing on two types of abdominal fat: subcutaneous (under the skin) and omental or visceral (around the organs). They also examined lab-grown fat cells exposed to different B12 levels and collected samples from women of different body weights.

    The results were striking. Women with low B12 had higher body weight and lower levels of HDL (the “good” form of cholesterol). Their fat cells showed increased fat storage, reduced fat breakdown, and impaired mitochondrial function – the energy engines inside our cells.

    Most concerning, these women’s fat tissue released higher levels of inflammatory molecules, suggesting that B12 deficiency might place the body into a constant state of low-grade stress.

    Ancient molecule

    What sets B12 apart from other vitamins is that it’s made exclusively by bacteria and archaea (tiny single-celled organisms similar to bacteria but with important genetic and biochemical differences). Neither plants, animals nor humans can produce B12.

    Some scientists even speculate that B12 may have formed prebiotically, before life itself began. It shares part of its structure, known as a tetrapyrrole ring, with several of life’s other most vital compounds, including chlorophyll (for photosynthesis) and heme (for carrying oxygen in our blood).

    Although heme has typically been seen as the elder of all these molecules, recent evidence suggests B12 might have come first. Its core structure – a tetrapyrrole known as the corrin ring – has been found in bacteria that don’t produce heme at all, hinting at even deeper evolutionary roots.

    Because humans can’t make B12, we depend on our diet to get it. Ruminant animals like cows and sheep are able to host B12-producing bacteria in their stomachs and absorb the nutrient directly. We, however, must obtain it from animal-based foods – or from supplements and fortified products.

    Since plants neither produce nor store B12, vegetarians and vegans are at higher risk of this deficiency unless they supplement regularly. As diets become more processed and less diverse, B12 intake and absorption drops, leading to problems in brain function, metabolism and fetal development. Often, the deficiency isn’t spotted until symptoms become serious or irreversible.

    The takeaway is that we need to pay more attention to micronutrients, not just calories. Ensuring adequate B12 levels, particularly before and during pregnancy, is crucial. That means prioritising whole foods, fruits, vegetables and quality sources of protein, while limiting ultra-processed products.

    From the primordial soup to the modern dinner plate, vitamin B12 is more than a nutrient – it’s a molecular link between our evolutionary past and our future health. Recognising its importance might just be one of the most powerful steps we can take toward a healthier, more informed life.

    Adaikala Antonysunil receives funding from DRWF, BBSRC, Rosetrees Trust and Society of Endocrinology.

    ref. How vitamin B12 deficiency may disrupt pregnant women’s bodies – https://theconversation.com/how-vitamin-b12-deficiency-may-disrupt-pregnant-womens-bodies-256244


  • MIL-OSI Global: What the hidden rhythms of orangutan calls can tell us about language – new research

    Source: The Conversation – UK – By Chiara De Gregorio, Post Doctoral Research Fellow, University of Warwick

    Don Mammoser/Shutterstock

    In the dense forests of Indonesia, you can hear strange and haunting sounds. At first, these calls may seem like a random collection of noises – but my rhythmic analyses reveal a different story.

    Those noises are the calls of Sumatran orangutans (Pongo abelii), used to warn others about the presence of predators. Orangutans belong to our animal family – we’re both great apes. That means we share a common ancestor – a species that lived millions of years ago, from which we both evolved.

    Like us, orangutans have hands that can grasp, they use tools and can learn new things. We share about 97% of our DNA with orangutans, which means many parts of our bodies and brains work in similar ways.

    That’s why studying orangutans can also help us understand more about how humans evolved, especially when it comes to things like communication, intelligence and the roots of language and rhythm.

    Research on orangutan communication conducted by evolutionary psychologist Adriano Lameira and colleagues in 2024 focused on a different species of orangutan, the wild Bornean orangutan (Pongo pygmaeus wurmbii). They looked at a type of vocalisation made only by males, known as the long call, and found that long calls are organised into two levels of rhythmic hierarchy.

    This was a groundbreaking discovery, showing that orangutan rhythms are structured in a recursive way. Human language is deeply recursive.

    Recursion is when something is built from smaller parts that follow the same pattern. For example, in language, a sentence can contain another sentence inside it. In music, a rhythm can be made of smaller rhythms nested within each other. It’s a way of organising information in layers, where the same structure repeats at different levels.
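    The layered grouping described above can be illustrated with a toy code sketch. This is purely illustrative and not part of the study's acoustic analysis: one grouping rule, applied repeatedly, produces calls nested in bouts, nested in sequences.

```python
# Toy sketch of recursion: the same grouping rule applied at every
# level. A unit repeats to form a group; groups repeat to form larger
# groups -- the same pattern nested in layers.

def nest(unit, repeats, depth):
    """Group `repeats` copies of the previous level's structure,
    applying the rule `depth` times."""
    if depth == 0:
        return unit
    return [nest(unit, repeats, depth - 1) for _ in range(repeats)]

bout = nest("call", 2, 1)       # one layer: ['call', 'call']
sequence = nest("call", 2, 2)   # two layers: bouts made of calls
print(sequence)                 # [['call', 'call'], ['call', 'call']]
```

    Each extra level of `depth` adds another rhythmic layer; in this sketch, a three-level pattern like the one found in the Sumatran females' alarm calls would correspond to `depth=3`.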

    So, when the two-level rhythmic pattern was discovered in the long calls of male Bornean orangutans, my team wanted to know whether this kind of rhythm was unique to those particular calls, or revealed a deeper part of how orangutans communicate. To find out, we studied the alarm calls of wild female Sumatran orangutans and found something surprising.

    Instead of two levels, as had been seen in the Bornean males, this time we found three. This is an even more sophisticated pattern than we expected.

    The shared roots of language

    Returning to those alarm calls echoing through the Indonesian forest, we can now hear them with new ears. With the help of statistical tools, what sounded like random noise now takes on a clear structure – a rhythmic pattern of calls grouped into regular bouts and repeated in sequences.

    Each layer follows a steady rhythm, like the ticking of a metronome.

    Until recently, many scientists believed only humans could build layered vocal structures. This belief helped reinforce the idea of a divide between us and other animals.

    But our discovery adds to a growing body of research showing this divide may not be so clear-cut. Studies on great apes and other animals such as lemurs, whales and dolphins have revealed they are capable of rhythmic structuring, vocal learning, combining signals and sounds to make new ones, and even using vowels and consonants. These findings suggest the roots of language may lie in shared evolutionary mechanisms.

    Human language is unique in many ways. But it probably did not appear suddenly. Even the most striking traits in life evolve by reshaping what already exists, through the slow work of variation and natural selection. Our work suggests the brain systems needed to build recursive patterns were present in our ancestors millions of years ago.

    The evolution of language

    We wanted to take our investigation a step further and ask why recursive patterns evolved. So, we designed an experiment in which wild orangutans were exposed to different predator models, some posing a more realistic threat than others.

    This involved a person walking on all fours under different-coloured blankets. One had tiger stripes (tigers are orangutan predators). The other blankets were blue, white or multi-coloured.

    We found that more structured, regular and faster orangutan alarm sequences were made in response to tiger stripes. When the predator seemed less convincing, the vocalisations lost that regularity and slowed down. So, rhythm may help listeners gauge the seriousness of a situation.

    These patterns in orangutan calls give us some important hints about how language might have started. But it’s possible that other animals have similar ways of communicating that we haven’t discovered yet. To really understand how things like evolution, social life and the environment shape these interesting communication skills, we need to keep studying many different animals.

    Perhaps the most surprising lesson is this: complexity doesn’t always need words. The rhythms, patterns and structures we have uncovered in orangutan alarms remind us that meaningful communication can emerge in many forms – and that the roots of our language may lie not just in what is said, but how it is expressed.

    Chiara De Gregorio does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What the hidden rhythms of orangutan calls can tell us about language – new research – https://theconversation.com/what-the-hidden-rhythms-of-orangutan-calls-can-tell-us-about-language-new-research-257400


  • MIL-Evening Report: What’s the difference between abs and core? One term focuses on aesthetics – and the other on function

    Source: The Conversation (Au and NZ) – By Hunter Bennett, Lecturer in Exercise Science, University of South Australia

    Maksim Goncharenok/Pexels

    You’ve probably heard the terms “abs” and “core” used in social media videos, Pilates classes, or even by physiotherapists.

    Given they seem to refer to the same general area of your body, you might have wondered what the difference is.

    When people talk about “abs”, they’re often referring to the abdominal muscles you can see. Conversely, the term “core” is used to describe a broader group of muscles in the context of function, rather than aesthetics.

    While abs and core are often spoken about separately, there’s a lot of overlap between them.

    What are abs?

    The term “abs” is short for abdominal muscles. These are the muscles that run along the front and side of your stomach.

    When someone talks about getting a six-pack, they’re usually referring to toning the rectus abdominis, the long muscle that goes from the bottom of your ribs to the top of your pelvis.

    Your abdominals also include your obliques, which sit on the side of your body, and your transverse abdominis, which sits underneath your other abdominal muscles and wraps around your waist like a belt.

    The term “abs” has been around for a long time, and is perhaps most often used when discussing aesthetics.

    For example, it’s common to see health and wellness publications offering advice on how to achieve “flat” or “six-pack” abs.

    The long muscle that goes from the bottom of your ribs to the top of your pelvis is called the rectus abdominis.
    phoenix creation/Shutterstock

    What about the core?

    When people talk about the “core”, they are often referring to your abdominals, but also the muscles in your back (your spinal erectors), hips, glutes, pelvic floor, and your diaphragm.

    These are the muscles that can stabilise your spine against movement, and aid in the transfer of force between the upper and lower limbs.

    The term “core” wasn’t commonly used until the early 2000s, when it became synonymous with core training.

    While the exact reason for its surge in popularity isn’t clear, it most likely followed a study published in 1998 that suggested people with lower back pain might have impaired function of their deep abdominal muscles.

    From there, the concept of “core training” entered the mainstream, where it was proposed to reduce lower back pain and improve athletic performance.

    ‘Core’ training only entered the mainstream this century.
    nadia_acosta/Shutterstock

    What does the evidence say?

    When we consider all the muscles that make up the core, it seems obvious they would be important – but it might not be for the reasons you think.

    For example, having good core stability doesn’t necessarily prevent lower back pain, as it’s been touted to do.

    There’s evidence suggesting core stability training, which might include exercises such as planks and dead bugs, can help reduce bouts of lower back pain. However it doesn’t appear to be any more effective than other types of exercise, such as walking or weight training.

    Other research suggests there aren’t any differences in how people with and without lower back pain recruit and use their core muscles.

    In a separate study, improvements in core strength and stability after a nine-week core stability training program were not significantly associated with improvements in pain and function, further questioning this relationship.

    The link between core strength and athletic performance is also unclear.

    A 2016 review found some very small associations between measures of core muscle strength and measures of whole body strength, power and balance. However, because of the design of the studies reviewed, we don’t know whether people who have better strength, power and balance simply have stronger core muscles, or whether stronger core muscles increase strength, power and balance.

    An earlier review summarised the effect of core stability training on measures of athletic performance, including jumping, sprinting and throwing. It concluded this type of training is unlikely to provide substantial benefits to measures of general athletic performance such as jumping and sprinting.

    However, this review also suggested that, given the important role of the abs in torso rotation, strengthening these muscles might have merit in improving performance in sports that involve swinging a bat or throwing a ball.

    This is likely to apply to other sports that involve rapid torso movement as well, such as mixed martial arts and kayaking.

    Stronger abdominal muscles could offer an advantage in sports that involve rotation.
    Lino Khim Medrina/Pexels

    How can you exercise your abs and core?

    There’s good evidence that simply getting stronger by lifting weights can help prevent injuries. Training your core to get stronger should have a similar impact, as long as it’s part of a broader training program.

    We also know having weaker muscles makes you more likely to experience functional limitations and disability in older age. So alongside any other potential benefits, improving core strength with the rest of your body could help keep you fit and healthy as you get older.

    There are plenty of exercises you can do to train your core and abs.

    If you’re new to core training, you might want to start off with some lower-level isolation exercises that don’t involve any movement of the core. These include things like planks, bird dogs, and pallof presses. These are unlikely to cause too much muscle soreness, but will train your core muscles.

    Once you feel like these are going well, you can start moving into some more dynamic exercises such as sit ups, Russian twists and leg raises, where you train your abdominals using a full range of motion.

    Hunter Bennett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What’s the difference between abs and core? One term focuses on aesthetics – and the other on function – https://theconversation.com/whats-the-difference-between-abs-and-core-one-term-focuses-on-aesthetics-and-the-other-on-function-254582

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: The drought is back – we need a new way to help farmers survive tough times

    Source: The Conversation (Au and NZ) – By Linda Botterill, Visiting Fellow, Crawford School of Public Policy, Australian National University

    Australia in 2025 is living up to Dorothea Mackellar’s poetic vision of a country stricken by “drought and flooding rains”.

    The clean-up is underway from the deadly floods in the Hunter and mid-north coast regions of New South Wales. At the same time, large swathes of Victoria, South Australia and Tasmania are severely drought affected due to some of the lowest rainfall on record.

    Do we have the right support arrangements in place to help farmers and communities survive the current dry period?

    Or is there a better way to help primary producers through the tough times, which are predicted to become more frequent and severe under climate change?

    Managing risk

    Drought is not a natural disaster – at least not according to Australia’s National Drought Policy. In 1989, drought was removed from what are now known as the Natural Disaster Relief and Recovery Arrangements.

    The decision was made for several reasons, including the high level of expenditure on drought relief in Queensland. The federal finance minister at the time, Peter Walsh, suggested the Queensland government was using the arrangements as a “sort of National Party slush fund to be distributed to National Party toadies and apparatchiks”.

    The more considered reason was that our scientific understanding of the drivers of Australia’s climate, such as El Niño, suggested drought was a normal part of our environment. Since then, climate modelling points to droughts becoming an even more familiar sight in Australia as a result of global warming.

    So the focus of drought relief shifted from disaster response to risk management.

    Building resilience

    The National Drought Policy announced in 1992 stated drought should be managed like any other business risk.

    Since then, the language of resilience has been added to the mix and the government lists three objectives for drought policy:

    • to build the drought resilience of farming businesses by enabling preparedness, risk management and financial self-reliance
    • to ensure an appropriate safety net is always available to those experiencing hardship
    • to encourage stakeholders to work together to address the challenges of drought.

    Since 1992, various governments have introduced, and tweaked, different programs aimed at supporting drought-affected farmers.

    The most successful program is the Farm Management Deposits Scheme. This has accumulated a whisker under A$6 billion in farmer savings, which are available to be drawn down during drought to support farm businesses.

    Others have come and gone – for example, the much-criticised Exceptional Circumstances Program.

    More help needed

    In 2025, the federal government is using the Future Drought Fund to invest $100 million per year to promote resilience. It also offers support through the Farm Household Allowance and concessional loans for farms and related small businesses.

    Apart from the Farm Management Deposit Scheme and the Farm Household Allowance, these programs do not offer immediate financial assistance to the increasing number of farmers across southern Australia being impacted by drought. If the drought worsens, it is likely there will be increasing calls for greater support.

    This presents the government with a dilemma: it is already investing significantly in the risk and resilience approach to drought, but politically, it is hard to resist cries for help from farmers, who are a highly valued group in our community.

    A better way?

    There is a solution available to government to improve support. It can be done through the provision of “revenue contingent loans” for drought-affected farmers. Financial support would be available to farmers when they need it, consistent with the risk management principles underpinning the national drought policy.

    Our detailed modelling, extending now over 25 years, shows compellingly that revenue-based loans would mean taxpayers spending less on drought arrangements. But the assistance compared with other forms of public sector help would be greater.

    Capacity to repay would be the defining feature of the scheme. A revenue contingent loan is only paid down in periods when the farm is experiencing healthy cash flow. If a farm’s annual financial situation is difficult, no repayments are required.

    These loans would also remove foreclosure risk associated with an inability to repay when times are tough. Loan defaults simply can’t happen, a feature which also takes away the psychological trauma associated with the fear of losing the property due to unforeseen financial difficulties.
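    The mechanism described above can be sketched in a few lines of code. The revenue threshold and repayment rate below are hypothetical figures chosen purely to illustrate how a revenue-contingent repayment works; they are not parameters from the authors' modelling.

```python
# Toy sketch of a revenue-contingent loan: repayments are collected
# only in years when farm revenue exceeds a threshold, and are capped
# at the outstanding balance -- so default is impossible by design.
# The threshold and rate here are illustrative, not proposed values.

def yearly_repayment(revenue, balance, threshold=200_000, rate=0.25):
    """Repay a share of revenue above the threshold, capped at the
    outstanding balance; in a poor year, nothing is due."""
    if revenue <= threshold or balance <= 0:
        return 0.0
    return min(rate * (revenue - threshold), balance)

balance = 50_000.0
for revenue in (150_000, 320_000, 90_000, 500_000):  # mixed seasons
    paid = yearly_repayment(revenue, balance)
    balance -= paid
    print(f"revenue {revenue:>7,}  repaid {paid:>9,.0f}  owing {balance:>9,.0f}")
```

    In the two drought years (revenues below the threshold) nothing falls due; the balance is cleared only out of the good seasons.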

    Good policy

    These benefits would address governments’ main motivation with drought policy, which is risk management. That is because repayment concerns and default prospects would be eliminated. With farming, in which there is great uncertainty, these are very significant pluses for policy.

    Revenue contingent loans are a proper risk management financial instrument that requires low or no subsidies from government. They would complement the Farm Management Deposit Scheme and be an effective replacement for the concessional loans currently on offer.

    A win-win for farmer and taxpayer, alike.

    Linda Botterill has in the past received funding from the Australian Research Council, the Grains Research and Development Corporation, and Rural Industries Research and Development Corporation (now Agrifutures).

    Bruce Chapman has received funding from the Australian Research Council in various years, and was a consultant to the Federal Government’s Department of Education University Accord Enquiry in 2023/24.

    ref. The drought is back – we need a new way to help farmers survive tough times – https://theconversation.com/the-drought-is-back-we-need-a-new-way-to-help-farmers-survive-tough-times-256576


  • MIL-Evening Report: Australia’s first machete ban is coming to Victoria. Will it work, or is it just another political quick fix?

    Source: The Conversation (Au and NZ) – By Samara McPhedran, Principal Research Fellow, Griffith University

    Following a shopping centre brawl in Melbourne at the weekend, Victorian Premier Jacinta Allan announced the state will ban the sale of all machetes from Wednesday.

    In March this year, the Victorian government had already announced that from September 1 machetes would become a “prohibited weapon”.

    Prohibited weapons are items considered inappropriate for general possession and use without a police commissioner’s approval or a Governor in Council Exemption Order.

    This means machetes will be added to the list of things – such as swords, crossbows, slingshots, pepper spray and about 40 other items – that are essentially banned.

    Possession of a prohibited item can result in penalties of two years imprisonment or a fine of more than $47,000.

    Victoria is the first state in Australia to outright ban machetes. In other jurisdictions, machetes (like knives) may be used for lawful purposes, and are “controlled” or “restricted” – meaning you need a reasonable excuse or valid reason for possessing one.

    Most jurisdictions (except Tasmania and the Northern Territory) prohibit sales to minors.

    Will there be exemptions?

    Allan said the sales ban will have no exceptions, meaning nobody will be able to purchase a machete.

    However, machetes are a useful tool, particularly for agricultural purposes, and outdoor uses such as camping.

    When the new laws come into effect in September, people will be able to apply for a special “commissioner’s approval” to possess a machete. The exact details of who may be granted an exemption, and under what circumstances, are not yet clear.

    Nor is it clear whether people will have to, for example, pay for a permit to own a machete, or what measures people may have to take to prevent unauthorised access or theft.

    How much of a problem is knife crime in Australia?

    Despite alarming headlines and political rhetoric about a knife crime epidemic, it is hard to say exactly how much of a problem knife crime is.

    Statistics about weapon use and unlawful possession are not always disaggregated by type of weapon.

    Crime statistics are notoriously slippery, and what looks like a “crisis” can often be the result of changes in policing practices. For instance, when police run an intensive operation searching for knives in public places, they are more likely to find knives in public places. This does not necessarily mean there are more people out there carrying knives.

    The one crime where statistics are fairly clear is homicide: knives or other sharp instruments have long been the most common weapon used in Australia.

    The actual number of homicides involving knives or sharp instruments has stayed relatively stable over time. When you take into account the increase in how many people live in Australia, the rate per head of population has fallen.

    It is tempting to think a machete ban would reduce these figures even more. Unfortunately, violence prevention is not that simple.

    Homicides that involve people using their hands and feet have declined markedly over time. Why has this “method”, which is available to anybody, fallen so much? The answer is: nobody really knows.

    This tells us we need to look beyond types of weapons.

    Will the ban achieve anything?

    Violence is complex and simple “solutions” may make people feel safe (at least temporarily) but seldom deliver real results over the longer term.

    It’s easy for governments to ban things, which is why they do it so often. But we should pay close attention to what Victorian Police Minister Anthony Carbine said in March:

    This is Australia’s first machete ban, and we agree with police that it must be done once and done right. It took the UK (United Kingdom) 18 months – we can do it in six.

    Lawmaking should never be a race. Nor should politicians be mere mouthpieces doing what police tell them.

    Police are the ones we turn to for protection when violence breaks out, but this does not mean they are the only ones we should go to when we are looking for the most effective ways to deal with problems.

    Tackling violence takes serious commitment to complex and intensive programs that focus on the root causes, particularly among at-risk families and disadvantaged, marginalised youth.

    This is hard work that takes a long time, includes many different stakeholders, and seldom sways votes. Focusing on the choice of weapon is simply a distraction.

    There is no question the sight of machete-wielding youths storming through a busy shopping centre is terrifying. People should be able to go about their business without fearing they will be attacked.

    But reducing violence takes a lot more than banning one particular weapon, as Victoria will likely find out.

    Dr Samara McPhedran does not work for, consult to, own shares in or receive funding from any company or organisation that might benefit from this article.

    ref. Australia’s first machete ban is coming to Victoria. Will it work, or is it just another political quick fix? – https://theconversation.com/australias-first-machete-ban-is-coming-to-victoria-will-it-work-or-is-it-just-another-political-quick-fix-257541

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: A not-so-modern epidemic: what 17th-century nuns can teach us about coping with loneliness

    Source: The Conversation (Au and NZ) – By Claire Walker, Associate Professor, School of Historical and Classical Studies, University of Adelaide

    La Religieuse Tenant La Sainte Croix (The Nun Holds the Cross), Jacques Callot, French,1621–35. The Metropolitan Museum of Art

    Is loneliness a modern epidemic as we are so often told? Did people in the past suffer similar feelings of isolation?

    The word “loneliness” was not common before the 19th century. Cultural historian Fay Bound Alberti argues it was rarely used before 1800.

    This does not mean people didn’t feel alone. They just had different names for it – and they didn’t always think it was bad. Modern people living hectic lives in bustling cities often yearn for peace and tranquillity; so did our forebears.

    From the hermits of the early Christian church escaping society for lives of solitary prayer, to medieval anchorites in secluded cells, isolation was a prerequisite for spiritual success.

    But were isolated monks, nuns and hermits also lonely, as we would understand the word today? And do early modern nuns offer solutions for our own loneliness epidemic?

    Searching for solitude

    Early Christian religious thinkers and medieval churchmen viewed voluntary loneliness positively, with successful practitioners becoming saints. But religious solitude was not without its problems.

    Holy recluses, far from escaping society, were pursued for spiritual advice. Some, like Simeon Stylites (390–459), went to extraordinary measures, living atop a pillar near Aleppo for 30-odd years to achieve solitude.

    Monasticism provided an alternative. Monastic rules, like that of Benedict of Nursia (480–547), institutionalised isolation. In Benedictine monasteries, solitude was created through seclusion from society, strict silence, and prohibition of close friendships.

    Yet, like hermits, monks and nuns couldn’t escape the world completely. Monasteries constituted vital spiritual resources, providing multiple services and conducting business for wider society.

    Nuns at Work, Follower of Alessandro Magnasco (Italian, Milanese, first half 18th century).
    The Metropolitan Museum of Art

    Over the centuries, reforming bishops believed there was too much interaction between monasteries and the wider community. This led to repeated church reforms from the 10th century onwards to secure separation.

    Male members of the clergy were particularly worried about nuns who were considered “less capable” of maintaining holy solitude. As a result, women had to observe strict enclosure behind convent walls, limiting their economic and spiritual capacity. Reforms in the 16th century upheld nuns’ incarceration.

    Many women resisted, but others embraced isolation as spiritually liberating.

    Isolation in exile

    Early modern English convents, exiled in Europe after Henry VIII’s dissolution of the monasteries, shed light on nuns’ experiences of loneliness.

    The convents were subject to traditional rules of enclosure and silence. To become nuns, women left their homeland, family and friends. They joined English houses, so they were not alone among strangers, but they had to remain emotionally distant from one another, despite living in a community where they did everything together.

    Women wanting spiritual fulfilment often sought additional solitude.

    Benedictine mystic Gertrude More (1606–33) praised prescribed periods of silence because in them she might hear her Lord’s whispers.

    Carmelite prioress Teresa of Jesus Maria Worsley (1601–42) took time from her busy administrative role and hid from the other nuns to pray in solitude.

    The Nun in Count Burckhardt, from the periodical Once a Week. After James McNeill Whistler, American. Associated with Dalziel Brothers, British. September 27 1862.
    The Metropolitan Museum of Art

    Not all women found seclusion and silence so fulfilling, however, with some experiencing bouts of spiritual doubt and poor mental health. Many missed their family and homeland.

    This was particularly common among young sisters and those in convent schools. In the 1660s, Catherine Aston returned to England to recover after suffering poor health and depression.

    Alone in a crowd

    Nuns’ diverse experiences of monastic solitude reflect modern urban loneliness.

    In 1812 Lord Byron expressed the contradictory nature of loneliness in the poem Childe Harold, juxtaposing the positive solitary contemplation of nature with its negative counterpart – aloneness “midst the crowd”.

    In the present day many people feel alone in cities, even domestic households, as Olivia Laing and Keith Snell have shown.

    How might this be countered? Do early modern nuns offer solutions?

    A study of 21st century Spanish monks and nuns found monastic training, prayer and silence create feelings of spiritual satisfaction and purpose which lessens loneliness.

    Prayer is not the answer for everyone, because modern isolation has multiple causes in a largely secular society. There are alternative paths to meditation, however, such as yoga or mindfulness, which can provide feelings akin to monks’ and nuns’ “spiritual satisfaction”.

    Similarly, the nuns’ sense of “purpose” might be achieved through nostalgia. Nostalgia is the longing for an idealised and unobtainable past – a time when life was better. Research by psychologists suggests nostalgia can be beneficial in counteracting loneliness, even enabling forward-looking and proactive behaviours.

    Nuns at Mass, Amedor, Spanish, 1900.
    Getty Museum

    This was certainly true for the nuns exiled in Europe following Henry VIII’s abolition of monasticism in England. They dreamt of a future when their convents would return to England, family and friends. All nuns prayed both communally and in private for this outcome.

    Some went further, engaging in missionary work and political intrigue to achieve their goal.

    We cannot know whether this stifled loneliness, but by combining the benefits of meditation and activism it likely fostered a shared sense of purpose.

    Just as Gertrude More and Teresa of Jesus Maria Worsley found solitude essential for spiritual satisfaction, activist nuns believed they might reverse the English reformation from their exiled convents. Solitude, prayer and political engagement gave them a sense of purpose.

    Everyone’s situation is unique. There is no single solution for resolving isolation in the contemporary world. But the knowledge that it can be positive is perhaps a step towards countering the modern epidemic.

    Claire Walker has received funding from the Australian Research Council.

    ref. A not-so-modern epidemic: what 17th-century nuns can teach us about coping with loneliness – https://theconversation.com/a-not-so-modern-epidemic-what-17th-century-nuns-can-teach-us-about-coping-with-loneliness-249487

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Actually, Gen Z stand to be the biggest winners from the new $3 million super tax

    Source: The Conversation (Au and NZ) – By Brendan Coates, Program Director, Housing and Economic Security, Grattan Institute

    As debate rages about the federal government’s plan to lift the tax on earnings on superannuation balances over A$3 million, it’s worth revisiting why we offer super tax breaks in the first place, and why they need to be reformed.

    Tax breaks on super contributions mean less tax is paid on super savings than other forms of income. These tax breaks cost the federal budget nearly $50 billion in lost revenue each year.

    These tax breaks boost the retirement savings of super fund members. They also ensure workers don’t pay punitively high long-term tax rates on their super, since the impact of even low tax rates on savings compounds over time.
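To see why even a low annual tax rate compounds into a large long-term cost, here is a small illustrative calculation (the starting balance, return and time horizon are hypothetical, chosen only to show the mechanics):

```python
def final_balance(principal, annual_return, tax_rate, years):
    """Grow a balance for `years`, taxing each year's earnings at `tax_rate`."""
    after_tax_return = annual_return * (1 - tax_rate)
    return principal * (1 + after_tax_return) ** years

untaxed = final_balance(100_000, 0.07, 0.00, 30)
taxed = final_balance(100_000, 0.07, 0.15, 30)

print(f"untaxed balance after 30 years: ${untaxed:,.0f}")
print(f"taxed at 15% each year:         ${taxed:,.0f}")
# Each year's tax shrinks the base for next year's growth, so the
# share of total gains lost is well above the headline 15% rate.
print(f"share of gains lost: {(untaxed - taxed) / (untaxed - 100_000):.0%}")
```

In this hypothetical, a 15% annual tax on earnings ends up consuming close to 30% of the total investment gains over 30 years, which is why even low tax rates on savings are considered punitive over long horizons.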

    But they disproportionately flow to older and wealthier Australians.

    Two thirds of the value of super tax breaks benefit the top 20% of income earners, who are already saving enough for their retirement.

    Few retirees draw down on their retirement savings as intended, and many are net savers – their super balance continues to grow for decades after they retire.

    By 2060, Treasury expects one-third of all withdrawals from super will be via bequests – up from one-fifth today.

    Superannuation in Australia was intended to help fund retirements. Instead, it has become a taxpayer-subsidised inheritance scheme.

    The tax breaks aren’t just inequitable; they are economically unsound. Generous tax breaks for super savers mean other taxes (such as income and company taxes) must be higher to make up the forgone revenue. That means the burden falls disproportionately on younger taxpayers.

    The government should go further

    The government’s plan to increase the tax rate on superannuation earnings for balances exceeding $3 million from 15% to 30% is one modest step towards fixing these problems. The tax would only apply to the amount over $3 million, not the entire balance.

    This reform will affect only the top 0.5% of super account holders – about 80,000 people – and save the budget more than $2 billion in its first full year.

    Claims that not indexing the $3 million threshold will result in the tax affecting most younger Australians, or that it will somehow disproportionately affect younger generations, are simply nonsense.

    Rather than being the biggest losers from the lack of indexation, younger Australians are the biggest beneficiaries. It means more older, wealthier Australians will shoulder some of the burden of budget repair and an ageing population. Otherwise, younger generations would bear this burden alone.

    The facts speak for themselves: a mere 0.5% of Australians have more than $3 million in their super, and 85% of those are aged over 60.

    Even in the unlikely scenario where the threshold remains fixed until 2055 – or for ten consecutive parliamentary terms – it would still only affect the top 10% of retiring Australians. Treasurer Jim Chalmers has rightly pointed out that it is unlikely the threshold will never be lifted.

    Far from abandoning the proposed $3 million threshold, the government should go further and drop the threshold to $2 million, and only then index it to inflation, saving the budget a further $1 billion a year.

    There is no rationale for offering such generous earnings tax breaks on super balances between $2 million and $3 million.

    At the very least, if the $3 million threshold is maintained, it should not be indexed until inflation naturally reduces its real value to $2 million, which is estimated to occur around 2040.

    Sure, it’s complicated

    Levying a higher tax rate on the earnings of large super balances is complicated by the fact existing super earnings taxes are levied at the fund level, not on individual member accounts.

    And it’s true that levying a 15% surcharge on the implied earnings of the account over the year (the change in account balance, net of contributions and withdrawals) will impose a tax on unrealised capital gains, or paper profits.
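The mechanics described above can be sketched as follows (a simplified illustration, not the legislated formula; the figures and the function name `extra_super_tax` are hypothetical):

```python
THRESHOLD = 3_000_000

def extra_super_tax(opening, closing, withdrawals, contributions,
                    threshold=THRESHOLD, rate=0.15):
    """Sketch of a 15% surcharge on the implied earnings attributable
    to the part of the balance above the threshold."""
    # Implied earnings: change in balance over the year, adding back
    # withdrawals and netting off contributions, so only investment
    # growth (including unrealised gains) is counted.
    earnings = closing - opening + withdrawals - contributions
    if closing <= threshold or earnings <= 0:
        return 0.0
    # Only the share of the balance sitting above the threshold
    # attracts the extra tax.
    proportion_over = (closing - threshold) / closing
    return rate * earnings * proportion_over

# A $4 million balance that grew by $300,000 over the year:
print(extra_super_tax(opening=3_700_000, closing=4_000_000,
                      withdrawals=0, contributions=0))  # → 11250.0
```

In this example a quarter of the closing balance sits above the $3 million threshold, so only a quarter of the year’s implied earnings attracts the extra 15%.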

    Taxing capital gains as they build up removes incentives to “lock in” investments to hold onto untaxed capital gains, as the Henry Tax Review recognised. But it can create cash flow problems for some self-managed super fund members who hold assets such as business premises or a farm in their fund.

    Yet there are seldom easy answers when it comes to tax changes.

    Most people with such substantial super balances are retirees who already maintain enough liquid assets to meet the minimum drawdown requirements.

    Indeed, self-managed super funds are legally obligated to have investment strategies that ensure liquidity and the ability to meet liabilities.

    In any case, the tax does not have to be paid from super. Australians with large super balances typically earn as much income from investments outside super. And the wealthiest 10% of retirees today rely more on income from outside super than income from super.

    Good policy is always the art of the compromise

    Australia faces the twin challenges of big budget deficits and stagnant productivity. Tax reform will be needed to respond to both.

    Good public policy, like politics, always requires some level of compromise.

    Super tax breaks should exist only where they support a policy aim. And on balance, trimming unneeded super tax breaks for the wealthiest 0.5% of Australians would make our super system fairer and our budget stronger.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Actually, Gen Z stand to be the biggest winners from the new $3 million super tax – https://theconversation.com/actually-gen-z-stand-to-be-the-biggest-winners-from-the-new-3-million-super-tax-257450

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Who really benefits from smart tech at home? ‘Optimising’ family life can reinforce gender roles

    Source: The Conversation (Au and NZ) – By Indra Mckie, Postdoctoral Researcher in Collaborative Human-AI Interaction Culture, University of Technology Sydney

    Ashlifier/Shutterstock

    Have you heard of the “male technologist” mindset? It may sound familiar, and you may even know such people personally.

    Design researchers Turkka Keinonen and Nils Ehrenberg have defined the male technologist as someone who is obsessed with concerns about energy, efficiency and reducing labour.

    This archetype became apparent in my PhD research when I interviewed 12 families about their use of early domestic robots and smart home devices such as Amazon Alexa and Google Home. One father over-engineered his smart home so much that his kids struggled to turn the lights on and off.

    The male technologist in the home, as seen in my research, reflects wider trends of the Silicon Valley “tech bro” archetype, the techno-patriarchy, and the growing influence of a tech oligarchy in the Western world.

    The male technologist often complicates and overcompensates with technology, raising the question: are these real problems tech can solve, or just quick fixes masking deeper issues?

    Long-standing patriarchal systems shape the gendered division of domestic labour.
    Andrea Piacquadio/Pexels

    It’s not about making men feel guilty

    The term “male technologist” isn’t about making men feel guilty for using technology to innovate. Anyone can adopt this mindset. It can even apply to institutions that prioritise innovation and efficiency over emotional insight, lived experience or community-based ways of creating change.

    It’s a reflection of how a masculine drive to solve surface-level problems can come before addressing patriarchal systems that have shaped the long-standing gendered division of domestic labour and “mental load”.

    Mental load is the invisible, ongoing effort of planning, organising and managing daily life that often goes unnoticed but is essential to keeping things running.

    Take one of my research participants, Hugo (name changed for privacy). A father of two, Hugo embodies this male technologist mindset by creating “business scenarios” to solve his family’s problems with smart home automation.


    Indra Mckie/The Conversation

    Treating family life like a system to optimise, Hugo noticed his wife looking stressed while cooking. So, he installed a smart clock with Alexa in the kitchen to help her manage multiple timers.

    Hugo saw it as an empathetic solution, tailored to the way she liked to cook. But instead of sharing the load of this domestic task, he “engineered” around it, offloading responsibility to smart devices.

    Smart home tech promises to save time, but it hasn’t solved who does what at home. Instead, it hands more power to those with digital know-how, letting them automate tasks they may never have done or fully understood in the first place.

    Typically, these tend to be men. A recent survey by Kaspersky found 72% of men set up their family’s smart devices, compared with 47% of women.

    Unfortunately, a recent Australian survey found women still do more unpaid domestic work than men. Even in households where women have full-time jobs, they spend almost four hours more on household chores per week than men do.

    Who really benefits in a smart home

    Amazon first released Alexa back in 2014, with Apple and Google quickly following with their own smart home speakers. In the past decade, some people have adopted the hype of the “smart home” to make life easier by controlling technology without needing to get off the couch.

    But smart technology can also affect access to shared spaces, create new forms of control over things and people in the home, and constrain human interactions. And it can be set up to reinforce the existing hierarchy within the household.


    Indra Mckie/The Conversation

    By his own admission, Hugo has over-engineered the home to the point where his children struggle to turn the lights on and off, having disabled the physical switches in favour of voice commands.

    My research looked at how automation is changing care giving and acts of service in the home. With “compassionate automation”, someone could use smart technology to support loved ones in thoughtful ways, such as setting up smart home routines or reminders to make daily life easier.

    But even when it comes from a place of care, tech-based help is not the same as human care. It may not always feel meaningful to the person receiving or providing it. As another participant in my research put it:

    I think there are still human interactions […] that you probably don’t want AI to mediate for you.


    Indra Mckie/The Conversation

    So what is the alternative to a male technologist mindset? Feminist and queer technology studies offer a different lens. Researchers in these fields argue our interactions with technology are never neutral; they are shaped by gender, power and cultural norms.

    When we recognise this, we can imagine ways of designing and using tech in ways that emphasise care and relationships. Instead of setting up a smart timer in the kitchen, the technologist could ask his wife what she’s cooking and join her, using the voice assistant together to follow a recipe step by step.

    The ultimate fantasy of the male technologist is more toys to solve domestic labour problems at home.
    Gordenkoff/Shutterstock

    Looking ahead to the future of smart homes

    As Alexa+ rolls out later this year with a “smarter” generative AI brain, Google increases Gemini integration into its Home app, and tech companies race to build humanoid robots that can cook dinner and fold laundry, we’re seeing the ultimate fantasy of the male technologist come to life: more toys to presumably solve the problems of domestic labour at home.

    But if men are now taking on more of the digital load, will the mental load finally shift too? Or will they continue to automate the easy, visible tasks while the emotional and cognitive labour still goes unseen and unshared?

    Elon Musk has declared plans to launch several thousand Optimus robots – Tesla’s bid into the humanoid robot race. He expects a new market for personal humanoid robots to explode, generating US$10 trillion in revenue over the long term and potentially becoming the most valuable part of Tesla’s business.

    But as homes get “smarter,” we have to ask: how is this reshaping family dynamics, relationships and domestic responsibility?

    It’s important to consider if outsourcing chores to technology really is about easing the load, or just engineering our way around it without addressing the deeper mental and relational work of household labour.

    Indra Mckie received the UTS Research Excellence Scholarship to complete her PhD research at the University of Technology Sydney.

    ref. Who really benefits from smart tech at home? ‘Optimising’ family life can reinforce gender roles – https://theconversation.com/who-really-benefits-from-smart-tech-at-home-optimising-family-life-can-reinforce-gender-roles-256477

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Images of Gaza’s starving babies have gone round the world. This is what malnutrition does in the first 1,000 days of life

    Source: The Conversation (Au and NZ) – By Nina Sivertsen, Associate Professor, College of Nursing and Health Sciences, Flinders University

    A 5-month-old diagnosed with malnutrition being treated at Nasser Hospital in Khan Yunis in Gaza, May 2025. Anadolu/Getty

    Last week, the United Nations warned more than 14,000 babies would die of malnutrition in 48 hours if Israel continued to block aid from entering Gaza.

    After the figure was widely reported, that timeline has been walked back, with a UN spokesperson clarifying the projection is for the next 11 months.

    The UN projects that between April 2025 and March 2026, there will be 71,000 cases of acute malnutrition among children under five, including 14,100 severe cases.

    Severe acute malnutrition means a child is extremely thin and at risk of dying.

    An estimated 17,000 breastfeeding and pregnant women will also require treatment for acute malnutrition during this time.

    Starvation and malnutrition are harmful for anyone. But for infants the impact can be profound and lasting.

    What is malnutrition?

    In infants and young children, malnutrition means they have a height, weight and head circumference that don’t match standard charts, due to a lack of proper nutrition.

    Nutritional deficiencies are especially common among young children and pregnant women.

    The human body needs 17 essential minerals. Deficiencies in zinc, iron and iodine are the most dangerous, linked to a higher risk of infants dying or developing brain damage.

    When malnutrition is acute or severe, infants and young children will lose weight because they’re not getting enough food, and because they’re more susceptible to illness and diarrhoea.

    This leads to wasting.

    A child experiencing wasting has lost significant weight or fails to gain weight, resulting in a dangerously low weight-for-height ratio.

    A persistent lack of adequate food leads to chronic malnutrition, or stunting, where growth and development is impaired.
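These definitions rest on standard growth-chart cut-offs and can be sketched as a simple classification (an illustrative sketch using the commonly used z-score thresholds of −2 and −3; the function names are ours):

```python
def classify_wasting(weight_for_height_z):
    """Classify wasting from a weight-for-height z-score."""
    if weight_for_height_z < -3:
        return "severe acute malnutrition"
    if weight_for_height_z < -2:
        return "moderate acute malnutrition"
    return "not wasted"

def is_stunted(height_for_age_z):
    """Chronic malnutrition (stunting): height-for-age well below standard charts."""
    return height_for_age_z < -2

print(classify_wasting(-3.4))  # → severe acute malnutrition
print(is_stunted(-2.5))        # → True
```

A z-score of −3 means a child is three standard deviations below the median of the reference growth charts, which is why severe acute malnutrition describes a child who is extremely thin and at risk of dying.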

    Risk of infections and mortality

    Malnourished infants have weakened immune systems. This makes them more vulnerable to developing infections, due to smaller organs and deficits in lean mass. Lean mass is the body’s weight excluding fat and is crucial for supporting healthy growth, strength and overall development.

    When children are starving, they are much more likely to die from common illnesses such as diarrhoea and pneumonia.

    Infections can make it harder to absorb nutrients, creating a dangerous cycle and worsening malnutrition.

    Chronic malnutrition affects the brain

    The human brain develops extraordinarily rapidly during the first 1,000 days of life (from conception to age two). During this time, adequate nutrition is essential.

    Children’s developing brains are more likely to be affected by nutritional deficiencies than adults.

    When prolonged, malnutrition may lead to structural brain changes, including a smaller brain and less myelin – the protective membrane that wraps around nerve cells and helps the brain send messages.

    Chronic malnutrition can affect brain functions and processes such as thinking, language, attention, memory and decision-making.

    These neurological impacts can cause life-long issues.

    Can brain damage be permanent?

    Yes, especially when malnutrition occurs during crucial periods of brain development, such as the first 1,000 days.

    However, some effects are reversible. Early, intensive interventions – such as access to nutrient-rich food and medicines to treat hypoglycaemia (low blood sugar) and fight infections – can help children catch up on growth and brain development.

    For example, one review of studies involving undernourished preschool children found their cognitive abilities, such as concentration, reasoning and emotional regulation improved somewhat when they were given iron supplements and multivitamins.

    However malnutrition during the crucial window under two years old increases the risk of lifelong disabilities.

    It’s also important to note recovery is more likely in an environment where nutritious food is available and children’s emotional needs are taken care of.

    In Gaza, Israel’s military operations have destroyed 94% of hospital infrastructure and humanitarian aid remains severely restricted. The conditions necessary for children’s recovery are out of reach.

    Pregnant and breastfeeding mothers

    Severe maternal malnutrition can increase the mother and child’s risk of dying or experiencing complications during pregnancy.

    When a breastfeeding mother is malnourished, she will produce less breastmilk and it will be lower quality. Deficiencies in iron, iodine, zinc and vitamins A and D will compromise the mother’s health and reduce the nutritional value of breast milk. This can contribute to poor infant growth and development.

    Starved mothers may experience fatigue, poor health and psychological distress, making it challenging to maintain breastfeeding.

    Other organ impacts

    Data from those born during the Dutch famine of 1944-45 have helped us understand the lifelong health impacts on children conceived and born while their mothers were starving.

    Among this group, malnutrition affected the development and function of many of the children’s organs, including the heart, lungs and kidneys.

    This group also had higher rates of schizophrenia, depression and anxiety, and lower performance in cognitive testing.

    They also had a higher risk of developing chronic degenerative diseases (such as cardiovascular disease and kidney failure) and dying prematurely.

    Is the damage irreversible?

    Recovery is possible. But it depends on how severely malnourished the child is, and when and what kind of support they receive.

    Evidence shows children remain vulnerable and have a higher risk of dying even after being treated for complications from severe acute malnutrition.

    Effective interventions include:

    • nutritional rehabilitation (giving the child nutrient-rich foods, specialised feeding, and addressing underlying deficiencies)

    • breastfeeding support for mothers

    • providing rehabilitation and health care in the community (so families and children can return to everyday routines).

    This seems difficult if not impossible in Gaza, where Israel’s blockade on aid and ongoing military operations mean safety and infrastructure are severely compromised.

    Repeated or prolonged episodes of malnutrition increase the risk of lasting developmental harm.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Images of Gaza’s starving babies have gone round the world. This is what malnutrition does in the first 1,000 days of life – https://theconversation.com/images-of-gazas-starving-babies-have-gone-round-the-world-this-is-what-malnutrition-does-in-the-first-1-000-days-of-life-257462

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: What’s the difference between abs and core? One term focuses on aesthetics – and the other on function

    Source: The Conversation – Global Perspectives – By Hunter Bennett, Lecturer in Exercise Science, University of South Australia

    Maksim Goncharenok/Pexels

    You’ve probably heard the terms “abs” and “core” used in social media videos, Pilates classes, or even by physiotherapists.

    Given they seem to refer to the same general area of your body, you might have wondered what the difference is.

    When people talk about “abs”, they’re often referring to the abdominal muscles you can see. Conversely, the term “core” is used to describe a broader group of muscles in the context of function, rather than aesthetics.

    While abs and core are often spoken about separately, there’s a lot of overlap between them.

    What are abs?

    The term “abs” is short for abdominal muscles. These are the muscles that run along the front and side of your stomach.

    When someone talks about getting a six-pack, they’re usually referring to toning the rectus abdominis, the long muscle that goes from the bottom of your ribs to the top of your pelvis.

    Your abdominals also include your obliques, which sit on the side of your body, and your transverse abdominis, which sits underneath your other abdominal muscles and wraps around your waist like a belt.

    The term “abs” has been around for a long time, and is perhaps most often used when discussing aesthetics.

    For example, it’s common to see health and wellness publications offering advice on how to achieve “flat” or “six-pack” abs.

    The long muscle that goes from the bottom of your ribs to the top of your pelvis is called the rectus abdominis.
    phoenix creation/Shutterstock

    What about the core?

    When people talk about the “core”, they are often referring to your abdominals, but also the muscles in your back (your spinal erectors), hips, glutes, pelvic floor, and your diaphragm.

    These are the muscles that can stabilise your spine against movement, and aid in the transfer of force between the upper and lower limbs.

    The term “core” wasn’t commonly used until the early 2000s, when it became synonymous with core training.

    While the exact reason for its surge in popularity isn’t clear, it most likely followed a study published in 1998 that suggested people with lower back pain might have impaired function of their deep abdominal muscles.

    From there, the concept of “core training” entered the mainstream, where it was proposed to reduce lower back pain and improve athletic performance.

    ‘Core’ training only entered the mainstream this century.
    nadia_acosta/Shutterstock

    What does the evidence say?

    When we consider all the muscles that make up the core, it seems obvious they would be important – but it might not be for the reasons you think.

    For example, having good core stability doesn’t necessarily prevent lower back pain, as it’s been touted to do.

    There’s evidence suggesting core stability training, which might include exercises such as planks and dead bugs, can help reduce bouts of lower back pain. However, it doesn’t appear to be any more effective than other types of exercise, such as walking or weight training.

    Other research suggests there aren’t any differences in how people with and without lower back pain recruit and use their core muscles.

    In a separate study, improvements in core strength and stability after a nine-week core stability training program were not significantly associated with improvements in pain and function, further questioning this relationship.

    The link between core strength and athletic performance is also unclear.

    A 2016 review found some very small associations between measures of core muscle strength and measures of whole body strength, power and balance. However, because of the design of the studies reviewed, we don’t know whether people who have better strength, power and balance simply have stronger core muscles, or whether stronger core muscles increase strength, power and balance.

    An earlier review summarised the effect of core stability training on measures of athletic performance, including jumping, sprinting and throwing. It concluded this type of training is unlikely to provide substantial benefits to measures of general athletic performance such as jumping and sprinting.

    However, this review also suggested that, given the important role of the abs in torso rotation, strengthening these muscles might have merit in improving performance in sports that involve swinging a bat or throwing a ball.

    This is likely to apply to other sports that involve rapid torso movement as well, such as mixed martial arts and kayaking.

    Stronger abdominal muscles could offer an advantage in sports that involve rotation.
    Lino Khim Medrina/Pexels

    How can you exercise your abs and core?

    There’s good evidence that simply getting stronger by lifting weights can help prevent injuries. Training your core to get stronger should have a similar impact, as long as it’s part of a broader training program.

    We also know having weaker muscles makes you more likely to experience functional limitations and disability in older age. So alongside any other potential benefits, improving core strength with the rest of your body could help keep you fit and healthy as you get older.

    There are plenty of exercises you can do to train your core and abs.

    If you’re new to core training, you might want to start off with some lower-level isolation exercises that don’t involve any movement of the core. These include things like planks, bird dogs and Pallof presses. These are unlikely to cause too much muscle soreness, but will train your core muscles.

    Once you feel like these are going well, you can start moving into some more dynamic exercises such as sit ups, Russian twists and leg raises, where you train your abdominals using a full range of motion.

    Hunter Bennett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What’s the difference between abs and core? One term focuses on aesthetics – and the other on function – https://theconversation.com/whats-the-difference-between-abs-and-core-one-term-focuses-on-aesthetics-and-the-other-on-function-254582

    MIL OSI – Global Reports

  • MIL-Evening Report: Israel’s new aid delivery system for Gaza is sparking outrage. Why is it so problematic?

    Source: The Conversation (Au and NZ) – By Amra Lee, PhD candidate in Protection of Civilians, Australian National University

    Some 2.1 million Gazans are facing critical hunger levels, with many at risk of famine following Israel’s 11-week blockade on aid intended to pressure Hamas.

    According to the United Nations, 57 children have already died from malnutrition since the aid blockade began on March 2. A further 14,000 children under 5 years old are at risk of severe cases of malnutrition over the next year.

    Last week, Israeli Prime Minister Benjamin Netanyahu permitted a limited number of aid trucks into Gaza amid increasing pressure from allies who have drawn a line at images of starving children.

    However, Israel is controversially planning to transfer responsibility for distributing aid in Gaza to a new system that would sideline the UN and other aid agencies that have been working there for decades.

    UN Secretary-General Antonio Guterres swiftly rejected Israel’s new aid distribution system in Gaza, saying it breaches international law and humanitarian principles.

    In a joint statement, two dozen countries, including the UK, many European Union member states, Australia, Canada and Japan, have supported the UN’s position on the new model. The signatories said it won’t deliver aid effectively at the scale required, and would link aid to political and military objectives.

    The UK, Canada and France have further threatened to take “concrete actions” to pressure Israel to cease its military offensive and lift restrictions on aid.

    And in another blow to the credibility of the new system, the head of the newly established Gaza Humanitarian Foundation, which will oversee the distribution of aid, resigned on Monday. He cited concerns over a lack of adherence to “humanitarian principles”.

    So, how would this new aid delivery system work, and why is it so problematic?

    A military-led system with deep flaws

    Israel has relied on unsubstantiated claims of large-scale aid diversion by Hamas to justify taking control over aid delivery in Gaza. The UN and its humanitarian partners continue to refute such claims, publicly sharing details of their end-to-end monitoring systems.

    Yet, the new aid delivery initiative is vague on important details.

    Several reports have revealed the plan would establish four secure distribution sites for aid under Israeli military control in southern and central Gaza.

    Security would be provided by private military contractors, such as Safe Reach Solutions, run by a former CIA officer, while the Gaza Humanitarian Foundation would oversee the distribution of food.

    There is little clarity beyond this on who is behind the new system and who is funding it.

    The initiative has provoked strong reactions from the UN and the wider humanitarian aid system.

    Senior aid officials have underlined the fact the international aid system cannot support a military-led initiative that would breach international law and be incompatible with humanitarian principles of neutrality, impartiality and independence.

    There are also concerns the four distribution hubs would require individuals to travel long distances to collect and carry heavy packages. This could leave female-headed households, people with disabilities, those who are ill and the elderly at greater risk of exclusion and exploitation.

    In addition, a leaked UN memo reportedly expressed concern over UN involvement in the initiative, saying the organisation could be “implicated in delivering a system that falls short of Israel’s legal responsibilities as an occupying power”.

    There are further concerns the UN could be implicated in atrocity crimes, including a risk of genocide through its participation in the system, setting a dangerous precedent for future crises.

    Tom Fletcher, the UN relief chief, has called the plan “a deliberate distraction” and “a fig leaf for further violence and displacement”.

    Other rights groups have condemned the mandatory collection of biometric data, including facial recognition scans, at the distribution sites. This would make aid conditional on compliance with surveillance. It would also expand Israel’s controversial use of facial recognition technology to track and monitor Palestinians throughout Gaza.

    And famine expert Alex de Waal claims Israel has “taken a page from the colonial war handbooks” in weaponising food aid in pursuit of military victory.

    He argues the planned quantities of food aid will be insufficient and lack the specialised feeding necessary for malnourished children, in addition to clean water and electricity.

    What has not been stated, but can be inferred from the strong resistance to a system lacking humanitarian expertise, is the lack of good faith on Israel’s part. The Israeli government continues to pursue an elusive military victory at the expense of the rules and norms intended to preserve humanity in war.

    Wider pattern of behaviour

    The UN’s rebuke of the plan should be interpreted through a wider pattern of Israeli government behaviour undermining the international aid system and its role in upholding respect for humanitarian principles.

    These fundamental principles include respect for humanity, neutrality, impartiality and operational independence. As the joint statement by 24 nations on aid to Gaza this month said:

    Humanitarian principles matter for every conflict around the world and should be applied consistently in every war zone.

    International humanitarian law requires member states to respect – and ensure respect – for the rules of war. This includes taking all feasible measures to influence the parties engaged in a conflict to respect humanitarian law.

    Likewise, the Genocide Convention requires member states to take measures to prevent and punish genocide beyond their jurisdictions.

    As Fletcher, the UN relief chief, reminded the UN Security Council earlier this month, this hasn’t been done in past cases of large-scale violations of international human rights, such as in Srebrenica (in the former Yugoslavia) and Rwanda.

    He said reviews of the UN’s conduct in cases like these

    […] pointed to our collective failure to speak to the scale of violations while they were committed.

    While humanitarians are best placed to deliver aid, greater collective political action is what’s needed. Pressure now falls on all UN member states to use their levers of influence to protect civilians and prevent the further weaponisation of aid at this critical time.

    Amra Lee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Israel’s new aid delivery system for Gaza is sparking outrage. Why is it so problematic? – https://theconversation.com/israels-new-aid-delivery-system-for-gaza-is-sparking-outrage-why-is-it-so-problematic-257347

    MIL OSI AnalysisEveningReport.nz

  • MIL-OSI Global: Rock art and tomb discoveries in Morocco reveal ancient connections to the wider world

    Source: The Conversation – Africa – By Hamza Benattia, Prehistory, Universitat de Barcelona

    When people think of ancient burials in North Africa, they often picture Egypt’s pyramids and monuments. But new discoveries show that north-western Africa also has a deep and fascinating prehistoric past.

    Morocco’s Tangier Peninsula is particularly interesting. The peninsula sits at Africa’s north-western edge, where the Mediterranean Sea meets the Atlantic Ocean. At just 14 kilometres from Europe across the Strait of Gibraltar, this area has long been a natural crossroads between continents and cultures.

    I’m an archaeologist and PhD student who specialises in north Africa’s later prehistoric periods, between 3800 BC and 500 BC. My research explores how ancient communities responded to environmental changes, and how they moved and connected with other communities across regions.

    The assumption to date has been that the Tangier Peninsula was uninhabited and isolated in late prehistoric times. As part of my PhD research I wanted to explore whether this was true, or whether the area had simply been overlooked by previous archaeological work.

    Through the Kach Kouch and Tahadart Archaeological Projects, we studied both the Atlantic and Mediterranean zones of the peninsula.

    Our goal was to revisit the region using modern archaeological methods and technologies, including radiocarbon dating. To understand how this region may have been connected to the wider world in prehistoric times, we used Geographic Information System software to model possible ancient communication routes and surveyed the landscape through satellite and drone imagery. At a later stage, alongside a team of early career Moroccan archaeologists from the National Institute of Archaeology and Heritage, we carried out field surveys and excavations.

    What we discovered exceeded all expectations. Far from being empty and isolated, the Tangier Peninsula is filled with evidence that people lived, died and held ceremonies there over thousands of years.

    Our hope is that our findings will reframe north-western Africa as a cultural crossroads that has connected regions for thousands of years. This region could reshape our understanding of later prehistory across the Atlantic and Mediterranean worlds.

    A prehistoric ritual and funerary landscape

    Our study, published in African Archaeological Review, presents the discovery of dozens of new archaeological sites, including prehistoric burials, rock art sites and standing stones.

    Until now, research on rock art and burials in north Africa focused on areas like the Nile Valley, the Sahara or the Atlas Mountains. Our discoveries reveal that Morocco’s north-western coast was a major cultural hub in the Bronze Age, over 4,000 years ago.

    The diversity of burial practices, ritual sites, symbolic rock art and unique megalithic monuments reflects a rich prehistoric heritage that transcends modern geographic, political and cultural boundaries. It also highlights the longstanding exchanges and contacts of this region with the Mediterranean, the Atlantic and the Sahara.

    One of the most remarkable sites we excavated is at Daroua Zaydan, near modern-day Tangier. There we uncovered a cist burial, a small stone chamber made from four upright stone slabs covered by a larger stone slab. A crescent-shaped arrangement of stones likely marked the access to the burial chamber.

    Although the grave had been looted in the past, we recovered several human bones outside the cist. One of them was radiocarbon dated to 2118–1890 BC. This date aligns with similar burial traditions across the Strait of Gibraltar in Iberia, and with Early Bronze Age settlement activity at Kach Kouch, about 65km south-east of Daroua Zaydan.

    Cist cemeteries had been documented in the region before, but most were excavated during the early to mid-20th century. At the time, archaeologists didn’t have the methods that can now shed light on important details such as how they were built and when they were used. Daroua Zaydan marks the first radiocarbon-dated cist burial in north-west Africa.

    Monuments, ritual deposits and Atlantic connections

    Our findings suggest the existence of a complex prehistoric ritual landscape at the Tangier Peninsula. This landscape was likely connected to other areas of the Atlantic and Mediterranean through a shared ritual and symbolic “language”.

    One clue is a Bronze Age sword found in the 1920s in the Loukkos river. It was likely made in Britain or Ireland and may have arrived in Africa through Atlantic exchange networks. The sword was likely deliberately thrown into the river — a ritual practice documented along rivers in Atlantic Europe. This suggests that communities in northern Morocco were part of a broader cultural and symbolic world that connected the late prehistoric Atlantic.

    Another example is the stone circle at Mzoura, made up of 176 standing stones. This site, excavated in the 1930s, is unique in north Africa. But it closely resembles other stone circles in Atlantic Europe like Stonehenge. During our fieldwork we also discovered new standing stones and rock art, located along prehistoric communication routes. This suggests they may have been used as territorial markers or ritual sites.

    Before our research, a single painted rock shelter, that of Magara Sanar, was known in north-western Morocco. We have now documented 17 painted and 5 engraved rock shelters.

    The variety of symbols and scenes includes dotted patterns, geometric lines and human-shaped figures. They suggest strong links to Iberian, Atlantic and Saharan prehistoric art.

    Why this matters

    Our research does more than just fill a blank spot on the archaeological map. It opens up new avenues for archaeological exploration in the region. The Tangier Peninsula is home to a rich and largely undocumented late prehistoric heritage. It deserves more attention from researchers, policymakers and the wider public.

    Further protection measures are necessary as the region is undergoing rapid urban development. Tourism is growing and there’s been extensive looting. We hope our work will lead to more archaeological investigations, including new excavations and radiocarbon dating of key sites.

    Hamza Benattia, director of the Tahadart Archaeological Project, received funding from the National Institute of Archaeology and Heritage of Morocco (INSAP), the Prehistoric Society Research Fund, the Stevan B. Dana Grant of the American Society of Overseas Research, the Mediterranean Archaeological Trust Grant, the Barakat Trust Early Career Award, the Centre Jacques Berque Research Grant, the Institute of Ceutan Studies Research Fund and the University of Castilla La Mancha.

    ref. Rock art and tomb discoveries in Morocco reveal ancient connections to the wider world – https://theconversation.com/rock-art-and-tomb-discoveries-in-morocco-reveal-ancient-connections-to-the-wider-world-256931

    MIL OSI – Global Reports

  • MIL-OSI Global: Do you live near a dam holding mine waste? 6 questions to ask

    Source: The Conversation – Africa – By Charles MacRobert, Associate Professor, Stellenbosch University

    Mining is essential to modern lifestyles. Copper, iron and other mined products are vital to the products many people take for granted, like electronic devices. Being able to buy these goods quite easily may give a person a false sense of how difficult it is to extract the elements they’re made of.

    Mining involves the removal of mineral-rich rock from the ground and processing it to extract the high-value minerals. Depending on the mineral, the yield can be as low as a few grams per tonne of rock.

    For example, removing a tiny quantity of platinum from rock requires finely grinding the rock. The fine material that remains once the platinum is removed is known as tailings.

    Every mining operation produces tailings. These can be coarse, like instant coffee granules, or fine, like cocoa powder. Tailings are typically mixed with water to form a liquid slurry that can be pumped and transported easily.

    Slurry is kept in specially designed tailings dams. The designs are unique and depend on what is being mined and the local area.

    Unfortunately, the history of mining is stained with examples of poorly managed dams that collapse, spilling the slurry, which is sometimes toxic. This can cause serious environmental, social and economic damage.

    One such mine disaster happened in February 2025 in Zambia at the Sino-Metals Leach Zambia copper mine. Over 50 million litres of toxic waste flowed over the dam’s wall into the Mwambashi River. From there it flowed into one of the largest and longest Zambian rivers, the Kafue.

    The pollution travelled more than 100km from the dam, contaminating the river, and killing fish and livestock on nearby farms. The Zambian government had to shut down municipal water to the city of Kitwe to protect residents from consuming the polluted water.

    This should not have happened, because steps have been taken to ensure proactive management of dams. In 2020, the Global Industry Standard on Tailings Management introduced a new set of safety measures and standards.

    Many mines are proactively embracing these standards. This enhances community trust in tailings dams. But other mines are not engaging with communities that might be affected by dams. Or communities may feel unsure what to ask the mines.

    We are geotechnical engineers who have studied tailings dam collapses. Here, we outline six questions people living near mines should ask mine management to ensure they understand the key hazards and risks in their communities.

    1. How far will the slurry flow?

    Each tailings dam has a zone of influence. This is determined by analysing what would happen if the slurry breached the dam walls and started to flow out. It is an estimate of the area which would be swamped by tailings if the dam failed.

    Generally, tailings disasters have caused significant damage up to a distance of 5km from the dam. If the tailings slurry gets into a river, it can flow hundreds of kilometres downstream.




    Read more: Burst mining dam in South Africa: what must be done to prevent another disaster


    Zones of influence are often determined for extreme events, like once in a lifetime storms or large earthquakes. But zones of influence could also include places affected by dust or water pollution from the mine.

    If you can see a tailings dam from where you live or work you should consider yourself within the zone of influence.

    2. Who is responsible for the dam?

    Clearly defined roles and responsibilities for day-to-day operation should be in place in every mine. There should be suitably qualified engineers appointed to carry out monitoring and maintenance of the dam. There need to be enough qualified people to cope with the size of the dam.

    The management structure should set out how day-to-day issues related to the tailings dam are discussed between workers on the ground in mines and top management, and how solutions are found. Mines should also keep audit and inspection reports on their tailings dams, and records should be kept over the long term (because tailings dams are often operational for several decades).

    3. What about the environment?

    Mines should have plans to reduce the impact that tailings dams have on the environment. These would have been informed by public participation. The plans must state what monitoring is in place to measure the impacts of dust and water (groundwater and surface water).

    The true extent of impacts only becomes apparent once the mine starts operating. So, the public should hold mines accountable for commitments made. Mines should satisfy communities that monitoring is continuing to identify and track the dam’s environmental impacts.

    Closure plans should also be continuously communicated to mining-affected communities. This will assure the community that when the miners leave, they won’t be left with a dangerous dam near their homes, with no one to look after it.

    4. Will the tailings dam be safe when it rains?

    A common way that tailings dams fail is when water or slurry washes over the dam sidewalls. This washes away the support. It is known as overtopping, and can happen in storms or if too much tailings slurry is pumped into the dam.

    Overtopping is best managed by keeping the water a certain distance below the dam wall. Mine management must measure this regularly and control how much tailings slurry they pump to the dam. Their task is to make sure that even in a severe storm the level will stay well below the top of the dam wall.

    5. Has the dam always behaved as expected?

    Small failure incidents can occur, such as sloughs, slides and bulges, where dam walls move but no slurry is released. Mines should investigate and report these, detailing likely causes and the mitigation measures implemented.

    Publicly available satellite imagery can easily show where mine tailings dams are becoming unstable. Mines should be transparent and provide explanations for these to avoid any speculation over whether the dam is stable or not.

    6. What alterations have been made?

    Sometimes dams must be changed to accommodate changes in mining or the extraction process. These changes could include how fast the dam is being built, moving the position of the dam wall, or placing material at the base of the wall to stabilise it.

    Alterations to a tailings dam can have unexpected consequences, such as water seeping out and creating damp spots, which can lead to dam walls sagging or cracking. If left unchecked, this can lead to structural failure.

    When substantial changes are made to a dam’s design, mines need to demonstrate that sufficient consideration has gone into making these changes.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Do you live near a dam holding mine waste? 6 questions to ask – https://theconversation.com/do-you-live-near-a-dam-holding-mine-waste-6-questions-to-ask-256517

    MIL OSI – Global Reports

  • MIL-OSI Global: Promoting social inclusion through pet companionship

    Source: The Conversation – Canada – By Renata Roma, Postdoctoral Fellow, Center of Behavioural Sciences and Justice Studies/Pawsitive Connections Lab, University of Saskatchewan

    The benefits of pet companionship have been widely researched and celebrated.

    Pets can improve our mood and immune system. They can also encourage staying active and fit, offer emotional comfort and companionship, and foster social connections. Pets can even increase life expectancy.

    Unfortunately, pet companionship is not easily accessible to everyone. Several groups face hurdles when it comes to sharing time or living with a pet, including a lack of pet-friendly housing and of the financial resources to afford pet food and veterinary care.

    There can also be more concrete barriers to pet companionship, such as no-pet clauses in rental agreements or no-pet policies in retirement homes.

    As we strive for social equality, it is essential to address hurdles that prevent some people from experiencing the known benefits of spending time or living with a pet.

    Challenges and misconceptions

    Several factors can make pet companionship less accessible, including a lack of appropriate housing and of financial resources for pet food and veterinary services. A Canadian survey found that new immigrants and young people aged 18 to 34 are the groups most affected by these factors, and elderly people also often experience housing-related and financial challenges.

    For pet guardians, the inability to pay for grooming services, food or health-care services can create feelings of distress and, for their pets, this can lead to a reduced quality of life. In this case, we see that the well-being of both pet guardians and their beloved pets can be compromised.

    Moreover, some studies link higher income to an increased likelihood of living with companion animals. When it comes to economic factors, it is concerning that some believe certain groups of people should not be pet guardians. The Michelson Found Animals Foundation highlights several misconceptions about living with companion animals, which are often associated with financial hardships.

    For example, some people believe that people who live in apartments, rather than homes with backyards and green space, should only have small dogs as pets. However, this belief ignores a dog’s energy level as some small dogs are highly energetic while some big dogs are less energetic. This belief also does not consider the guardian’s ability to provide mental and physical stimulation for their dog.

    Still other people believe that if someone cannot afford the costs associated with caring for a pet, they should not have one. This belief only reinforces social inequalities and reflects a deeper form of discrimination.

    Financial problems and housing restrictions may force people to give up their pets, and this is an emotionally difficult decision. Research by Christine Yvette Tardif-Williams, one of the authors of this story, with childhood and youth researcher Rebecca Raby and graduate students at Brock University shows how homeless children often navigate feelings of emotional intimacy towards their pets alongside feelings of loss and grief. In this research, homeless children shared stories about missing or losing companion animals either through separation or death.

    Research also shows that most people experiencing homelessness are responsible pet guardians, and that their pets are often very healthy and that they too benefit from human companionship — it’s a mutually beneficial, two-way emotional connection.

    A more equitable future in pet companionship

    Pet companionship and systemic inequalities are interconnected. For instance, many socioeconomically disadvantaged and marginalized families and communities — including, but not limited to, racialized, Indigenous, homeless, immigrant and refugee families and their children — face barriers to pet companionship.

    We need targeted strategies and policies to reduce the barriers faced by these families and communities. It is important to create more opportunities for people and pets to live together. This can help us to address social inequality in pet companionship among diverse groups.

    Some studies highlight the need for increasing access to free or low-cost veterinary care. Making shelters and housing more pet-friendly is also essential. Promoting campaigns to reduce misconceptions about pet companionship among diverse groups of people is another key strategy.

    One example of a program that helps make pet companionship more accessible is Community Veterinary Outreach (CVO). This is a registered charity operating across different provinces in Canada. It provides health care for people and preventive care for pets, and runs education programs covering topics such as animal behaviour, nutrition and dental care. Together, these services help to support vulnerable populations living with pets.

    Another example is the PetCard program, a Canadian financing program that offers flexible options for people to split the payment of veterinary-related services.




    Read more: How ‘One Health’ clinics support unhoused people and their pets


    However, we need more consistent collaborative work that begins by raising awareness about the importance of pet companionship for diverse groups of people. Expanding this discussion can help us design fairer policies about pet companionship, foster social justice and bring communities together.

    Overlooking the relevance of this discussion can reinforce discriminatory views around pet companionship.

    Supporting pet companionship

    It is problematic when access to pet companionship is restricted due to a family’s economic status or housing opportunities, since it means they’re less likely to experience the well-being benefits of pet companionship. In this way, pet-related benefits are limited to a select and privileged group.

    We can help people and animals build meaningful bonds by promoting equitable access to companionship. The needs of pets must also be prioritized in any effort to increase access to pet companionship. This means making sure pets’ physical and emotional needs are met and that they also benefit from the human-pet bond. Pets’ well-being and rights should always come first when making pet companionship more accessible.

    To create a fair approach to supporting pet companionship among diverse populations, we need to balance human and pet needs and ensure the well-being of both humans and their pets.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Promoting social inclusion through pet companionship – https://theconversation.com/promoting-social-inclusion-through-pet-companionship-255089

    MIL OSI – Global Reports

  • MIL-OSI Global: Russia is facing fresh sanctions, but Putin is used to dealing with a struggling economy

    Source: The Conversation – UK – By Yerzhan Tokbolat, Lecturer in Finance, Queen’s University Belfast

    The UK and the EU have agreed to hit Russia with a raft of new economic sanctions after hopes of a ceasefire with Ukraine came to nothing. One French minister commented that it is time to “suffocate” the Russian economy.

    Since the country’s full-scale invasion of Ukraine in 2022, that economy has certainly suffered. Sanctions on Russia have already led to a depreciation of the rouble, high inflation, very high interest rates and a stagnating economy.

    But it remains unclear what effect any new measures will have. And Vladimir Putin has a history of riding out economic hardship.

    When he became president of Russia just over 25 years ago, the country’s economy was in dire straits. Attempts by Soviet leader Mikhail Gorbachev and Putin’s predecessor as Russian president, Boris Yeltsin, to build a more open and capitalist system had not worked well for most Russian citizens.

    Instead, a rapid wave of privatisations, which reformers hoped would build strong institutions, had mostly benefited a small group of oligarchs who exploited a weak and corrupt state to seize key oil, gas and mineral assets.

    Those oligarchs resisted legal reform, moved wealth abroad, failed to invest in the domestic economy, and gradually gained control of major corporations and media, expanding their political influence. By 1995, nearly half of Russians were living in poverty.

    The 1998 crisis worsened the situation, as a global recession and falling commodity prices led to fiscal imbalances and doubts about Russia’s ability to service its debt and uphold the fixed exchange rate. The central bank raised interest rates to 150% to try to stabilise the rouble, but this failed.

    The central bank eventually allowed the rouble to float, and the currency lost about two-thirds of its value. When he came to power in 2000, Putin was confronted with the challenge of rebuilding the Russian economy.

    Luckily for him, between 2000 and 2008, an oil and gas boom drove GDP growth, increased incomes and allowed early repayment of national debts. Putin – and national pride – received a boost.

    Rising energy revenues helped stabilise the economy and enabled the state to tighten its grip on the energy sector. By 2006, Gazprom accounted for 20% of government tax revenue.

    Putin then shifted his focus to Europe. With German support, the Nord Stream pipeline was completed in 2011, enabling direct gas exports to western Europe while bypassing Ukraine. This increased European dependence on Russian energy.

    But Putin’s oil and gas-driven economic model struggled to sustain growth, and by 2013, his approval ratings had fallen to their lowest point since 2000.

    The annexation of Crimea in 2014, along with a very expensive Winter Olympics in the Black Sea resort city of Sochi, temporarily boosted his popularity.

    Running on empty

    However, these accomplishments did little to address Russia’s core economic problems, particularly its failure to build a diversified economy.

    By 2018, Russia’s economy was again stagnant, with a weak currency and declining living standards, and Putin’s popularity fell in part due to unpopular budget-saving reforms, including raising the retirement age.

    There was widespread doubt about Putin’s model of lasting prosperity, which relied on state-led growth, but was marked by instability, resource dependence and growing geopolitical ambition.

    In this light, Putin’s full-scale invasion of Ukraine in 2022 appeared to be a familiar tactic to boost support. Indeed, his approval jumped to 83% after invading Ukraine, matching levels seen after the 2014 Crimea annexation. His ratings have remained high since, with recent polls still showing approval levels above 80%.

    But the Russian economy will still be a worry. Sustaining a “war economy”, where manufacturing and investment are focused on the conflict, cannot go on forever, particularly as military output is rapidly depleted as the Russian armed forces use it in the field. And reliance on commodities has amplified the impact of sanctions, which have hit key banks and energy firms such as Gazprom and Rosneft.

    Meanwhile, the US has significantly expanded its presence in Europe’s energy market, supplying nearly 50% of the EU’s liquefied natural gas imports after tripling exports between 2021 and 2023.

    Major Russian pipeline projects such as Nord Stream 2 and Power of Siberia 2 remain in limbo. And the decline in oil prices in April 2025, the biggest since November 2021, poses further risks.

    If a ceasefire is agreed, a pause in the war could offer Russia the chance to regroup and recover economically. Sanctions are often temporary, and global demand for oil and gas remains strong. Some countries may re-engage in trade.

    But future economic stagnation could once again fuel aggression. Unless Russia undertakes structural reforms and redefines its role in the global economy by reducing reliance on resource exports and engaging more constructively with global markets, the cycle of confrontation may repeat itself, with far-reaching global consequences.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Russia is facing fresh sanctions, but Putin is used to dealing with a struggling economy – https://theconversation.com/russia-is-facing-fresh-sanctions-but-putin-is-used-to-dealing-with-a-struggling-economy-255732

    MIL OSI – Global Reports

  • MIL-OSI Global: Freeze branding: the new body modification technique causes serious and irreversible harm

    Source: The Conversation – UK – By Adam Taylor, Professor of Anatomy, Lancaster University

    If you’re a fan of the TV show Yellowstone, you’ll know the deal – you earn your place on the ranch by being branded. On the show, this means having a red-hot iron pressed into your flesh, leaving a permanent scar of loyalty to Yellowstone Dutton Ranch and its patriarch, John Dutton.

    In a case of life imitating art, people are getting themselves branded, but with cold rather than heat. In freeze branding, the branding iron is cooled using dry ice, isopropyl alcohol or liquid nitrogen, then pressed against the skin to leave a permanent mark.

    In 1966, Dr R. Keith Farrell at Washington State University developed freeze branding (also known as CryoBranding) as a less painful way to mark animals for identification. Aside from being less painful, it also produces less scarring than hot branding.

    Cattle skin is much thicker than human skin – horse and cattle skin is anywhere between two and four times thicker – and can take more punishment. Scratches that would cause pain and bleeding in humans would barely mark the surface of cattle.




    When a person is freeze branded, the extreme cold causes ice crystals to form inside skin cells. As the water inside the cells freezes, it expands and ruptures the cells’ membranes. This kills the cells and stops them from making melanin, the pigment that gives your skin and hair their colour.

    Because human skin is relatively thin (about 2mm), it is more likely to be badly burned by extreme cold. Liquid nitrogen can cause second, third and even fourth degree burns in as little as 20 seconds.

    These burns can lead to serious problems, such as infection, frostbite or even loss of fingers or limbs.

    Second, third and fourth degree burns can go deep enough to damage muscles, tendons and even bones. As these deeper tissues heal, scarring can form and cause long-term problems called contractures – a medical condition in which muscles, tendons or other soft tissues permanently tighten or shorten, causing restricted movement.

    This is a bigger risk if the branding is done near the arms or legs, and it might need physiotherapy or even surgery to fix.

    Like any serious burn, freeze-branding also increases the risk of dehydration. That’s because burns damage the skin’s protective barrier, and your body loses fluid while trying to heal from the trauma.

    As mentioned above, freeze branding destroys melanocytes, special skin cells that give your skin its colour.

    When you are exposed to sunlight – or the UV rays from a tanning bed – these cells produce more melanin to protect your skin. They pass this melanin to nearby skin cells, where it forms a kind of shield around the cell’s DNA to help prevent damage from UV rays. That’s why your skin tans after time in the sun. It’s your body’s way of protecting itself.

    If you permanently damage your melanocytes, this protective shield is lost. People with albinism, who don’t produce melanin, have a much higher risk of skin cancer for this reason. We don’t yet know all the long-term risks of losing melanocytes – but they could be serious.

    You’re not a cow

    There are strict safety protocols for branding animals. There are zero for humans. And in the UK, it’s illegal to brand people – whether with heat or cold.

    So if you’re looking for a statement piece, stick with tattoos or body art that has been tested and regulated and won’t put you at risk of burns, nerve damage or some types of cancer.

    Your skin is your largest organ with many important roles, including protecting your internal structures from germs and helping synthesise key vitamins. Don’t treat it like livestock.

    Adam Taylor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Freeze branding: the new body modification technique causes serious and irreversible harm – https://theconversation.com/freeze-branding-the-new-body-modification-technique-causes-serious-and-irreversible-harm-255786

    MIL OSI – Global Reports

  • MIL-OSI Global: History shows that Donald Trump is making a serious error in appeasing Vladimir Putin

    Source: The Conversation – UK – By Tim Luckhurst, Principal of South College, Durham University

    The policy of appeasement – strategic concessions to an aggressor that are designed to avoid war – is generally most closely associated in the UK with the Conservative leader Neville Chamberlain, prime minister between May 1937 and May 1940.

    When Chamberlain moved into 10 Downing Street, Adolf Hitler’s willingness to ignore international agreements was already apparent: he had broken the Versailles treaty with a massive expansion of Germany’s armed forces and the occupation of the Rhineland.

    Faced with the prospect of Germany moving on Czechoslovakia, Chamberlain continued to work to appease Hitler by agreeing to territorial concessions in his favour. He believed that by appeasing the Führer, Europe could avoid war and save lives.

    Chamberlain’s failure, and the subsequent outbreak of the second world war after Germany’s invasion of Poland in September 1939, are recognised as evidence that the appeasement of expansionist nationalists always fails. Such leaders will simply take all that is offered and demand more.




    There are parallels with the relationship between the current US president, Donald Trump, and the Russian president, Vladimir Putin. Trump and his senior officials have also repeatedly suggested that Ukraine should secure a peace deal by acquiescing to Putin’s demands, including for sovereign Ukrainian territory and assurances that Ukraine won’t be allowed to join Nato.

    This makes it seem as if Trump believes that peace can be achieved by appeasing Putin. Like Chamberlain at Munich, Trump has suggested offering the sovereign territory of an independent nation to appease a bully.

    Trump is not the first American president to make this mistake. Franklin D. Roosevelt, who served between March 1933 and April 1945, also tried to appease Hitler. The historian Frederick W. Marks III notes that “the keynote of his approach … beginning in 1933 was appeasement”.

    Before he was inaugurated, Roosevelt sought to persuade Sir Ronald Lindsay, the British ambassador to the US between 1930 and 1939, that Poland should be persuaded to concede the Polish Corridor to Germany. When German troops seized the Rhineland, Roosevelt’s White House made no protest.

    Between 1935 and 1937, Roosevelt made speeches condemning autocracy – but his actions did not match his words. In 1938, he appointed the appeaser Joseph Kennedy as US ambassador to the UK. Kennedy assured the German ambassador in London that he “sympathised not only with Germany’s racial policy but also with her economic goals”.

    In Berlin, the US ambassador, Hugh Wilson, insisted that defence of Czechoslovakia’s borders would be unrealistic and that the Czechs should surrender the Sudetenland to Germany. Roosevelt continued his efforts to arrange a compromise peace even after German forces seized Poland in September 1939.

    Echoes of the past

    The parallels continue. Confronted by Russia’s invasion of its democratic neighbour and relentless attacks on Ukrainian towns and cities, Trump responded, shortly after taking office, by bullying the Ukrainian president, Volodymyr Zelensky, and negotiating directly with Russia. This approach signally failed: the killing continued and even intensified.

    Now, following his two-hour conversation with Putin on Monday, Trump has abandoned his insistence on an unconditional 30-day ceasefire. He now insists that the war is not his to fix. The US will step back. It is another hard blow to Ukrainian hopes for negotiation and compromise.




    Read more: After another call with Putin, it looks like Trump has abandoned efforts to mediate peace in Ukraine


    To a much greater extent than Roosevelt, Trump appears to treat weakness as evidence of moral inadequacy. In a recent essay, Ivan Mikloš, the former deputy prime minister of Slovakia who has advised successive Ukrainian governments in various capacities, writes of what he sees as Trump’s “affinity for the Kremlin boss”. Mikloš believes that Trump admires Putin, and concludes that:

    President Putin, of course, sees that Mr Trump has a soft spot for him. This does not deter him in his maximalist demands, it encourages him even more.

    The US president’s treatment of Zelensky in the Oval Office at the end of February, and repeated statements since, suggest he lacks the patience for diplomacy – a concern that has been widely reported. Trump is said to admire Putin because the Russian president exercises power with minimal restraint.

    Meanwhile, Zelensky must plead for the military and financial support he requires to continue fighting a foe with a population four times larger.

    Lessons from history

    There is scant evidence that Trump pays attention to history. He should, because for Putin, history is central to strategy. A law graduate of Leningrad State University, where he completed his studies in 1975, Putin appears to have embraced an idealised version of his homeland as he knew it in his youth: the Soviet Union under the hardline leadership of Leonid Brezhnev, Yuri Andropov and Konstantin Chernenko.

    That Soviet Union included all of the territory of modern Ukraine. Putin aspires to recapture it. His vision is a Russia restored to a status comparable to that of the Soviet Union during the cold war years of his youth.

    Trump appears to forget that throughout the cold war, the Soviet Union’s powerful armed forces and ideological hostility to democracy cost the US an average of 3.6% of its GDP in defence spending each year. It’s one thing for Trump to demand that the European members of Nato must increase their defence budgets. It’s another to imagine that Nato can immediately provide a reliable deterrent to Russian aggression without US involvement.

    Trump’s newly appointed defense secretary, Pete Hegseth, suggested at a meeting of the Ukraine Defence Contact Group in Brussels in February that the US would reorientate its security policy away from Europe, saying Europe must “take ownership of conventional security on the continent”.

    This is essential, Hegseth said, because China is the real threat, and the US lacks the military resources to face in two directions simultaneously. It was a confession of weakness that places both America and Europe at increased risk.

    The philosopher George Santayana is credited with the warning: “Those who cannot remember the past are condemned to repeat it.” Chamberlain’s version of appeasement failed to prevent Adolf Hitler’s aggression in the 20th century. Trump’s version appears equally incapable of deterring Vladimir Putin’s territorial ambitions in the 21st.

    Tim Luckhurst has received funding from News UK and Ireland Ltd. He is a fellow of the Royal Society of Arts and a member of the Society of Editors and the Free Speech Union

    ref. History shows that Donald Trump is making a serious error in appeasing Vladimir Putin – https://theconversation.com/history-shows-that-donald-trump-is-making-a-serious-error-in-appeasing-vladimir-putin-257252

    MIL OSI – Global Reports

  • MIL-OSI Global: We found a germ that ‘feeds’ on hospital plastic – new study

    Source: The Conversation – UK – By Ronan McCarthy, Professor in Biomedical Sciences, Brunel University of London

    Amparo Garcia/Shutterstock.com

    Plastic pollution is one of the defining environmental challenges of our time – and some of nature’s tiniest organisms may offer a surprising way out.

    In recent years, microbiologists have discovered bacteria capable of breaking down various types of plastic, hinting at a more sustainable path forward.

    These “plastic-eating” microbes could one day help shrink the mountains of waste clogging landfills and oceans. But they are not always a perfect fix. In the wrong environment, they could cause serious problems.

    Plastics are widely used in hospitals in things such as sutures (especially the dissolving type), wound dressings and implants. So might the bacteria found in hospitals break down and feed on plastic?




    To find out, we studied the genomes of known hospital pathogens (harmful bacteria) to see if they had the same plastic-degrading enzymes found in some bacteria in the environment.

    We were surprised to find that some hospital germs, such as Pseudomonas aeruginosa, might be able to break down plastic.

    P aeruginosa is associated with about 559,000 deaths globally each year. And many of the infections are picked up in hospitals.

    Patients on ventilators or with open wounds from surgery or burns are at particular risk of a P aeruginosa infection, as are those with catheters.

    We decided to move forward from our computational search of bacterial databases to test the plastic-eating ability of P aeruginosa in the laboratory.

    We focused on one specific strain of this bacterium that had a gene for making a plastic-eating enzyme. It had been isolated from a patient with a wound infection. We discovered that not only could it break down plastic, it could use the plastic as food to grow. This ability comes from an enzyme we named Pap1.

    Biofilms

    P aeruginosa is considered a high-priority pathogen by the World Health Organization. It can form tough layers called biofilms that protect it from the immune system and antibiotics, which makes it very hard to treat.

    Our group has previously shown that when environmental bacteria form biofilms, they can break down plastic faster. So we wondered whether having a plastic-degrading enzyme might help P aeruginosa to be a pathogen. Strikingly, it does. This enzyme made the strain more harmful and helped it build bigger biofilms.

    To understand how P aeruginosa was building a bigger biofilm when it was on plastic, we broke the biofilm apart. Then we analysed what the biofilm was made of and found that this pathogen was producing bigger biofilms by including the degraded plastic in this slimy shield – or “matrix”, as it is formally known. P aeruginosa was using the plastic as cement to build a stronger bacterial community.

    Pathogens like P aeruginosa can survive for a long time in hospitals, where plastics are everywhere. Could this persistence in hospitals be due to the pathogens’ ability to eat plastics? We think this is a real possibility.

    Many medical treatments involve plastics, such as orthopaedic implants, catheters, dental implants and hydrogel pads for treating burns. Our study suggests that a pathogen able to degrade the plastic in these devices could become a serious problem, causing treatments to fail or a patient’s condition to worsen.

    Thankfully, scientists are working on solutions, such as adding antimicrobial substances to medical plastics to stop germs from feeding on them. But now that we know that some germs can break down plastic, we’ll need to consider that when choosing materials for future medical use.

    Ronan McCarthy receives funding from the BBSRC, NC3Rs, Academy of Medical Sciences, Horizon 2020, British Society for Antimicrobial Chemotherapy, Innovate UK, NERC and the Medical Research Council. He is also Director of the Antimicrobial Innovations Centre at Brunel University of London.

    Rubén de Dios receives funding from the BBSRC and the Medical Research Council.

    ref. We found a germ that ‘feeds’ on hospital plastic – new study – https://theconversation.com/we-found-a-germ-that-feeds-on-hospital-plastic-new-study-256945

    MIL OSI – Global Reports