Category: Analysis

  • MIL-OSI Global: How to create a thriving forest, not box-checking ‘tree cover’

    Source: The Conversation – UK – By Thomas Murphy, Lecturer in Environmental Sciences, University of Plymouth

    A Chinese proverb says that the best time to plant a tree was 20 years ago, and the second best time is today. But it’s not easy to ensure the trees of today actually become the healthy, functioning forests of tomorrow.

    This is a key issue in the UK, which recently announced it will plant 20 million trees to create a new “national forest” in the west of England. Given the UK is one of the least forested countries in Europe, and one of the most nature-depleted in the world, more trees are definitely needed.

But I know from years of trying to research and restore native forest on Dartmoor in the south west of England that creating healthy forests requires attention to detail. Unless we are careful, these new woodlands might damage rather than improve the environment: 20 million non-native conifers (or any single tree species), densely planted row on row, is not a recipe for a healthy or resilient forest.

    So what could a successful forest expansion look like – and how could the UK get there?

    Forests for the future

When planting a sapling, we are starting a journey, not reaching a destination.
The aim isn’t just to grow dense forests everywhere, but to create a diverse “treescape” that includes woodland, pasture, orchards and hedgerows. Including glades and clearings allows plants and animals from the surrounding landscape to move in, helping to create a richer, more complex forest over time.

    A wild pony hangs out in a glade in the New Forest in southern England.
    Helen Hotson / shutterstock

In this ideal future, Britain’s bigger, more diverse and better joined-up forests would have a higher chance of coping with the hotter summers, wetter winters and other climate changes, including extreme weather. That’s because larger, more connected forests limit what is known as the “edge effect”, where the benefits of the forest’s microclimate are reduced. Having more different tree species – mostly native but not always – would help these woodlands cope with, and adapt to, the projected increase in pests, disease and other environmental stresses.

These larger, more biodiverse woodlands would also store more carbon in trees, soils and decaying wood. Research I published with colleagues showed new native forests can alleviate flood risk rather quickly too. Over time, many could also provide timber for low-carbon construction, and charcoal-like “biochar”.

    Where to grow a forest – and how

    Creating woodland for biodiversity and these wider benefits requires planning and management. This can be done by studying the land beforehand – looking at habitats, soils and the animals that graze there, but importantly considering the wider landscape. Digital tools can model a combination of land features, climate and other data to help planners decide where trees should be targeted for the biggest wins, especially as the climate changes.

    The idea is to support, not replace, Britain’s many existing ancient trees. Some new forests would help buffer woodlands from damage at their edges, while others help connect isolated forest fragments and lone trees.

For example, in Britain’s wet valleys where temperate rainforests could grow, saplings planted in the 2020s might provide new homes for rare lichens and mosses. This will help shield highly vulnerable sites such as Wistman’s Wood on Dartmoor from changes in climate.

    Restoring these rainforests will usually require active control of grazing animals. One promising solution is to plant small, carefully chosen patches of diverse tree species and protect them at first from the sheep, cattle, ponies and deer that eat young trees. Over time, through a process known as “applied nucleation”, these patches could help trees naturally spread, creating a mix of woodland and pasture.

    On Britain’s moorlands, hungry animals eat saplings before they can turn into fully-grown (and less tasty) trees.
    Digital Wildlife Scotland / shutterstock

It’s true that sapling-munching deer have surged to unsustainable levels, and many upland areas in particular are overgrazed by sheep. However, when moderated and managed carefully, these animals are essential ingredients for dynamic forests. Grazing, browsing and rootling (pigs and wild boar) animals create glades and clearings, and support natural processes. Trees and forests in return provide animals with forage, shade, shelter and more.

    We should embrace the potential for mutual benefit between animals and forests. By integrating more trees and forests into agricultural areas we may even make both our forests more dynamic and our agricultural areas more resilient.

    Local leadership and community roots

The public generally considers tree planting a positive thing, but local people often feel left out of the process and its benefits. Getting them on board and involved is critical. That’s particularly the case in Britain’s northern and western uplands, where few trees are left and many people feel threatened by national woodland policies that might affect how they use the land.

Moor Trees’ community tree nurseries on Dartmoor, and the collectively owned community forests in 15 regions of England, show there are ways to get locals involved and empowered.

Larger forests near towns and cities would offer more space for recreation and education, taking pressure off smaller and more fragile woodlands. In the urban areas themselves, we could grow more micro “Miyawaki” forests. These are tennis court-sized areas of diverse and densely packed native trees, which allow children to connect with nature every day in their school grounds (the UK already has more than 280 such forests).

    Tree planting is only a start

    This is a rather optimistic vision for the future, of course. To get there, we’ll have to learn from experience. That means tracking what works and involving local people in citizen science. These projects not only help gather valuable data, they also give volunteers a meaningful experience and support their appreciation of the natural world.

    There are plenty of recommended guidelines for forest restoration, but turning young trees into healthy resilient woodlands isn’t about following a strict rulebook. Instead, success will come from using a range of strategies – working with local communities, supporting natural processes and adapting over time based on what is shown to work.

    Thomas Murphy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How to create a thriving forest, not box-checking ‘tree cover’ – https://theconversation.com/how-to-create-a-thriving-forest-not-box-checking-tree-cover-254160

    MIL OSI – Global Reports

  • MIL-OSI Global: Trying for a baby? Here’s why the father’s health is just as important as the mother’s

    Source: The Conversation – UK – By Aleksander Giwercman, Professor of Reproductive Medicine, Lund University

    A man’s health and lifestyle in the preconception period can be important. Ground Picture/ Shutterstock

Many mothers-to-be understand how important it is to look after their health — even before becoming pregnant. A mother’s health and lifestyle during the preconception period (the time before becoming pregnant) is not only linked with her health during pregnancy, but also how healthy the baby will be throughout their life.

    But a recent viral TikTok claims a father-to-be’s health in the preconception period is just as important when it comes to both the baby’s wellbeing and the mother’s pregnancy outcomes.

    In the video, the young man states that he thinks men should have to spend the nine months before trying for a baby getting into the “best physical shape of their lives”. He asserts that pre-eclampsia and morning sickness are both linked to men. He also claims that 50-60% of the baby’s epigenetic makeup comes from the father.

    While there was plenty of scepticism in the video’s comment section, this is actually a rare instance where most of the influencer’s health claims are backed by scientific evidence.

    Research shows us that a man’s lifestyle during the preconception period is clearly associated with the risk of negative pregnancy outcomes in their partner – as well as with the health of their children.

    For instance, research has found a link between a father’s health and lifestyle during the preconception period and a woman’s risk of pre-eclampsia. This is a common and serious medical condition that can occur around midway through pregnancy. Pre-eclampsia causes high blood pressure, swelling, headaches and blurred vision.

    The study found that there was a significant association between fathers who had a chronic disease during the preconception period (particularly metabolic disorders, such as obesity, high blood pressure and high blood sugar) and their partner’s subsequent risk of experiencing pre-eclampsia during her pregnancy.

    Research has also found lower risk of birth defects in the children of men who regularly exercised prior to their conception. But fathers who smoked or were overweight during the preconception period were more likely to have children born with a birth defect. The children of fathers who smoked in the months before their conception were also found to have an increased risk of cancer.

    Age also plays a role here, just as it does for mothers. Babies born to fathers who were aged 45 and older during the preconception period had a greater risk of being born prematurely or with a low birth weight.

    Lifestyle and epigenetics

    The concept of epigenetics is key to understanding how a man’s health during the preconception period is related to pregnancy outcomes and their child’s health.

    Oteera/ Shutterstock

    Epigenetics means “on top of genetics.” It’s about modifications of the genome that do not change the genetic code. Epigenetic modifications are instead about how the genes are read and which genes are turned on or off – and when.

    Epigenetics represents a link between genetics and environment. Various environmental and lifestyle factors, as well as diseases and even prescription drugs, can induce epigenetic changes. These changes can all lead to the function of certain genes being enhanced – and other genes being completely or partially switched off.

    Although only a very small portion of the epigenetic alterations in the fetus are directly derived from the mother or the father, these can still have a significant impact on the baby’s development and their health. But it’s worth noting here that the TikTok creator’s claim that 50-60% of the baby’s epigenetic makeup comes from the father is not true.


    Read more: Four ways men and women can improve their health before trying to conceive

There’s now solid evidence indicating that lifestyle-related factors (such as smoking, chronic stress and high blood sugar) and diseases (such as obesity) can lead to epigenetic alterations in sperm that affect how the placenta functions. These epigenetic alterations of placental function have subsequently been linked with pre-eclampsia risk and a child’s health and development.

    My own research has also shown that sperm which have a chromosome break (which is related to epigenetics) can double the risk of pre-eclampsia and low birth weight in the child. Many of the same lifestyle factors which induce the same epigenetic alterations in sperm that affect placental function have also been linked with higher likelihood of chromosome breaks occurring. Measuring chromosome breaks in sperm could provide an easy and rapid way of identifying high-risk pregnancies.

    So what can we do about this?

Unfortunately, despite the clear connection between the father’s health in the preconception period and both pregnancy outcomes and their future child’s health, we lack studies that clearly demonstrate that changing lifestyle or better managing chronic diseases has a positive influence on these outcomes.

Still, even if such effects have not yet been demonstrated, I believe we can agree with the TikTok’s message. Quitting smoking, reducing excessive alcohol consumption, exercising and taking control of any metabolic diseases will not only leave would-be fathers in better health for their partner and child, but also give couples a greater chance of conceiving.

    Aleksander Giwercman receives funding from EU-Interreg program and from Ferring Pharmaceuticals.

    ref. Trying for a baby? Here’s why the father’s health is just as important as the mother’s – https://theconversation.com/trying-for-a-baby-heres-why-the-fathers-health-is-just-as-important-as-the-mothers-249546

    MIL OSI – Global Reports

  • MIL-OSI Global: Cartoon Network changed animation forever – Warner Bros shouldn’t let it die

    Source: The Conversation – UK – By Jacqueline Ristola, Lecturer in Digital Animation, University of Bristol

    Many people – myself included – remember Cartoon Network as their favourite TV channel to watch after school. Launched in 1992, Cartoon Network became a global cable brand, available in over 180 countries.

But while the channel had international recognition and commercial success with original hits such as The Powerpuff Girls (1998-2005) and Adventure Time (2010-2018), lately its iconic status has diminished against the backdrop of the streaming platform wars.

    In fact, Cartoon Network is an excellent case study for how the conditions of media conglomeration shape how media is made and curated. And in making a wide variety of animation available, Cartoon Network also helped make audiences think differently about animation.

    The network’s story began in 1991, when media mogul Ted Turner bought the animated television titan Hanna-Barbera Productions. From the 1960s to the 1980s, the studio created more than 100 animated television series that dominated Saturday morning programming.

    Turner bought Hanna-Barbera not for the studio itself, but for its impressive content library – which provided much of Cartoon Network’s initial programming. But while Cartoon Network began as a rerun channel, its programmers were ambitious for something more.


    This article is part of our State of the Arts series. These articles tackle the challenges of the arts and heritage industry – and celebrate the wins, too.


    In 1993, they went to Turner asking for money to produce original programming. Turner turned them away, telling them: “I bought you a library, now utilise it.”

    So, in the face of these corporate budget restrictions, Cartoon Network programmers innovated. By reusing the corporate library of Hanna-Barbera cartoons, they created their first fully original television series, Space Ghost: Coast to Coast (1994-2008).

    This series skewered the conventions of late-night talk shows through its characters’ surreal scenes and bizarre behaviour. It was made from the Hanna-Barbera content library itself, remixing the animations with new voices.

    In my research, I argue that the series enabled Cartoon Network programmers to reflect on their own precarious place within Turner’s giant corporation. The series made fun of television conventions, with characters sometimes discussing the process of making television while working for a major media conglomerate.

    The first episode of Space Ghost: Coast to Coast.

    Space Ghost: Coast to Coast is the first example of how Cartoon Network’s conglomerate ownership shaped its forms of production.

    Cartoon Network continued to make original programming, beginning with What a Cartoon! in 1995. Created by former MTV executive Fred Seibert, the series comprised animated shorts, with the most popular ones then being green-lit to series. The show launched several original series, starting with Dexter’s Laboratory in 1996. These were precursors for the groundbreaking, adult-oriented cartoon series and brand, Adult Swim, in 2001.

    Through this innovative approach, Cartoon Network helped revive television animation in the 1990s, giving emerging animators a platform to share their work.




    Animation for kids and adults

    While the channel was initially aimed at kids, many of its series challenged typical expectations of children’s television.

    Samurai Jack (2001-2004 and 2017) blended sophisticated storytelling with a unique aesthetic. Later series such as Steven Universe (2013-2019) and Infinity Train (2019-2021) blended heady science fiction and fantasy with deep, emotional stories.

    And many series were just really, really funny. Johnny Bravo (1997-2004), for example, subtly undermined patriarchal norms through slapstick comedy.

    Cartoon Network series also paved the way for queer representation in children’s media. Adventure Time and Steven Universe featured both implicit and explicit queer representation throughout. These series were immensely popular with children and adults alike, and paved the way for other series to represent queerness in animation.

    Since its debut, Cartoon Network has always attracted a broad audience of adults. This is what prompted the launch of Adult Swim in 2001 – an adult-oriented programme block with edgy and subversive series, many of which were animated. Adult Swim pushed the envelope, creating animation that was crass, crude – and sometimes profound.

    Much of the humour of early Adult Swim series was predicated on the contrast between the assumption that animation is “for kids” and the crass material depicted. At the same time, they helped push animation to be considered as a form for everyone, regardless of age.

    Lost in the shuffle of media conglomeration

Built through the resources of Turner’s media conglomerate, Cartoon Network established itself in a competitive cable marketplace – and such corporate conglomeration has continued to shape the channel, its content and brand. But the sale of Warner Bros. to Discovery in 2022 and subsequent corporate strategy shifts have left the channel and its content lost in the shuffle.

    Characters like the Powerpuff Girls have been firm fan favourites for years.
    Jamaica Parambita/Dupe

    During AT&T’s ownership of Warner Bros. (2018-2022), Cartoon Network was positioned as the central brand to reach kids and family audiences worldwide.

    But in 2022, AT&T sold the company to Discovery, creating Warner Bros. Discovery (WBD). This merger produced turmoil in the media industry, as the newly formed conglomerate quickly announced layoffs and cut content, including animated content.

    While WBD publicly committed to reaching family audiences, several animated works (kid-focused or otherwise) got the axe. These apparent discrepancies between the company’s content and business strategies have arguably produced brand confusion, with Cartoon Network caught in the middle.

Since 2024, most of Cartoon Network’s content has been cut from streaming libraries. What was once a prominent brand in the Warner Bros. portfolio seems forgotten. But as industry analysts note, kids’ content, animated or otherwise, remains an important component in any media portfolio. WBD should recognise the value Cartoon Network offers with its great animation and unique history.

    Jacqueline Ristola receives funding from ASIFA-Hollywood’s Animation Educators Forum.

    ref. Cartoon Network changed animation forever – Warner Bros shouldn’t let it die – https://theconversation.com/cartoon-network-changed-animation-forever-warner-bros-shouldnt-let-it-die-257173

    MIL OSI – Global Reports

  • MIL-OSI Global: Golden Dome: An aerospace engineer explains the proposed US-wide missile defense system

    Source: The Conversation – USA – By Iain Boyd, Director of the Center for National Security Initiatives and Professor of Aerospace Engineering Sciences, University of Colorado Boulder

    Posters that President Donald Trump used to announce Golden Dome depict missile defense as a shield. AP Photo/Mark Schiefelbein

    President Donald Trump announced a plan to build a missile defense system, called the Golden Dome, on May 20, 2025. The system is intended to protect the United States from ballistic, cruise and hypersonic missiles, and missiles launched from space.

    Trump is calling for the current budget to allocate US$25 billion to launch the initiative, which the government projected will cost $175 billion. He said Golden Dome will be fully operational before the end of his term in three years and will provide close to 100% protection.

    The Conversation U.S. asked Iain Boyd, an aerospace engineer and director of the Center for National Security Initiatives at the University of Colorado Boulder, about the Golden Dome plan and the feasibility of Trump’s claims. Boyd receives funding for research unrelated to Golden Dome from defense contractor Lockheed Martin.

    Why does the United States need a missile shield?

    Several countries, including China, Russia, North Korea and Iran, have been developing missiles over the past few years that challenge the United States’ current missile defense systems.

    These weapons include updated ballistic missiles and cruise missiles, and new hypersonic missiles. They have been specifically developed to counter America’s highly advanced missile defense systems such as the Patriot and the National Advanced Surface-to-Air Missile System.

    For example, the new hypersonic missiles are very high speed, operate in a region of the atmosphere where nothing else flies and are maneuverable. All of these aspects combined create a new challenge that requires a new, updated defensive approach.

    Russia has fired hypersonic missiles against Ukraine in the ongoing conflict. China parades its new hypersonic missiles in Tiananmen Square.

    So it’s reasonable to think that, to ensure the protection of its homeland and to aid its allies, the U.S. may need a new missile defense capability.

    Ukrainian forces are using the U.S.-made Patriot missile defense system against Russian ballistic missiles.

    What are the components of a national missile defense system?

    Such a defense system requires a global array of geographically distributed sensors that cover all phases of all missile trajectories.

    First, it is essential for the system to detect the missile threats as early as possible after launch, so some of the sensors must be located close to regions where adversaries may fire them, such as by China, Russia, North Korea and Iran. Then, it has to track the missiles along their trajectories as they travel hundreds or thousands of miles.

    These requirements are met by deploying a variety of sensors on a number of different platforms on the ground, at sea, in the air and in space. Interceptors are placed in locations that protect vital U.S. assets and usually aim to engage threats during the middle portion of the trajectory between launch and the terminal dive.

    The U.S. already has a broad array of sensors and interceptors in place around the world and in space primarily to protect the U.S. and its allies from ballistic missiles. The sensors would need to be expanded, including with more space-based sensors, to detect new missiles such as hypersonic missiles. The interceptors would need to be enhanced to enable them to address hypersonic weapons and other missiles and warheads that can maneuver.

    Does this technology exist?

    Intercepting hypersonic missiles specifically involves several steps.

    First, as explained above, a hostile missile must be detected and identified as a threat. Second, the threat must be tracked along all of its trajectory due to the ability of hypersonic missiles to maneuver. Third, an interceptor missile must be able to follow the threat and get close enough to it to disable or destroy it.

    The main new challenge here is the ability to track the hypersonic missile continuously. This requires new types of sensors to detect hypersonic vehicles and new sensor platforms that are able to provide a complete picture of the hypersonic trajectory. As described, Golden Dome would use the sensors in a layered approach in which they are installed on a variety of platforms in multiple domains, including ground, sea, air and space.

    These various platforms would need to have different types of sensors that are specifically designed to track hypersonic threats in different phases of their flight paths. These defensive systems will also be designed to address weapons fired from space. Much of the infrastructure will be multipurpose and able to defend against a variety of missile types.

    In terms of time frame for deployment, it is important to note that Golden Dome will build from the long legacy of existing U.S. missile defense systems. Another important aspect of Golden Dome is that some of the new capabilities have been under active development for years. In some ways, Golden Dome represents the commitment to actually deploy systems for which considerable progress has already been made.

    Is near 100% protection a realistic claim?

    Israel’s Iron Dome air defense system has been described as the most effective system of its kind anywhere in the world.

    But even Iron Dome is not 100% effective, and it has also been overwhelmed on occasion by Hamas and others who fire very large numbers of inexpensive missiles and rockets at it. So it is unlikely that any missile defense system will ever provide 100% protection.

    The more important goal here is to achieve deterrence, similar to the stalemate in the Cold War with the Soviet Union that was based on nuclear weapons. All of the new weapons that Golden Dome will defend against are very expensive. The U.S. is trying to change the calculus in an opponent’s thinking to the point where they will consider it not worth shooting their precious high-value missiles at the U.S. when they know there is a high probability of them not reaching their targets.

    CBS News covered President Donald Trump’s announcement.

    Is three years a feasible time frame?

    That seems to me like a very aggressive timeline, but with multiple countries now operating hypersonic missiles, there is a real sense of urgency.

    Existing missile defense systems on the ground, at sea and in the air can be expanded to include new, more capable sensors. Satellite systems are beginning to be put in place for the space layer. Sensors have been developed to track the new missile threats.

    Putting all of this highly complex system together, however, is likely to take more than three years. At the same time, if the U.S. fully commits to Golden Dome, a significant amount of progress can be made in this time.

    What does the president’s funding request tell you?

    President Trump is requesting a total budget for all defense spending of about $1 trillion in 2026. So, $25 billion to launch Golden Dome would represent only 2.5% of the total requested defense budget.

    Of course, that is still a lot of money, and a lot of other programs will need to be terminated to make it possible. But it is certainly financially achievable.

    How will Golden Dome differ from Iron Dome?

    Similar to Iron Dome, Golden Dome will consist of sensors and interceptor missiles but will be deployed over a much wider geographical region and for defense against a broader variety of threats in comparison with Iron Dome.

    A second-generation Golden Dome system in the future would likely use directed energy weapons such as high-energy lasers and high-power microwaves to destroy missiles. This approach would significantly increase the number of shots that defenders can take against ballistic, cruise and hypersonic missiles.

    Iain Boyd receives funding from the U.S. Department of Defense and Lockheed-Martin Corporation, a defense contractor that sells missile defense systems and could potentially benefit from the implementation of Golden Dome.

    ref. Golden Dome: An aerospace engineer explains the proposed US-wide missile defense system – https://theconversation.com/golden-dome-an-aerospace-engineer-explains-the-proposed-us-wide-missile-defense-system-257408

    MIL OSI – Global Reports

  • MIL-OSI Global: Not just talk: how dialogue can help address complex problems

    Source: The Conversation – Africa – By Ralph Hamann, Professor, University of Cape Town

Societies around the world are confronted with complex problems that defy resolution by any single actor, even well-resourced governments or corporations. Problems like food security, climate change or biodiversity loss involve many interacting elements and dynamics. A variety of stakeholders need to be involved in creating effective responses to such problems.

    The difficulty is not only in creating coordinated responses. There is often also a need to develop a shared understanding of what the problem and its underlying causes actually are.

    To foster a shared understanding and coordinated, innovative action, it can help to convene key players in multi-stakeholder dialogue processes.

    A first step is to identify and enrol the actors that are either influential in – or directly affected by – the focal problem. These people are then invited to engage in dialogue with each other in a carefully designed, structured process.

    Processes can take a variety of forms. But a common feature is that participants have enough time and support to look at the problem from different angles, to interact in ways that break down stereotypes, and to think afresh about new ways of acting.

    Fifteen years ago, we were involved in establishing a platform for multi-stakeholder dialogue with a focus on the problem of hunger and food insecurity. It is called the Southern Africa Food Lab. Recently, we analysed the numerous dialogue processes hosted by this initiative over the years to better understand when and how they can make a positive difference.

    We found that even though some dialogue processes don’t seem to be obviously successful, they can play an important role in enabling subsequent dialogues to have far-reaching impacts. And for dialogue to have an impact, it needs to involve a “deeper” kind of participant interaction, beyond formal roles, polite facades, and adversarial debate.

    What does success look like, and when is it achieved?

    Participants and funders are unlikely to remain committed to a dialogue process if they feel it is little more than a series of “talk-shops”. We wanted to achieve tangible changes in government policies and corporate strategies, or collaborative actions that combine resources from different organisations.

    Because we had hosted numerous dialogue initiatives over the 15-year lifespan of the Food Lab, in our analysis we were able to compare different processes in terms of their impacts.

    We found that some of the dialogue processes – especially the early ones – had relatively limited impacts. Though the participants said they’d gained new insights and formed new relationships, there were few changes in organisational policies or practices.

    For example, early on in the initiative, we hosted a dialogue on supporting smallholder farmers. Participants emphasised that they learnt important lessons during this process. During field trips in different parts of the country, they came to appreciate the diverse difficulties encountered by smallholder farmers. And government officials appreciated academics’ analysis of the different kinds of smallholder farmers and corresponding support needs. But these insights and experiences did not yet result in changes in organisational behaviours or strategies.

    Other initiatives were more obviously successful in creating new and influential responses to the hunger problem. For example, we convened a second dialogue focused on smallholder farmers 18 months after the first one. It included some of the same participants as the first process, as well as others. This process resulted in more far-reaching changes.

    For instance, retail companies agreed to revise their supplier standards so that smallholder farmers’ diverse needs and challenges were better accounted for. Government officials used the dialogue to redesign their agricultural extension services. A farmer training programme was established with links to a more context-sensitive and supportive certification system.

    In our analysis, we considered many different explanations for why some dialogue processes were more successful than others. We discovered a pattern: our early dialogue processes were less likely to have impact than the follow-up dialogues that built on them.

    The early dialogues played a crucial role, however, in preparing the ground for the subsequent dialogues to be more effective. They helped participants develop the insights and relationships that enabled the deeper engagement necessary to create real changes.

    What kind of dialogue is needed?

    To create meaningful change, a dialogue needs to move from what we call “shallow” to “deep” dialogue. Shallow dialogue is the more common kind. It is what happens when different people are invited to a workshop and their interactions are shaped by their established views of themselves, the problem at hand, and other actors. Often they hide behind polite facades or blame each other.

    Deep dialogue, in contrast, has a distinct flavour and temperament. Participants gain a more multi-faceted understanding of each other. Thabo is not just a government official but also passionate about nature-based farming. John is not just a corporate manager but also volunteers for animal rights.

    Participants’ focus shifts from defending their personal views or organisational interests to a more expansive, genuine interest in learning from each other, and to exploring new ways to understand the focal problem and possible responses.

    How can this kind of dialogue be achieved?

    First, the potential for multi-stakeholder dialogue needs to be carefully assessed and motivated. Participants and funders need to agree that the problem is complex and in need of fresh responses. This rationale needs to be continuously reviewed and communicated to maintain commitment and engagement.

    Second, it is important to get the “right people” to participate in the process. This includes actors with influence, such as government officials or leaders. But it also includes people who are most directly affected by the focal problem, not least because they have unique knowledge about it.

    Third, convening and facilitating dialogue requires a range of commitments, resources and skills. For a start, as university-based researchers we had some degree of convening power. Participants perceived us to have at least some degree of neutrality. We needed to maintain this perception as much as possible, for example by being careful about what funding to accept. This was important given the controversies in the food security field.

    We also had to make sure we had the necessary facilitation competencies. Especially in the early years, we benefited from highly experienced facilitators. A facilitator needs to be able to make participants feel comfortable but, when necessary, challenge them to move beyond their “comfort zone”.

    Finally, it is helpful to recognise the cyclical and longer-term nature of dialogue – earlier processes create the “groundwork” for subsequent ones. This means that, as conveners, we needed to find ways of keeping the initiative alive in the periods in between dialogue processes, even if there was no funding available. In our case, it helped that we were university researchers who did not rely on consulting fees. More generally, conveners and funders should budget for “bridging” resources to enable the longer-term unfolding of dialogue’s true impact.

    Rebecca Freeth is a co-author of this article. She is a senior consultant with Reos Partners (Africa office).

    Ralph Hamann’s work with the Southern Africa Food Lab has benefited from funding from the African Climate and Development Institute, the University of Cape Town, and the National Research Foundation. The Food Lab’s funders are listed on its website.

    Scott Drimie co-directs the Southern Africa Food Lab.

    Warren Nilsson is affiliated with the University of Vermont and the Institute for Collective Wellbeing.

    ref. Not just talk: how dialogue can help address complex problems – https://theconversation.com/not-just-talk-how-dialogue-can-help-address-complex-problems-256825

    MIL OSI – Global Reports

  • MIL-OSI Global: Sugary drinks, processed foods, alcohol and tobacco are big killers: why the G20 should add its weight to health taxes

    Source: The Conversation – Africa – By Karen Hofman, Professor and Programme Director, SA MRC Centre for Health Economics and Decision Science – PRICELESS SA (Priority Cost Effective Lessons in Systems Strengthening South Africa), University of the Witwatersrand

    By 2030, non-communicable diseases will account for 75% of all deaths annually. Eighty percent of these will be in the global south. Most of these diseases are what we call silent killers: type 2 diabetes, high blood pressure and heart disease, as well as certain types of cancer at increasingly younger ages.

    The consumption of sugary drinks and processed foods high in sugar, salt and saturated fats is fuelling these pandemics. Advertising is increasingly seen as the means by which the consumption of unhealthy products is promoted, translating into the growth of non-communicable diseases in populations across the globe. This rising threat is driven largely by the way markets and industries are organised, which, in turn, shapes social norms towards consumption of tobacco, alcohol, food and sugary beverages.

    This process is what’s known as commercial determinants of health.

    Products that top the list in terms of their risk to health are tobacco, sugary beverages, ultra processed food and alcohol.

    These products are heavily advertised. For example, in South Africa from 2013 to 2019, sugary beverage manufacturers spent US$191 million (R3.7 billion) to advertise their products. Many of the TV advertisements for sugary drinks were placed during child and family viewing time, between 3pm and 7pm.

    Over the past decade a number of countries have introduced policies in a bid to limit the use and intake of harmful food and beverages. These have ranged from taxes on certain products, such as sugar, alcohol and tobacco, to bans on advertising. Many have proved effective. But there are still big gaps in policies to control these harmful products.

    As academics who have researched this field for three decades, we believe that the G20 can play a significant role in plugging these gaps. The countries under the G20 umbrella, which represent two thirds of the world’s population, have reason to act: all are experiencing a mounting burden of obesity-related illness such as diabetes, high blood pressure and cancer at ever-younger ages.

    One of South Africa’s G20 presidency health priorities is “stemming the tide of non-communicable diseases”. In our view this is an invitation for the G20 to pledge to combat the drivers of non-communicable diseases.

    The G20 can acknowledge that these diseases are part of a pathological system in which commercial actors are causing ill health. And G20 leaders can acknowledge that progress enacting health taxes has stagnated in most countries.

    By galvanising attention in this way, the G20 can give impetus to a high-level United Nations meeting in 2025 at which a new vision for the control and prevention of non-communicable diseases is due to be set. Health taxes and bans on marketing are focus areas.

    What stands in the way of progress

    Efforts by various countries to curb consumption of these harmful products have shown one thing clearly: there’s no silver bullet.

    Nevertheless, evidence shows that consumers are responsive to price, which makes taxes a key tool for decreasing demand, especially among young consumers.

    Read more: Sugary drinks are a killer: a 20% tax would save lives and rands in South Africa
    There is also mounting evidence that health taxes are progressive for health at a population level – in other words they lead to better health outcomes. Research also shows that they scarcely affect overall employment, if at all.

    But advances on alcohol and tobacco taxes are slow. And there has been little progress on taxes on sugary beverages.

    These taxes remain far too low because they face tough resistance from industry. When any health promotion tax is proposed, industries deny harms, promote doubt, divert attention, spread disinformation, create front organisations, and burnish their reputations through corporate social responsibility initiatives.

    When taxes do proceed through the legislative or regulatory process, industries influence proposals to make them less effective. They also offer to replace legislation with voluntary commitments. Evidence shows that voluntary commitments do not work.

    What would be gained

    In 2024, a report by a panel of experts showed that US$3.7 trillion in additional revenue could be generated over five years if all countries increased prices of tobacco, alcohol and sugary beverages by 50%.

    This money is sorely needed to boost healthcare. Non-communicable diseases disproportionately affect the poorest and most vulnerable, and healthcare systems are increasingly unable to cope. Screening, diagnosis, medications and treatment are very expensive both for ministries of finance and for households, where health needs can result in catastrophic expenditure.

    And taxes that generate a 50% increase in real prices of tobacco, alcohol and sugary beverages would save 50 million lives globally over 50 years.

    Where to begin

    We believe the G20 platform is a sound one on which to champion efforts to curb the consumption of harmful products. This is because half of the countries in the group already have at least one or two food-related policies, such as taxes on sweetened beverages. Their experiences can therefore inform debates about how to protect the public from the fatal effects of diet-influenced diseases.

    But building a solid foundation won’t be easy. What’s needed is for the G20 to put its weight behind these key points:

    • Promoting good health before people get sick should be an imperative because the cost of inaction in financial and human terms is just too high.

    • Promoting the case for raising tobacco taxes, because tobacco continues to cause the most death and illness. But taxation has stalled. Approximately 90% of smokers live in countries where cigarettes were equally or more affordable in 2022 than they were five years earlier.

    • A renewed focus on alcohol taxes, which have shown little improvement in the last decade. Alcohol excise taxes are not being used effectively.

    • Fresh impetus behind increasing the level of taxes as a percentage of the cost of sugar sweetened beverages. Evidence suggests that to be effective, taxes on sugar sweetened beverages should increase product prices by at least 20%.

    • Championing nutrition regulation when navigating the trade and nutrition policy environment. Trade policies can be inconsistent with health policies.

    • Lastly, pushing for stronger global monitoring frameworks to track corporate accountability in health. This should include clear conflict of interest policies, information management, and exposing when corporations try to shape their own evidence base or discredit research that supports public health policies.

    Susan Goldstein receives funding from the SAMRC, the NIHR and UNICEF. She is a Board Member of the Southern African Alcohol Policy Alliance: South Africa.

    Karen Hofman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Sugary drinks, processed foods, alcohol and tobacco are big killers: why the G20 should add its weight to health taxes – https://theconversation.com/sugary-drinks-processed-foods-alcohol-and-tobacco-are-big-killers-why-the-g20-should-add-its-weight-to-health-taxes-256024

    MIL OSI – Global Reports

  • MIL-OSI Global: Queer country: LGBTQ+ musicians are outside the spotlight as Grand Ole Opry turns 100

    Source: The Conversation – USA – By Tanya Olson, Associate Teaching Professor, University of Maryland, Baltimore County

    The iconic circle on the Grand Ole Opry stage. Who gets to stand in it? Timothy Wildey/Flickr, CC BY-NC

    On March 15, 1974, the Grand Ole Opry country music radio show closed its run at the Ryman Auditorium in Nashville, Tennessee, with Johnny and June Carter Cash leading the song “Will the Circle Be Unbroken.” After that final show, a six-foot circle of wood was cut from the Ryman stage and moved to the new Grand Ole Opry House.

    The next night, Roy Acuff opened the first show at the new venue. A video of Acuff singing in the 1940s played before the screen lifted to reveal Acuff himself, singing live in the same spot. The message was clear: Though the stage had changed, the story continued. The circle had not been broken.

    The Opry began on WSM on Nov. 28, 1925, and is celebrating its centennial with a series of concerts and tributes under the name Opry 100. On March 19, 2025, Reba McEntire stepped onto the iconic circle on the Grand Ole Opry stage and kicked off NBC’s Opry 100 celebration with a verse of “Sweet Dreams.”

    The final song of the night was “Will the Circle Be Unbroken,” performed by country legends like Bill Anderson and Jeannie Seely alongside newcomers like Lainey Wilson and Post Malone. It was a moment meant to celebrate 100 years of country music tradition and connection with a stage full of voices harmonizing across generations. A circle, unbroken.

    But that night in March, one group of country performers was missing. Not a single openly gay, lesbian or bisexual artist appeared onstage during the anniversary celebration. In a moment designed to honor the full sweep of the genre’s past and future, a long line of country musicians was left standing outside the spotlight once again.

    Wilma Burgess’ sexuality was common knowledge in music industry circles in the 1960s and ‘70s.

    A slowly opening circle

    Country music has never been without queer voices, but it regularly refuses to acknowledge them.

    From 1962 to 1982, Wilma Burgess had 15 songs on Billboard’s Hot Country chart and two Grammy Award nominations. She recorded with legendary producer Owen Bradley and had Top 10 hits like “Misty Blue.” Despite this success, Burgess never played the Opry. Though Burgess was never publicly out, her sexuality was common knowledge in recording circles. In the 1980s, she left music and opened The Hitching Post, Nashville’s first lesbian bar. Like so many queer country artists, Burgess had to build her legacy outside the circle.

    In the 1980s and 90s, k.d. lang and Sid Spencer expanded the presence of queer artists in country music. Lang won two Grammys and performed at the Opry, but she was labeled “cowpunk” and left the genre before coming out in 1992. Spencer released albums and toured widely within the gay rodeo circuit, but he was never recognized by mainstream country before his 1996 death from AIDS-related complications.

    The 2000s offered small openings. Mary Gauthier became the first openly queer artist to perform on the Opry stage in 2005. Chely Wright had a No. 1 country single before coming out in 2010, but didn’t return to the Opry until 2019. Ty Herndon charted 17 singles before coming out in 2014. He wouldn’t appear at the Opry again until 2023.

    These artists established themselves first and came out later, at great professional cost. The Opry hosts 5–6 shows a week, featuring 6–8 artists each night. In that context, a nine-year absence isn’t just a scheduling gap. In addition, the Grand Ole Opry currently has 76 members, a special designation indicating a level of success in country music. None of them identify as LGBTQ+.

    Today, there are signs of change. Lily Rose, who has been openly queer since the beginning of her career, receives radio play, has songs on the charts and tours widely. But she remains the exception, not the rule. Other openly LGBTQ+ artists like Paisley Fields, Mya Byrne and Amythyst Kiah are recording, performing and building loyal audiences, but they are still rarely featured on country radio or invited onto the Opry stage. The circle may be widening, but for many queer artists, it’s still just out of reach.

    The importance of the circle

    In country music, visibility isn’t just symbolic. If you’re not on the radio, you don’t chart. If you don’t chart, you don’t tour. Without that platform, you can’t build a legacy.

    Country radio and the Opry stage serve as gatekeepers of who counts. In 2015, a radio consultant infamously compared women artists to “tomatoes in the salad,” stating a few were fine, but they shouldn’t dominate. That same logic has long applied to queer artists; they can be tolerated at the edges but are rarely treated as essential.

    Genre labeling becomes another barrier. Brandi Carlile and Brandy Clark both openly identify as lesbians and have been embraced by country audiences and critics alike, but they are routinely categorized as Americana artists. That rebranding often functions as a fence that keeps artists close enough to celebrate, but far enough to exclude.

    Gina Venier is one of today’s many openly gay country artists.

    Reimagining the circle

    The Opry’s centennial celebrations are scheduled to continue through the end of 2025 with a concert at London’s Royal Albert Hall and a final anniversary show in Nashville on Nov. 28. Perhaps openly queer artists will take the stage at those events. If they do, it won’t just be symbolic; it will be a long overdue acknowledgment of artists who have always been here, even if they weren’t always seen.

    Country music’s strength lies in how it braids together American traditions: gospel and blues, Black and white, rural and urban, old and new. It’s not a genre built on purity, but one that relies on the mix. That mix is what makes country music American – and what makes it endure.

    If the circle on the Opry stage is meant to stand for country music itself, then I hope it will be like the music: honest and able to grow. If “Will the Circle Be Unbroken” is more of a promise than just a closing number, the future of country music depends on who’s allowed in the circle to sing it next.

    Tanya Olson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Queer country: LGBTQ+ musicians are outside the spotlight as Grand Ole Opry turns 100 – https://theconversation.com/queer-country-lgbtq-musicians-are-outside-the-spotlight-as-grand-ole-opry-turns-100-251892

    MIL OSI – Global Reports

  • MIL-OSI Global: Could a bold anti-poverty experiment from the 1960s inspire a new era in housing justice?

    Source: The Conversation – USA – By Deyanira Nevárez Martínez, Assistant Professor of Urban and Regional Planning, Michigan State University

    Model Cities staff in front of a Baltimore field office in 1971. Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    In cities across the U.S., the housing crisis has reached a breaking point. Rents are skyrocketing, homelessness is rising and working-class neighborhoods are threatened by displacement.

    These challenges might feel unprecedented. But they echo a moment more than half a century ago.

    In the 1950s and 1960s, housing and urban inequality were at the center of national politics. American cities were grappling with rapid urban decline, segregated and substandard housing, and the fallout of highway construction and urban renewal projects that displaced hundreds of thousands of disproportionately low-income and Black residents.

    The federal government decided to try to do something about it.

    President Lyndon B. Johnson launched one of the most ambitious experiments in urban policy: the Model Cities Program.

    As a scholar of housing justice and urban planning, I’ve studied how this short-lived initiative aimed to move beyond patchwork fixes to poverty and instead tackle its structural causes by empowering communities to shape their own futures.

    Building a great society

    The Model Cities Program emerged in 1966 as part of Johnson’s Great Society agenda, a sweeping effort to eliminate poverty, reduce racial injustice and expand social welfare programs in the United States.

    Earlier urban renewal programs had been roundly criticized for displacing communities of color. Much of this displacement occurred through federally funded highway and slum clearance projects that demolished entire neighborhoods and often left residents without decent options for new housing.

    So the Johnson administration sought a more holistic approach. The Demonstration Cities and Metropolitan Development Act established a federal framework for cities to coordinate housing, education, employment, health care and social services at the neighborhood level.

    New York City neighborhoods designated for revitalization with funding from the Model Cities Program.
    The City of New York, Community Development Program: A Progress Report, December 1968.

    To qualify for the program, cities had to apply for planning grants by submitting a detailed proposal that included an analysis of neighborhood conditions, long-term goals and strategies for addressing problems.

    Federal funds went directly to city governments, which then distributed them to local agencies and community organizations through contracts. These funds were relatively flexible but had to be tied to locally tailored plans. For example, Kansas City, Missouri, used Model Cities funding to support a loan program that expanded access to capital for local small businesses, helping them secure financing that might otherwise have been out of reach.

    Unlike previous programs, Model Cities emphasized what Johnson described as “comprehensive” and “concentrated” efforts. It wasn’t just about rebuilding streets or erecting public housing. It was about creating new ways for government to work in partnership with the people most affected by poverty and racism.

    A revolutionary approach to poverty

    What made Model Cities unique wasn’t just its scale but its philosophy. At the heart of the program was an insistence on “widespread citizen participation,” which required cities that received funding to include residents in the planning and oversight of local programs.

    The program also drew inspiration from civil rights leaders. One of its early architects, Whitney M. Young Jr., had called for a “Domestic Marshall Plan” – a reference to the federal government’s efforts to rebuild Europe after World War II – to redress centuries of racial inequality.

    Civil rights activist Whitney M. Young Jr. helped shape the vision of the Model Cities Program.
    Bettmann/Getty Images

    Young’s vision helped shape the Model Cities framework, which proposed targeted systemic investments in housing, health, education, employment and civic leadership in minority communities. In Atlanta, for example, the Model Cities Program helped fund neighborhood health clinics and job training programs. But the program also funded leadership councils that for the first time gave local low-income residents a direct voice in how city funds were spent.

    In other words, neighborhood residents weren’t just beneficiaries. They were planners, advisers and, in some cases, staffers.

    This commitment to community participation gave rise to a new kind of public servant – what sociologists Martin and Carolyn Needleman famously called “guerrillas in the bureaucracy.”

    A Model Cities staffer discusses the program with a group of students gathered at Denver’s Metropolitan Youth Education Center in 1970.
    Bill Wunsch/The Denver Post via Getty Images

    These were radical planners – often young, idealistic and deeply embedded in the neighborhoods they served. Many were recruited and hired through new Model Cities funding that allowed local governments to expand their staff with community workers aligned with the program’s goals.

    Working from within city agencies, these new planners used their positions to challenge top-down decision-making and push for community-driven planning.

    Their work was revolutionary not because they dismantled institutions but because they reimagined how institutions could function, prioritizing the voices of residents long excluded from power.

    Strengthening community ties

    In cities across the country, planners fought to redirect public resources toward locally defined priorities.

    A mobile dentist office in Baltimore.
    Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    In some cities, such as Tucson, the program funded education initiatives such as bilingual cultural programming and college scholarships for local students. In Baltimore, it funded mobile health services and youth sports programs.

    In New York City, the program supported new kinds of housing projects called vest-pocket developments, which got their name from their smaller scale: midsize buildings or complexes built on vacant lots or underutilized land. New housing such as the Betances Houses in the South Bronx was designed to add density without major redevelopment – a direct response to midcentury urban renewal projects, which had destroyed and displaced entire neighborhoods populated by the city’s poorest residents. Meanwhile, cities such as Seattle used the funds to renovate older apartment buildings instead of tearing them down, which helped preserve the character of local neighborhoods.

    The goal was to create affordable housing while keeping communities intact.

    An Atlanta neighborhood identified as a candidate for street paving and home rehabilitation as part of the Model Cities Program.
    Georgia State University Special Collections

    What went wrong?

    Despite its ambitious vision, Model Cities faced resistance almost from the start. The program was underfunded and politically fragile. While some officials had hoped for US$2 billion in annual funding, the actual allocation was closer to $500 million to $600 million, spread across more than 60 cities.

    Then the political winds shifted. Though designed during the optimism of the mid-1960s, the program was implemented under President Richard Nixon beginning in 1969. His administration pivoted away from “people programs” and toward capital investment and physical development. Requirements for resident participation were weakened, and local officials often maintained control over the process, effectively marginalizing the everyday citizens the program was meant to empower.

    In cities such as San Francisco and Chicago, residents clashed with bureaucrats over control, transparency and decision-making. In some places, participation was reduced to token advisory roles. In others, internal conflict and political pressure made sustained community governance nearly impossible.

    Critics, including Black community workers and civil rights activists, warned that the program risked becoming a new form of “neocolonialism,” one that used the language of empowerment while concentrating control in the hands of white elected officials and federal administrators.

    A legacy worth revisiting

    Although the program was phased out by 1974, its legacy lived on.

    In cities across the country, Model Cities trained a generation of Black and brown civic leaders in what community development leaders and policy advocates John A. Sasso and Priscilla Foley called “a little noticed revolution.” In their book of the same name, they describe how those involved in the program went on to serve in local government, start nonprofits and advocate for community development.

    It also left an imprint on later policies. Efforts such as participatory budgeting, community land trusts and neighborhood planning initiatives owe a debt to Model Cities’ insistence that residents should help shape the future of their communities. And even as some criticized the program for failing to meet its lofty goals, others saw its value in creating space for democratic experimentation.

    A housing meeting takes place at a local Model Cities field office in Baltimore in 1972.
    Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    Today’s housing crisis demands structural solutions to structural problems. The affordable housing crisis is deeply connected to other intersecting crises, such as climate change, environmental injustice and health disparities, creating compounding risks for the most vulnerable communities. Addressing these issues through a fragmented social safety net – whether through housing vouchers or narrowly targeted benefit programs – has proven ineffective.

    Today, as policymakers once again debate how to respond to deepening inequality and a lack of affordable housing, the lost promise of Model Cities offers vital lessons.

    Model Cities was far from perfect. But it offered a vision of how democratic, local planning could promote health, security and community.

    Deyanira Nevárez Martínez is a trustee of the Lansing School District Board of Education and is currently a candidate for the Lansing City Council Ward 2.

    ref. Could a bold anti-poverty experiment from the 1960s inspire a new era in housing justice? – https://theconversation.com/could-a-bold-anti-poverty-experiment-from-the-1960s-inspire-a-new-era-in-housing-justice-253706

    MIL OSI – Global Reports

  • MIL-OSI Global: Air traffic controller shortages in Newark and other airports partly reflect long, intense training − but university-based training programs are becoming part of the solution

    Source: The Conversation – USA – By Melanie Dickman, Lecturer in Aviation Studies, The Ohio State University

    Air traffic controllers observe a plane taking off from San Francisco International Airport in 2017. AP Photo/Jeff Chiu

    Air traffic controllers have been in the news a lot lately.

    A spate of airplane crashes and near misses has highlighted the ongoing shortage of air traffic workers, leading more Americans to question the safety of air travel.

    The shortage, as well as aging computer systems, has also led to massive flight disruptions at airports across the country, particularly at Newark Liberty International Airport. The staffing shortage is also likely at the center of an investigation into a deadly crash between a commercial plane and an Army helicopter over Washington, D.C., in January 2025.

    One reason for the air traffic controller shortage relates to the demands of the job: The training to become a controller is extremely intense, and the Federal Aviation Administration wants only highly qualified personnel to fill those seats. That has made it difficult for what has been the sole training center in the U.S., located in Oklahoma City, to produce enough qualified graduates each year.

    As scholars who study and teach tomorrow’s aviation professionals, we are working to be part of the solution. Our program at Ohio State University is applying to join over two dozen other schools in an effort to train air traffic controllers and help alleviate the shortage.

    Air traffic controller school

    Air traffic control training today – overseen by the Federal Aviation Administration – remains as intense as it’s ever been.

    In fact, about 30% of students fail to make it from their first day of training at the FAA Academy in Oklahoma City to the status of a certified professional air traffic controller. The academy currently trains the majority of the air traffic controllers in the U.S.

    Before someone is accepted into the training program, they must meet several qualifications. That includes being a U.S. citizen under the age of 31 and speaking English clearly enough to be understood over the radio. The low recruitment age is because controllers currently have a mandatory retirement age of 56 – with some exceptions – and the FAA wants them to work for at least 25 years in the job.

    They must also pass a medical exam and security investigation. And they must pass the air traffic controller specialists skills assessment battery, which measures an applicant’s spatial awareness and decision-making abilities.

    Candidates, additionally, must have three years of general work experience, or a combination of postsecondary education and work experience totaling at least three years.

    This alone is no easy feat. Fewer than 10% of applicants meet those initial requirements and are accepted into training.

    An air traffic controller monitors a runway in the tower at John F. Kennedy International Airport in New York.
    AP Photo/Seth Wenig

    Intense training

    Once applicants meet the initial qualifications, they begin a strenuous training process.

    This begins with several weeks of classroom instruction and several months of simulator training. There are several types of simulators, and a student is assigned to a simulator based on the type of facility for which they will be hired – which depends on a trainee’s preference and where controllers are needed.

    There are two main types of air traffic facilities: control towers and radar. Anyone who has flown on a plane has likely seen a control tower near the runways, with 360 degrees of tall glass windows to monitor the skies nearby. Controllers there mainly look outside to direct aircraft but also use radar to monitor the airspace and assist aircraft in taking off and landing safely.

    Radar facilities, on the other hand, monitor aircraft solely through the use of information depicted on a screen. This includes aircraft flying just outside the vicinity of a major airport or when they’re at higher altitudes and crisscrossing the skies above the U.S. The controllers ensure they don’t fly too close to one another as they follow their flight paths between airports.

    If the candidates make it through the first stage – which takes about six months and involves extensive testing to meet standards – they will be sent to their respective facilities.

    Once there, they again go to the classroom, learning the details of the airspace they will be working in. There are more assessments and chances to “wash out” and have to leave the program.

    Finally, the candidates are paired with an experienced controller who conducts on-the-job training to control real aircraft. This process may take an additional year or more. It depends on the complexity of the airspace and the amount of aircraft traffic at the site.

    Two control towers watch over Newark Liberty International Airport, where a shortage of air traffic controllers has led to blackouts and other problems lately.
    AP Photo/Seth Wenig

    Increasing the employment pipeline

    But no matter how good the training is, if there aren’t enough graduates, that’s a problem for managing the increasingly crowded skies.

    The FAA is currently facing a deficit of about 3,000 controllers and unveiled a plan in May 2025 to increase hiring and boost retention. In addition, Congress is mulling spending billions of dollars to update the FAA’s aging systems and hire more air traffic controllers.

    Other plans include paying retention bonuses and allowing more controllers to work beyond the age of 56. That retirement age was put in place in the 1970s on the assumption that cognition for most people begins to decline around then, although research shows that age alone is not necessarily a predictor of cognitive abilities.

    But we believe that aviation programs and universities can play an important role in fixing the shortage by providing FAA Academy-level training.

    Currently, 32 universities, including the Florida Institute of Technology and Arizona State University, partner with the FAA in its collegiate training initiative to provide basic air traffic control training, which gives graduates automatic entry into the FAA Academy and allows them to skip five weeks of coursework.

    The institution where we work, Ohio State University, is currently working on becoming the 33rd this summer and plans to offer an undergraduate major in aviation with specialization in air traffic control.

    This helps, but an enhanced version of this program, announced in October 2024, allows graduates of a select few of those universities to skip the FAA Academy altogether and go straight to a control tower or radar facility once they’ve passed all the extensive tests. These schools must match or exceed the rigor of the FAA Academy’s own training.

    At the end of the program, students are required to pass an evaluation by an FAA-approved evaluator to ensure that each graduate meets the same standards as all FAA Academy graduates and is prepared to go to their assigned facility for further training. So far, five schools, including the University of North Dakota, have joined this program and are currently training air traffic controllers. We intend to join this group in the near future.

    Allowing colleges and universities to start the training process while students are still in school should accelerate the pace at which new controllers enter the workforce, alleviate the shortage and make the skies over the U.S. as safe as they can be.

    Melanie Dickman is a member at large of the Air Traffic Controllers Association

    Brian Strzempkowski does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Air traffic controller shortages in Newark and other airports partly reflect long, intense training − but university-based training programs are becoming part of the solution – https://theconversation.com/air-traffic-controller-shortages-in-newark-and-other-airports-partly-reflect-long-intense-training-but-university-based-training-programs-are-becoming-part-of-the-solution-249715


  • MIL-OSI Global: Managing forests and other ecosystems under rising threats requires thinking across wide-ranging scenarios

    Source: The Conversation – USA – By Kyra Clark-Wolf, Research Scientist in Ecological Transformation, University of Colorado Boulder

    Thinking through scenarios allows land managers to prepare for many potential outcomes. Benjamin Slyngstad via USGS

    In Sequoia and Kings Canyon National Parks in California, trees that have persisted through rain and shine for thousands of years are now facing multiple threats triggered by a changing climate.

    Scientists and park managers once thought giant sequoia forests nearly impervious to stressors like wildfire, drought and pests. Yet, even very large trees are proving vulnerable, particularly when those stressors are amplified by rising temperatures and increasing weather extremes.

    The rapid pace of climate change – combined with threats like the spread of invasive species and diseases – can affect ecosystems in ways that defy expectations based on past experiences. As a result, Western forests are transitioning to grasslands or shrublands after unprecedented wildfires. Woody plants are expanding into coastal wetlands. Coral reefs are being lost entirely.

    Nate Stephenson, from the U.S. Geological Survey, talks about the fire damage at Redwood Mountain Grove in the Kings Canyon National Park, Calif., in 2021.
    AP Photo/Gary Kazanjian

    To protect these places, which are valued for their natural beauty and the benefits they provide for recreation, clean water and wildlife, forest and land managers increasingly must anticipate risks they have never seen before. And they must prepare for what those risks will mean for stewardship as ecosystems rapidly transform.

    As ecologists and a climate scientist, we’re helping them figure out how to do that.

    Managing changing ecosystems

    Traditional management approaches focus on maintaining or restoring how ecosystems looked and functioned historically.

    However, that doesn’t always work when ecosystems are subjected to new and rapidly shifting conditions.

    Ecosystems have many moving parts – plants, animals, fungi and microbes; and the soil, air and water in which they live – that interact with one another in complex ways.

    When the climate changes, it’s like shifting the ground on which everything rests. The results can undermine the integrity of the system, leading to ecological changes that are hard to predict.

    To plan for an uncertain future, natural resource managers need to consider many different ways changes in climate and ecosystems could affect their landscapes. Essentially, what scenarios are possible?

    Preparing for multiple possibilities

    At Sequoia and Kings Canyon, park managers were aware that climate change posed some big risks to the iconic trees under their care. More than a decade ago, they undertook a major effort to explore different scenarios that could play out in the future.

    It’s a good thing they did, because some of the more extreme possibilities they imagined happened sooner than expected.

    In 2014, drought in California caused the giant sequoias’ foliage to die back, something never documented before. In 2017, sequoia trees began dying from insect damage. And, in 2020 and 2021, fires burned through sequoia groves, killing thousands of ancient trees.

    While these extreme events came as a surprise to many people, thinking through the possibilities ahead of time meant the park managers had already begun to take steps that proved beneficial. One example was prioritizing prescribed burns to remove undergrowth that could fuel hotter, more destructive fires.

    Insulating wraps protected the giant sequoia General Sherman from a fire in 2021.
    Patrick T. Fallon/AFP via Getty Images

    The key to effective planning is a thoughtful consideration of a suite of strategies that are likely to succeed in the face of many different changes in climates and ecosystems. That involves thinking through wide-ranging potential outcomes to see how different strategies might fare under each scenario – including preparing for catastrophic possibilities, even those considered unlikely.

    For example, prescribed burning may reduce risks from both catastrophic wildfire and drought by reducing the density of plant growth, whereas suppressing all fires could increase those risks in the long run.

    Strategies undertaken today have consequences for decades to come. Managers need to have confidence that they are making good investments when they put limited resources toward actions like forest thinning, invasive species control, buying seeds or replanting trees. Scenarios can help inform those investment choices.

    Constructing credible scenarios of ecological change to inform this type of planning requires considering the most important unknowns. Scenarios look not only at how the climate could change, but also how complex ecosystems could react and what surprises might lie beyond the horizon.

    Scientists at the North Central Climate Adaptation Science Center are collaborating with managers in the Nebraska Sandhills to develop scenarios of future ecological change under different climate conditions, disturbance events like fires and extreme droughts, and land uses like grazing.
    Photos: T. Walz, M. Lavin, C. Helzer, O. Richmond, NPS (top to bottom), CC BY

    Key ingredients for crafting ecological scenarios

    To provide some guidance to people tasked with managing these landscapes, we brought together a group of experts in ecology, climate science, and natural resource management from across universities and government agencies.

    We identified three key ingredients for constructing credible ecological scenarios:

    1. Embracing ecological uncertainty: Instead of banking on one “most likely” outcome for ecosystems in a changing climate, managers can better prepare by mapping out multiple possibilities. In Nebraska’s Sandhills, we are exploring how this mostly intact native prairie could transform, with outcomes as divergent as woodlands and open dunes.

    2. Thinking in trajectories: It’s helpful to consider not just the outcomes, but also the potential pathways for getting there. Will ecological changes unfold gradually or all at once? By envisioning different pathways through which ecosystems might respond to climate change and other stressors, natural resource managers can identify critical moments where specific actions, such as removing tree seedlings encroaching into grasslands, can steer ecosystems toward a more desirable future.

    3. Preparing for surprises: Planning for rare disasters or sudden species collapses helps managers respond nimbly when the unexpected strikes, such as a severe drought leading to widespread erosion. Being prepared for abrupt changes and having contingency plans can mean the difference between quickly helping an ecosystem recover and losing it entirely.

    Over the past decade, access to climate model projections through easy-to-use websites has revolutionized resource managers’ ability to explore different scenarios of how the local climate might change.

    What managers are missing today is similar access to ecological model projections and tools that can help them anticipate possible changes in ecosystems. To bridge this gap, we believe the scientific community should prioritize developing ecological projections and decision-support tools that can empower managers to plan for ecological uncertainty with greater confidence and foresight.

    Ecological scenarios don’t eliminate uncertainty, but they can help to navigate it more effectively by identifying strategic actions to manage forests and other ecosystems.

    Kyra Clark-Wolf receives funding from USGS, NSF, and National Park Service. She is affiliated with the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder and the North Central Climate Adaptation Science Center.

    Brian W. Miller receives funding from the U.S. Geological Survey North Central Climate Adaptation Science Center. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

    Imtiaz Rangwala receives funding from USGS, USDA, NOAA, US Forest Service and National Park Service. He is affiliated with the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder, North Central Climate Adaptation Science Center, Western Water Assessment and Boundless In Motion.

    ref. Managing forests and other ecosystems under rising threats requires thinking across wide-ranging scenarios – https://theconversation.com/managing-forests-and-other-ecosystems-under-rising-threats-requires-thinking-across-wide-ranging-scenarios-253842


  • MIL-OSI Global: Christianity has long revered saints who would be called ‘transgender’ today

    Source: The Conversation – USA – By Sarah Barringer, Ph.D. Candidate in English, University of Iowa

    Several Republican-led states have restricted transgender rights: Iowa has enacted a law removing civil rights protections for transgender people; Wyoming has prohibited state agencies from requiring the use of preferred pronouns; and Alabama recently passed a law recognizing only two sexes. Hundreds of bills have been introduced in other state legislatures to curtail trans rights.

    Earlier in the year, several White House executive orders pushed to deny trans identity. One of them, “Eradicating Anti-Christian Bias,” claimed that gender-affirming policies of the Biden administration were “anti-Christian.” It accused the Biden Equal Employment Opportunity Commission of forcing “Christians to affirm radical transgender ideology against their faith.”

    To be clear, not all Christians are anti-trans. And in my research of medieval history and literature, I found evidence of a long history in Christianity of what today could be called “transgender” saints. While such a term did not exist in medieval times, the idea of men living as women, or women living as men, was unquestionably present in the medieval period. Many scholars have suggested that using the modern term transgender creates valuable connections to understand the historical parallels.

    There are at least 34 documented stories of transgender saints’ lives from the early centuries of Christianity. Originally appearing in Latin or Greek, several stories of transgender saints made their way into vernacular languages.

    Transgender saints

    Of the 34 original saints, at least three gained widespread popularity in medieval Europe: St. Eugenia, St. Euphrosyne and St. Marinos. All three were born as women but cut their hair and put on men’s clothes to live as men and join monasteries.

    Eugenia, raised pagan, joined a monastery to learn more about Christianity and later became abbot. Euphrosyne joined a monastery to escape an unwanted suitor and spent the rest of his life there. Marinos, born Marina, decided to renounce womanhood and live with his father at the monastery as a man.

    These were well-read stories. Eugenia’s story appeared in two of the most popular manuscripts of their day – Ælfric’s “Lives of Saints” and “The Golden Legend.” Ælfric was an English abbot who translated Latin saints’ lives into Old English in the 10th century, making them widely available to a lay audience. “The Golden Legend” was written in Latin and compiled in the 13th century; it is part of more than a thousand manuscripts.

    Euphrosyne also appears in Ælfric’s saints’ lives, as well as in other texts in Latin, Middle English, and Old French. Marinos’ story is available in over a dozen manuscripts in at least 10 languages. For those who couldn’t read, Ælfric’s saints’ lives and other manuscripts were read aloud in churches during service on the saint’s day.

    Euphrosyne of Alexandria.
    Anonymous via Wikimedia Commons

    A small church in Paris built in the 10th century was dedicated to Marinos, and relics of his body were supposedly kept in Qannoubine monastery in Lebanon.

    This is all to say, a lot of people were talking about these saints.

    Holy transness

    In the medieval period, saints’ lives were less important as history and more important as morality tales. As morality tales, these stories were not meant to be replicated by the audience, but to teach the emulation of Christian values. Transitioning between male and female becomes a metaphor for transitioning from pagan to Christian, affluence to poverty, worldliness to spirituality. The Catholic Church opposed cross-dressing in laws, liturgical meetings and other writings. However, Christianity honored the holiness of these transgender saints.

    In a 2021 collection of essays about transgender and queer saints in the medieval period, scholars Alicia Spencer-Hall and Blake Gutt argue that medieval Christianity saw transness as holy.

    “Transness is not merely compatible with holiness; transness itself is holy,” they write. Transgender saints had to reject convention in order to live their own authentic lives, just as early Christians had to reject convention in order to live as Christians.

    Literature scholar Rhonda McDaniel explains that in 10th-century England, adopting the Christian values of shunning wealth, militarism and sex made it easier for people to go beyond strict ideas about male and female gender. Instead of defining gender by separate male and female values, all individuals could be defined by the same Christian values.

    Historically and even in contemporary times, gender is associated with specific values and roles, such as assuming that homemaking is for women, or that men are stronger. But adopting these Christian values allowed individuals to transcend such distinctions, especially when they entered monasteries and nunneries.

    According to McDaniel, even cisgender saints like St. Agnes, St. Sebastian and St. George exemplified these values, exhibiting how anyone in the audience could push against gender stereotypes without changing their bodies.

    Agnes’ love of God allowed her to give up the role of wife. When offered love and wealth by men, she rejected them in favor of Christianity. Sebastian and George were powerful Roman men who were expected, as men, to engage in violent militarism. However, both rejected their violent Roman masculinity in favor of Christian pacifism.

    A life worth emulating

    Although most saints’ lives were written primarily as morality tales, the story of Joseph of Schönau was told as both very real and worthy of emulation by the audience. His story is told as a historical account of a life that would be attainable for ordinary Christians.

    In the late 12th century, Joseph, born female, joined a Cistercian monastery in Schönau, Germany. During his deathbed confession, Joseph told his life story, including his pilgrimage to Jerusalem as a child and his difficult journey back to Europe after the death of his father. When he finally returned to his birthplace of Cologne, he entered a monastery as a man in gratitude to God for returning him home safely.

    Despite arguing that Joseph’s life was worth emulating, the first author of Joseph’s story, Engelhard of Langheim, had a complicated relationship with Joseph’s gender. He claimed Joseph was a woman, but regularly used masculine pronouns to describe him.

    Marinos the monk.
    Richard de Montbaston via Wikimedia Commons

    Even though Eugenia, Euphrosyne and Marinos’ stories are told as morality tales, their authors had similarly complicated relationships with their gender. In the case of Eugenia, in one manuscript, the author refers to her with entirely female pronouns, but in another, the scribe slips into male pronouns.

    Marinos and Euphrosyne were also frequently referred to as male. The fact that the authors referred to these characters as male suggests that their transition to masculinity was not only a metaphor, but in some ways just as real as Joseph’s.

    Based on these stories, I argue that Christianity has a transgender history to pull from and many opportunities to embrace transness as an essential part of its values.

    Sarah Barringer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Christianity has long revered saints who would be called ‘transgender’ today – https://theconversation.com/christianity-has-long-revered-saints-who-would-be-called-transgender-today-254769


  • MIL-OSI Global: Pope Leo XIV is the first member of the Order of St. Augustine to be elected pope – but who are the Augustinians?

    Source: The Conversation – USA – By Joanne M. Pierce, Professor Emerita of Religious Studies, College of the Holy Cross

    Pope Leo XIV leaves the Augustinian General House in Rome after a visit on May 13, 2025. AP Photo/Domenico Stinellis

    When Pope Leo XIV was elected pope, the assembled crowd reacted with joy but also with surprise: He was the first pope from the United States, and North America more broadly. Moreover, he was the first member of the Order of St. Augustine to be elected to the papacy.

    Out of all 267 popes, only 51 have been members of religious orders. Pope Francis was elected in 2013 as the first member of the Jesuit order, the Society of Jesus; he was also the first member of any religious order to be chosen in over 150 years.

    As a specialist in medieval Christianity, I am familiar with the origins of many Catholic religious orders, and I was intrigued by the choice of a member of the Order of St. Augustine to follow a Jesuit as pope.

    So, who are the Augustinians?

    Early monks and concern for community

    In antiquity, some Christians chose to lead a more perfect religious life by leaving ordinary society and living together in groups, in the wilderness. They would be led by an older, more experienced person – an abbot. As monks, they followed a set of regulations and guidelines called a “monastic rule.”

    The earliest of these rules, composed about the year 400, is attributed to an influential theologian, later a bishop in North Africa, called St. Augustine of Hippo. The Rule of St. Augustine is a short text that offered monks a firm structure for their daily lives of work and prayer, as well as guidelines on how these rules could be implemented by the abbot in different situations. The rule is both firm and flexible.

    The first chapter stresses the importance of “common life”: It instructs monks to love God and one’s neighbor by living “together in oneness of mind and heart, mutually honoring God in yourselves, whose temples you have become.”

    This is the overriding principle that shapes all later instructions in the Augustinian rule.

    For example, Chapter III deals with how the monks should behave when out in public. They should not go alone, but in a group, and not engage in scandalous behavior – specifically, staring at women.

    If one monk starts staring at a woman, one of the other monks with him should “admonish” him. If he does it again, his companion should tell the abbot first, before any other witnesses are notified, so that the monk can try to change his behavior on his own first, so as not to cause disruption in the community.

    Because of this clarity and flexibility, and its concern for both the community and its individual members, many early religious communities in the early Middle Ages adopted the Rule of St. Augustine; formal papal approval was not required at this time.

    Mendicant friars in medieval Europe

    By the end of the 12th century, Western Europe had become much more urbanized.

    In response, a new form of religious life emerged: the mendicant friars. Unlike monks who withdrew from ordinary life, mendicants stressed a life of poverty, spent in travel from town to town to preach and help the poor. They would beg for alms along the way to provide for their own needs.

    The first mendicant orders, like the Franciscans and Dominicans, received papal approval in the early 13th century. Others were organized later.

    A few decades later, several hermits living in the Italian region of Tuscany decided to join together to form a new mendicant order. They chose to follow the Rule of St. Augustine under one superior general; Pope Innocent IV approved the new order as the Order of Hermits of St. Augustine in 1244. Later, in 1254, Pope Alexander IV included other groups of hermits in the order, known as the Grand Union.

    The new order grew and eventually expanded across Western Europe, becoming involved in preaching and other kinds of pastoral work in several countries.

    Early missionaries to modern times

    As European countries began to explore the New World, missionary priests took their place on ships sent from Catholic countries, like Spain and Portugal.

    Augustinians were among these early missionaries, quickly establishing themselves in Latin America, several countries in Africa and parts of Southeast Asia and Oceania, arriving in the Philippines in the 16th century.

    There, they not only ministered to the European crews and colonists, but they also evangelized – preached the Christian gospel – to the native inhabitants of the country.

    Augustinian missionaries started the process of setting up Catholic parishes and, eventually, new dioceses. In time, they founded and taught in seminaries to train native-born men who wanted to join their order.

    It wasn’t until the end of the 18th century that Augustinian friars arrived in the United States. Despite many struggles and setbacks in the 19th century, they established Villanova University in Pennsylvania and other ministries in New York and Massachusetts. Except for two 17th-century missionaries, Augustinian friars didn’t arrive in Canada until the 20th century, when they were sent from the German province of the order to escape financial pressure from the economic depression of the 1920s and political pressure from the Nazis.

    Pope Francis meets with members of the Order of Augustinian Recollects at the Vatican on Oct. 20, 2016.
    L’Osservatore Romano/Pool Photo via AP

    Today, there are some 2,800 Augustinian friars in almost 50 countries worldwide. They serve as pastors, teachers and bishops, and have founded schools, colleges and universities on almost every continent. They are also active in promoting social justice in many places – for example, in North America and Australasia, comprising Australia and parts of South Asia.

    Based on his years as a missionary and as provincial of the entire order worldwide, Leo XIV draws on the rich interpersonal tradition of the Order of St. Augustine. I believe his pontificate will be one marked by his experiential awareness of Catholicism as a genuinely global religion, and his deep concern for the suffering of the marginalized and those crushed by political and economic injustice.

    Joanne M. Pierce does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Pope Leo XIV is the first member of the Order of St. Augustine to be elected pope – but who are the Augustinians? – https://theconversation.com/pope-leo-xiv-is-the-first-member-of-the-order-of-st-augustine-to-be-elected-pope-but-who-are-the-augustinians-257175


  • MIL-OSI Global: Europeans are concerned that the US will withdraw support from NATO. They are right to worry − Americans should, too

    Source: The Conversation – USA – By John Deni, Research Professor of Joint, Interagency, Intergovernmental, and Multinational Security Studies, US Army War College

    American soldiers join 3,000 troops from other NATO member countries in a four-week exercise in Hohenfels, Germany, in March 2025. Sean Gallup/Getty Images

    The United States has long played a leadership role in NATO, the most successful military alliance in history.

    The U.S. and 11 other countries in North America and Europe founded NATO in 1949, following World War II. NATO has since grown its membership to include 32 countries in Europe and North America.

    But now, European leaders and politicians fear the United States has become a less reliable ally, posing major challenges for Europe and, by implication, NATO.

    This concern is not unfounded.

    President Donald Trump has repeatedly spoken of a desire to seize Greenland, which is an autonomous territory of Denmark, a NATO member. He has declared that Canada, another NATO member, should become “the 51st state.” Trump has also sided with Russia at the United Nations and said that the European Union, the political and economic group uniting 27 European countries, was designed to “screw” the U.S.

    Still, Trump – as well as other senior U.S. government officials – has said that the U.S. remains committed to staying in and supporting NATO.

    For decades, both liberal and conservative American politicians have recognized that the U.S. strengthens its own military and economic interests by being a leader in NATO – and by keeping thousands of U.S. troops based in Europe to underwrite its commitment.

    President Donald Trump speaks at a NATO Summit in July 2018 during his first term.
    Sean Gallup/Getty Images

    Understanding NATO

    The U.S., Canada and 10 Western European countries formed NATO nearly 80 years ago as a way to help maintain peace and stability in Europe following World War II. NATO helped European and North American countries bind together and defend themselves against the threat once posed by the Soviet Union, a former communist empire that fell in 1991.

    NATO employs about 2,000 people at its headquarters in Brussels. It does not have its own military troops and relies on its 32 member countries to volunteer their own military forces to conduct operations and other tasks under NATO’s leadership.

    NATO does have its own military command structure, led by an American military officer, and including military officers from other countries. This team plans and executes all NATO military operations.

    In peacetime, military forces working with NATO conduct training exercises across Eastern Europe and other places to help reassure allies about the strength of the military coalition – and to deter potential aggressors, like Russia.

    NATO has a relatively small annual budget of around US$3.6 billion. The U.S. and Germany are the largest contributors to this budget, each responsible for funding 16% of NATO’s costs each year.

    Separate from NATO’s annual budget, in 2014 NATO members agreed that each country should spend the equivalent of 2% of its gross domestic product on its own national defense. Twenty-two of NATO’s 31 members with military forces were expected to meet that 2% threshold as of April 2025.

    Although NATO is chiefly a military alliance, it has roots in the mutual economic interests of both the U.S. and Europe.

    Europe is the United States’ most important economic partner. Roughly one-quarter of all U.S. trade is with Europe – more than the U.S. has with Canada, China or Mexico.

    Over 2.3 million American jobs are directly tied to producing exports that reach European countries that are part of NATO.

    NATO helps safeguard this mutual economic relationship between the U.S. and Europe. If Russia or another country tries to intimidate, dominate or even invade a European country, this could hurt the American economy. In this way, NATO can be seen as the insurance policy that underwrites the strength and vitality of the American economy.

    The heart of that insurance policy is Article 5, a mutual defense pledge that member countries agree to when they join NATO.

    Article 5 says that an armed attack against one NATO member is considered an attack against the entire alliance. If one NATO member is attacked, all other NATO members must help defend the country in question. NATO members have only invoked Article 5 once, following the Sept. 11, 2001, attacks in the U.S., when the alliance deployed aircraft to monitor U.S. skies.

    A wavering commitment to Article 5

    Trump has questioned whether he would enforce Article 5 and help defend a NATO country if it is not paying the required 2% of its gross domestic product.

    NBC News also reported in April 2025 that the U.S. is likely going to cut 10,000 or more of the nearly 85,000 American troops stationed in Europe. The U.S. might also relinquish its top military leadership position within NATO, according to NBC.

    Many political analysts expect the U.S. to shift its national security focus away from Europe and toward threats posed by China – specifically, the threat of China invading or attacking Taiwan.

    At the same time, the Trump administration appears eager to reset relations with Russia. This is despite the Russian military’s atrocities committed against Ukrainian military forces and civilians in the war Russia began in 2022, and Russia’s intensifying hybrid war against Europeans in the form of covert spy attacks across Europe. This hybrid warfare allegedly includes Russia conducting cyberattacks and sabotage operations across Europe. It also involves Russia allegedly trying to plant incendiary devices on planes headed to North America, among other things.

    President Joe Biden speaks during a NATO summit in Washington in July 2024.
    Roberto Schmidt/AFP via Getty Images

    A shifting role in Europe

    The available evidence indicates that the U.S. is backing away from its role in Europe. At best – from a European security perspective – the U.S. could still defend European allies with the potential threat of its nuclear arsenal. The U.S. has significantly more nuclear weapons than any Western European country, but it is not clear that this is enough to deter Russia without the clear presence of large numbers of American troops in Europe, especially given that Moscow continues to perceive the U.S. as NATO’s most important and most powerful member.

    For this reason, significantly downsizing the number of U.S. troops in Europe, giving up key American military leadership positions in NATO, or backing away from the alliance in other ways appears exceptionally perilous. Such actions could increase Russian aggression across Europe, ultimately threatening not just European security but America’s as well.

    Maintaining America’s leadership position in NATO and sustaining its troop levels in Europe helps reinforce the U.S. commitment to defending its most important allies. This is the best way to protect vital U.S. economic interests in Europe today and ensure Washington will have friends to call on in the future.

    John Deni does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Europeans are concerned that the US will withdraw support from NATO. They are right to worry − Americans should, too – https://theconversation.com/europeans-are-concerned-that-the-us-will-withdraw-support-from-nato-they-are-right-to-worry-americans-should-too-253907


  • MIL-OSI Global: Why some towns lose local news − and others don’t

    Source: The Conversation – USA – By Abby Youran Qin, Ph.D. candidate at School of Journalism & Mass Communication, University of Wisconsin-Madison

    Five elements determine which towns lose their papers and which ones beat the odds. Hans Henning Wenk/Getty Images

    Why did your hometown newspaper vanish while the next town over kept theirs?

    This isn’t bad luck − it’s a systemic pattern. Since 2005, the United States has lost over one-third of its local newspapers, creating “news deserts” where corruption is more likely to spread and communities may become politically polarized.

    My research, published in Journalism & Mass Communication Quarterly, analyzes the factors behind the decline of local newspapers between 2004 and 2018. It identifies five key drivers − ranging from racial disparity to market forces − that determine which towns lose their papers and which ones beat the odds.

    1. Newspapers follow the money, not community needs

    You might expect news media to gravitate toward areas where their work is needed most − communities experiencing population growth or facing systemic challenges. But in reality, newspapers, like any business, tend to thrive where the financial resources are greatest.

    My analyses suggest that local newspapers survive where affluent subscribers and deep-pocketed advertisers cluster. That means wealthy white suburbs keep their watchdogs, while low-income and diverse communities lose theirs.

    When police brutality spikes, when welfare offices deny claims, when local officials divert funds − these are the moments when communities need their journalists the most.

    Bertram de Souza works on a story for The Vindicator newspaper in Youngstown, Ohio, on Aug. 7, 2019. The 150-year-old paper shut down later that month because of financial struggles.
    Tony Dejak, AP Photos

    Poor and racially diverse communities often face the harshest policing and interact more with street-level bureaucrats than wealthier citizens. That makes them more vulnerable to government corruption and misconduct. Yet, these same communities are the first to lose their newspapers, because there are no luxury real estate agencies buying ads, and few residents can afford the monthly subscriptions.

    Without journalistic scrutiny, scholars find that mismanagement flourishes, corruption costs balloon, and the communities most vulnerable to abuse receive the least accountability. This is how news deserts exacerbate inequality.

    2. Newspapers don’t adequately serve diverse communities

    Picture this: A newsroom sends its reporters, most of whom are white, to a Black neighborhood − but only after reports of gunshots or building fires. Residents, still in shock, don’t want to talk. So journalists call the same three community leaders they always quote, run the tragic story and disappear until the next crisis. This approach, often referred to as “parachute journalism,” results in shallow coverage that paints the community in a negative light while overlooking its complexities.

    Year after year, the pattern repeats. The only time residents see their neighborhood in the paper is when something terrible happens. No feature story of the family-owned restaurant celebrating its 20-year anniversary, no reporter at the town hall when the new police chief gets grilled about stop-and-frisk − just the constant drumbeat of crime and crisis.

    Is it any wonder racially diverse communities stop trusting and paying for that paper? Not when many working-class families of color can barely afford to add a newspaper subscription to their bills.

    Diverse neighborhoods get hit twice. First, their local papers inadequately represent them. Then, when people understandably turn away, subscriptions drop, advertisers pull back and the outlets shut down, leaving whole communities without a voice.

    Only in recent years have more media outlets begun to make a concerted effort to engage with and reflect the communities they serve. However, such efforts are often led by newer media organizations with fresh ideologies, while many long-standing media outlets remain stuck in traditional reporting practices, as illustrated in Jacob Nelson’s “Imagined Audiences.” Although my analyses of local newspaper decline from 2004 to 2018 paints a frustrating picture, the emerging trend of community-oriented journalism holds promise for positive changes in diverse communities.

    3. Population growth doesn’t always save newspapers

    It’s easy to assume that more people = more readers = healthier news organizations. But my research tells a different story: Counties with larger population growth actually saw greater declines in local newspapers.

    The catch lies in who is moving in: Population growth saves papers only when it comes with wealth. Affluent newcomers bring subscriptions and advertisers’ attention. But growth driven by high birth rates, typically seen in less developed areas with more racial and ethnic minorities, doesn’t translate to revenue. In short, growth alone isn’t enough − it’s the type of growth, and the economic power behind it, that matters.

    This highlights the fragility of market-dependent journalism. The news gap experienced by fast-growing communities may persist where local journalism depends primarily on traditional advertising and subscription revenues rather than diversified revenue sources such as grants and philanthropic donations. The latter, which often focus on community needs rather than profit potential, are more likely to help sustain journalism in areas with significant population growth.

    Local news sources help residents hold their elected officials accountable.
    Jim Mone/AP Photos

    4. Neighbors’ newspapers can save yours

    You’d think that competition between newspapers would be a cutthroat affair. But in an era of decline, my analyses reveal a counterintuitive truth: Your town’s paper actually has better odds when nearby communities keep theirs.

    Rather than competing, neighboring papers often become allies, sharing breaking news, splitting investigative costs and attracting advertisers who want regional reach. While this collaboration can sometimes cause papers to lose their local identity, having some local journalism is still better than none. It ensures some level of accountability, even if the news isn’t as focused on each town’s unique needs.

    Resilient local journalism clusters together. When one paper invests in original reporting, its neighbors often benefit too. When regional businesses support multiple outlets, the entire news ecosystem becomes more sustainable.

    5. Left or right? Local papers die either way

    In this highly polarized era, it turns out that there’s no significant link between a county’s partisan makeup and its ability to keep newspapers.

    Urban hubs such as Chicago keep robust media thanks to dense populations and corporate advertisers, not because they vote for Democrats. Meanwhile, newspapers in conservative rural areas can survive by cultivating loyal readerships within their communities.

    In contrast, communities with lower income and a diverse population lose outlets no matter whether they are red, blue or purple.

    Partisan battles might dominate national headlines, but local journalism’s survival hinges on practical factors such as money and market size. Saving local news isn’t a left vs. right debate − it’s a community issue that requires nonpartisan solutions.

    Abby Youran Qin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why some towns lose local news − and others don’t – https://theconversation.com/why-some-towns-lose-local-news-and-others-dont-252155


  • MIL-OSI Global: Mountain chickadee chatter: Scientists are decoding the songbird’s complex calls

    Source: The Conversation – USA – By Sofia Marie Haley, Ph.D. Student in Cognitive Ecology, University of Nevada, Reno

    Mountain chickadees are unusual in having more complex calls than songs. Vladimir Pravosudov

    I approach a flock of mountain chickadees feasting on pine nuts. A cacophony of sounds, coming from the many different bird species that rely on the Sierra Nevada’s diverse pine cone crop, fills the crisp mountain air.

    The strong “chick-a-dee” call sticks out among the bird vocalizations. The chickadees are communicating to each other about food sources – and my approach.

    Mountain chickadees are a member of the family Paridae, which is known for its complex vocal communication systems and cognitive abilities. Along with my advisers, behavioral ecologists Vladimir Pravosudov and Carrie Branch, I’m studying mountain chickadees at our study site in Sagehen Experimental Forest, outside of Truckee, California, for my doctoral research. I am focusing on how these birds convey a variety of information with their calls.

    The chilly autumn air on top of the mountain reminds me that it will soon be winter. It is time for the mountain chickadees to leave the socially monogamous partnerships they formed while raising their chicks and join larger flocks. Forming social groups is not always simple; young chickadees are joining new flocks, and social dynamics need to be established before the winter storms arrive.

    I can hear them working this out vocally. There’s an unusual variety of complex calls, with melodic “gargle calls” at the forefront, coming from individuals announcing their dominance over other flock members.

    Examining and decoding bird calls is becoming an increasingly popular field of study, as scientists like me are discovering that many birds – including mountain chickadees – follow systematic rules to share important information, stringing together syllables like words in a sentence.

    Sofia Haley describes how she records chickadee vocalizations in the forest.

    Songs vs. calls

    For social animals, communication is a crucial part of everyday life. Communication can come in the form of visual, chemical, tactile, electrical or vocal signals.

    Birds are highly vocal, often relying on vocal communication to effectively interact with their environments and flock members. Temperate songbirds, including cardinals, bluebirds, wrens and blackbirds, have two main categories of vocalizations: songs and calls.

    Songs are vocalizations that are used primarily in the spring, during breeding season. Males in temperate regions sing to attract females and defend territories.

    Calls are basically any vocalization that is not a song. This category includes a limitless variety of vocalizations that communicate all sorts of essential information.

    Most songbird species have complex songs and fairly simple calls. This is why vocalizations sound most melodic during the spring, when birds are attracting mates and breeding.

    Members of the Pravosudov lab catch and release resident chickadees to attach identifying bands that allow the researchers to track individual birds.
    Sofia Haley

    However, chickadees are unusual in that they sing very simple songs relative to the complexity of their calls. Research suggests this is largely due to their social structure and complex environments. Living in flocks for the majority of the year means they need an elaborate communication system year-round. This is in contrast to many other songbird species that are more solitary during the nonbreeding season.

    Scientists know quite a lot about birdsong: It is highly organized and composed of multiple units that are strung together into “phrases,” like how musical notes are strung together in a song.

    Some species manipulate their song to sound more impressive, by incorporating new elements or performing impressive acoustic feats through note modification – imagine a trill or an impressive high note.

    Some songbirds must learn their songs from their parents and other adult males during a sensitive period in the first several months of their lives – much as human children must learn to speak from adults during an early sensitive period.

    In contrast, we know relatively little about the structure and organization of complex calls. Scientists have often regarded calls as unexciting and simple compared with birdsong. However, calls are arguably the most important type of vocalization, at least for highly social bird species.

    Translating mountain chickadee calls

    A focal microphone allows researchers to record the call of one bird at a time.
    Sofia Haley

    I spend my days out at our field site in the beautiful Sierra Nevada, following and recording chickadees as they communicate with each other. I have taken numerous focal recordings, where I stand in the forest with a directional microphone, identifying vocalizations and behaviors in real time.

    I also have hundreds of hours of recordings taken by automated recording devices called AudioMoths. These allow me to record vocalizations in the absence of people.

    The extensive vocal repertoire of mountain chickadees has yet to be fully documented. There are five basic categories of call types:

    • Contact calls: communicate identity, sort of like a name, and location.
    • “Chick-a-dee” calls: coordinate flock movement and communicate a variety of complex information about the environment, from food availability to predator presence and type.
    • Alarm calls: alert others of the presence of a predator.
    • Begging calls: used by chicks or females to elicit feeding behavior from males.
    • Gargle calls: advertise dominance over other individuals in a flock, primarily used by males.

    “Chick-a-dee” calls contain several elements resembling the basic elements of human grammar. Essentially, the various sounds a chickadee utters mean different things, similar to words in human languages. And the way that a chickadee combines these sounds changes the meaning. Word order matters, just like grammar matters in human language. If a chickadee were to phrase its calls in the wrong note order, the call would no longer convey the same meaning, even if composed of the same elements.

    The “chick-a-dee” call of the mountain chickadee contains six elements, known as notes or syllables, that can be combined in hundreds of unique combinations to say many different things. These elements are labeled A, A/B, B, C, D and Dh.

    Although scientists don’t fully know the meaning of each note in different contexts, it is generally believed that A notes typically contain identifying information about how important the topic seems to the caller, while A/B and B notes tend to further inform the listener of the topic of conversation. C notes contain information about the subject of the call, often a food source, and D notes convey information about the excitement and urgency of the message, including level of threat of a spotted predator or size of a food source. The D notes basically function like exclamation points at the end of a sentence, while the other notes convey more specific information.

    Mountain chickadees can use their “chick-a-dee” calls to convey hundreds of different phrases that are relevant to navigating their habitats and social environments. As a hypothetical example, a mountain chickadee call might have the following syntax: A-A-A/B-B-D-D, which could roughly translate to something like, “Listen to me carefully (A-A): there is a predator (A/B) close by (B) and a medium threat level (D-D).”

    If the note order were switched to D-A-B-D-A/B-A, the sentence would look more like: “Noteworthy listen close by noteworthy predator listen to me.” Although all the same elements are there, this sentence is now much more difficult to comprehend. Notes that are out of order can confuse chickadees, preventing them from grasping the correct meaning of the call.

    This “translation” is an example based on what we have learned from playback experiments, but the exact meaning will depend on the specific population and surrounding environment.
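    The ordering constraint described above can be sketched as a tiny rule-check. This is purely illustrative: the note labels A, A/B, B, C and D come from the article, but the rank table, the "never go backward" rule, and the function itself are toy assumptions, not a published formalism of chickadee syntax (the Dh note is omitted because its position in the order isn’t specified in the text).

    ```python
    # Toy model: does a "chick-a-dee" call list its notes in the canonical
    # A -> A/B -> B -> C -> D order described in the article? Notes may
    # repeat, but (in this sketch) their rank must never decrease.
    CANONICAL_RANK = {"A": 0, "A/B": 1, "B": 2, "C": 3, "D": 4}

    def follows_canonical_order(notes):
        """Return True if each note's rank is >= the previous note's rank."""
        ranks = [CANONICAL_RANK[n] for n in notes]
        return all(a <= b for a, b in zip(ranks, ranks[1:]))

    # The article's well-formed example call:
    print(follows_canonical_order(["A", "A", "A/B", "B", "D", "D"]))  # True
    # The scrambled version from the text:
    print(follows_canonical_order(["D", "A", "B", "D", "A/B", "A"]))  # False
    ```

    The same six elements pass or fail depending only on their order – which is the sense in which “grammar matters” for these calls.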

    Analyzing the ‘chick-a-dee’ calls

    Back in the lab, I parse through the endless hours of recordings using a deep-learning algorithm that I have modified to identify the specific calls of our chickadee population.

    A spectrogram visualizes a chickadee call, with frequency on the vertical axis and time on the horizontal axis.
    Sofia Haley

    I then use Raven Pro software, developed by the Cornell Lab of Ornithology, to visually inspect and analyze these calls on a spectrogram: a visual representation of sound, with frequency on the vertical axis, and time on the horizontal axis. This visualization allows me to study the structure of calls in great detail.
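    For readers curious what a spectrogram actually computes, the idea can be sketched in a few lines of pure Python: slide a short window along the audio samples and take the magnitude of each discrete-Fourier-transform frequency bin in that window. Raven Pro and real analysis pipelines are far more sophisticated (windowing functions, overlap, dB scaling); the window length and hop size here are arbitrary toy values.

    ```python
    import cmath
    import math

    def spectrogram(samples, window=8, hop=4):
        """Toy short-time Fourier transform. Each row is one time frame
        (horizontal axis of the plot); each column is one non-negative
        frequency bin (vertical axis)."""
        frames = []
        for start in range(0, len(samples) - window + 1, hop):
            chunk = samples[start:start + window]
            frame = []
            for k in range(window // 2 + 1):  # non-negative frequencies only
                coeff = sum(x * cmath.exp(-2j * math.pi * k * n / window)
                            for n, x in enumerate(chunk))
                frame.append(abs(coeff))
            frames.append(frame)
        return frames

    # A pure tone whose frequency lands in DFT bin 2 of an 8-sample window:
    tone = [math.cos(2 * math.pi * 2 * n / 8) for n in range(16)]
    grid = spectrogram(tone)
    print(len(grid), len(grid[0]))  # 3 time frames, 5 frequency bins
    ```

    For this pure tone, the energy concentrates in a single frequency bin in every frame – the same horizontal band a chickadee whistle traces across a real spectrogram.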

    Studying spectrograms can get me only so far. The next step is to experimentally test different “chick-a-dee” calls out in the wild. Using audio editing software, I manipulate the syntax of calls to either follow grammatical rules or violate them. Then, I broadcast these manipulated recordings out in the forest and observe how our chickadees react to grammatically incorrect calls, which would sound like gibberish to them.

    Audio editing software allows researchers to mix up the order of a chickadee’s call in order to see how birds react to the garbled message.
    Sofia Haley

    My hope is that this combination of experimental testing of calls and careful visual analysis will provide a step toward understanding the subtle complexities of chickadee communication. I’m trying to home in on the meaning of different syllables and syntax, the grammatical rules.

    Back in the forest with my directional microphone, watching the chickadees flit about, I hear different versions of the “chick-a-dee” calls. Some feature more D notes, which would indicate a higher level of excitement. Others feature more A, B or C notes, communicating more specific, identifying information. I am also surrounded by melodic gargle calls, harsh scolding calls and barely audible soft calls.

    Next time you find yourself out in the forest, stop and listen to the chickadees as they talk to each other. Maybe you’ll be able to hear the variation in their calls and know that they are talking about different things − and that grammar matters.

    Sofia Marie Haley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Mountain chickadee chatter: Scientists are decoding the songbird’s complex calls – https://theconversation.com/mountain-chickadee-chatter-scientists-are-decoding-the-songbirds-complex-calls-247091


  • MIL-OSI Global: IDF firing ‘warning shots’ near diplomats sets an unacceptable precedent in international relations

    Source: The Conversation – UK – By Andrew Forde, Assistant Professor – European Human Rights Law, Dublin City University

    A still from footage of the incident when ‘warning shots’ were fired above visiting diplomats in Jenin on May 21. X (Twitter)

    The Israel Defense Forces (IDF) appears to have “crossed the Rubicon” in the West Bank town of Jenin, when it opened fire in the vicinity of a group of visiting diplomats on May 21 – in flagrant violation of international law. The group of diplomats representing 31 countries – including Ireland, UK, France, Germany, Italy, Egypt, Russia and China – were on an official mission organised by the Palestinian Authority to observe the humanitarian situation there.

    They were giving media interviews when IDF troops fired what they later referred to as “warning shots” over their heads, forcing them to run for cover. The shots came despite the visit having been flagged and coordinated in advance with both the Palestinian Authority and the IDF, which has effective control over the area.

    Jenin has long been a flash point in the Israeli-Palestinian conflict. Much of its population is descended from Palestinian refugees of the 1948 war, and the town has seen both Israeli occupation and active Palestinian resistance.

    The international community’s reaction to the warning shots incident – in particular, by those states whose diplomatic officials were directly involved – was one of swift and widespread outrage. The high representative of the European Union for foreign affairs and security policy, Kaja Kallas, called for a full investigation into the incident, and for those responsible to be held accountable. “Any threats on diplomats’ lives are not acceptable,” she said.

    The Palestinian foreign ministry accused Israel of having “deliberately targeted with live fire an accredited diplomatic delegation”.

    Israel acknowledged the incident and triggered an initial investigation, but downplayed its significance. A spokesman for the IDF said it “regrets the inconvenience caused” by the incident. But its statement went on to effectively justify the action, arguing that the diplomats had “deviated from the approved route” by entering a restricted area – leading to IDF soldiers firing warning shots into the air.

    Such a response doesn’t remotely correspond to the seriousness of the situation, and Israel is perfectly aware of this.

    International law and diplomats

    Diplomats carry out functions on behalf of the country they represent. They are the eyes, ears and voice of their country, called upon to pursue legitimate diplomatic activities. The protections afforded to individual diplomats must therefore be seen in the context of broader and longer-term diplomatic relations between states.

    To carry out diplomatic functions effectively, those individuals must be allowed to perform their functions without hindrance, coercion or harassment from any country that hosts their delegations. These customary rules are thousands of years old, and have been codified in international law through the Vienna convention on diplomatic relations – to which Israel is a signatory.

    That convention provides for diplomatic inviolability, immunity from criminal, civil and administrative jurisdiction, and freedom from detention or arrest. It also affords diplomatic staff the right to freedom of movement and free communications.

    Most importantly for this case, article 29 of the convention states that the host state “shall take all appropriate steps to prevent any attack on [their] person, freedom or dignity”.

    Firing warning shots in the vicinity of diplomats, even if done in error or without ill-intent, represents a serious threat to the person and their dignity. As such, it constitutes a flagrant abdication of Israel’s duty to protect them.

    Moreover, the firing of warning shots in Jenin immediately interrupted the diplomatic work there, and as such can be seen as an attempt to intimidate or limit the efficient and effective performance of diplomatic functions on behalf of their governments.

    Need for accountability

    Any use of force against diplomats, even indirect, is incompatible with the principles of diplomatic immunity enshrined in international law. The onus is on the host state to ensure the safety and inviolability of diplomatic personnel.

    And this duty of care is not diminished in situations of conflict. On the contrary, states have a special duty in times of conflict to protect diplomats and preserve diplomatic channels of communication.

    Israel’s actions in firing above these diplomats may or may not have been deliberate. But they had an intimidatory effect, which undermines the foundational principles of international relations. In a climate where Israel’s courts have effectively endorsed a media blackout in conflict-affected regions, the role of diplomats is indispensable.

    The entire system of diplomatic relations relies on the presumption that diplomats can carry out their functions freely and effectively. Diplomatic protections work effectively when they are reciprocal. Without trust, the system quickly unravels.

    It would be wrong to suggest this act may have tipped the balance of international opinion against Israel, when you consider the 19 months of violence in Gaza. The killing by the IDF of vast numbers of civilians (including thousands of women and children), the seeming use of starvation as a weapon of war, and the destruction of vast swaths of Gaza have rightly attracted growing international condemnation.

    On May 19, Britain, France and Canada – staunch allies of Israel – said they will “not stand by”, and would take “concrete actions” if the military offensive is not halted and humanitarian aid is not delivered to the people of Gaza.

    But threatening diplomats – even if not actively shooting at them – is an egregious breach of trust under the laws of diplomatic relations, which requires a meaningful apology and effective investigation. Those responsible for giving the orders to fire the “warning shots” need to be held accountable for that decision.

    Andrew Forde is affiliated with Dublin City University (Assistant Professor, European Human Rights Law).

    He is also, separately, affiliated with the Irish Human Rights and Equality Commission (Commissioner).

    ref. IDF firing ‘warning shots’ near diplomats sets an unacceptable precedent in international relations – https://theconversation.com/idf-firing-warning-shots-near-diplomats-sets-an-unacceptable-precedent-in-international-relations-257488

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump v Harvard: why this battle will damage the US’s reputation globally

    Source: The Conversation – UK – By Thomas Gift, Associate Professor and Director of the Centre on US Politics, UCL

    Harvard University is suing the Trump administration over its unprecedented attempt to bar international students from its campus. The latest salvo is that the administration has said it is cancelling all federal funds, totalling US$100 million (£73.8 million). Although a federal judge has temporarily blocked the order to ban foreign students, many observers are rightly expressing deep concern about the global ramifications of the battle for the reputation of the US.

    The story hits home for me. Every year for the last decade, I’ve taught a course on globalisation in the Harvard summer school. Although 27% of Harvard’s student body is international, my course – due to its topical focus – draws a disproportionate number of international students, many from emerging economies.

    As I know firsthand, these students contribute enormously to the classroom experience. Their insights, shaped by distinct national contexts, enliven discussion and further understanding for everyone — international and domestic students alike. Without them, the classroom isn’t just quieter; it’s poorer in perspective.

    Yet my concern with Trump’s latest attempt to put a political target on Harvard’s back extends beyond international students. For centuries Harvard and countless other leading US institutions of higher learning have welcomed international students to their campuses. This isn’t purely a selfless act. These students are a boon to the US at home and abroad. Here’s why.

    1. Spreading democracy

    Universities aren’t just a key economic driver for the United States. They’re also a reflection of its democratic values. Students who attend Harvard and similar universities, especially those from outside advanced, Organisation for Economic Co-operation and Development (OECD) democracies, often return to their native countries after they’ve received their diplomas, poised to make a difference in national politics.

    My own research suggests this can help to promote democracy in autocratic parts of the world. Because of how they’re socialised both inside and outside the classroom, students who attend western universities and go on to become national leaders are more likely to embrace democratic values, and highly educated leaders also tend to increase economic growth.

    Personal connections that they’ve forged in the west also bind them into international networks that are pro-democracy.

    An attack on Harvard will also damage its soft power, say some.

    Consider one example: Ellen Johnson Sirleaf, the former president of Liberia, attended the Harvard Kennedy School, then went on to serve as head of her country from 2006 to 2018. As the first female elected head of state in Africa, Sirleaf proceeded to win the Nobel peace prize in 2011 for her “non-violent efforts to promote peace and her struggle for women’s rights”.

    2. Projecting soft power

    The best universities in the US are also a crucial component of what the late Harvard political scientist Joseph Nye called “soft power”. Soft power is about how western nations such as the US influence the world through attraction and persuasion rather than bullets and tanks. It’s an approach to foreign relations that projects US culture by winning “hearts and minds”.

    Harvard isn’t just one of the leading brands in US higher education, it’s one of the US’s leading brands. More generally, American universities dominate the international league tables, such as the QS World Rankings, where ten of the top 25 schools are US-based. That makes universities key ambassadors for the US.

    There’s a reason why Harvard attracts students from more than 140 countries. Its reputation for academic excellence, combined with its world-leading research in areas as diverse as curing neurodegenerative diseases and improving economic mobility, makes it a magnet for students angling to test their mettle against the best and brightest.

    3. Driving the US economy

    Many international graduates of top US universities go on to become entrepreneurs or to pursue careers in cutting-edge fields at companies such as Apple, Google and Meta, filling jobs for which there’s a shortage of talent in the US labour market.

    The upper echelons of the executive class in the US are also filled with leaders who were once international students in the US. Tesla CEO Elon Musk, who studied at the University of Pennsylvania, and Microsoft CEO Satya Nadella, who studied at the University of Chicago, are two prominent examples.

    According to a report from the National Foundation for American Policy, approximately 25% of US firms worth at least a billion dollars had a founder who enrolled at a US university as an international student.

    It’s also worth noting that international students typically pay much higher tuition fees than US students. These dollars subsidise academic and student programming for domestic students, enabling places like Harvard to maintain the high standards for which they’re renowned.

    Recent data from the Association of International Educators show that international students at US colleges and universities “contributed US$43.8 billion to the US economy during the 2023-2024 academic year and supported more than 378,000 jobs”.

    But, says the Economist’s US editor John Prideaux, this whole battle is really about power. “If you stand up to the Trump administration they will come after you.”

    Former Harvard president Larry Summers has said that the ban on international students “would be devastating…, not just for the university but for the image of the United States in the world, where our universities in general, and Harvard in particular, have been a beacon”.

    The reputation of US universities around the world is especially vital today as Trump’s “America first” foreign policy signals a descent into belligerent isolationism. As the US retreats from the world and its president attacks multilateral institutions and shows a lack of respect for allies, this latest tussle with Harvard could erode the US’s international image even further.

    Thomas Gift teaches an annual course in the Harvard Summer School, and worked full-time at the Harvard Kennedy School in 2015-16.

    ref. Trump v Harvard: why this battle will damage the US’s reputation globally – https://theconversation.com/trump-v-harvard-why-this-battle-will-damage-the-uss-reputation-globally-257512

    MIL OSI – Global Reports

  • MIL-OSI Global: Why Alberta’s push for independence pales in comparison to Scotland’s in 2014

    Source: The Conversation – Canada – By Piers Eaton, PhD Candidate in Political Science, L’Université d’Ottawa/University of Ottawa

    One day after the Liberal Party secured their fourth consecutive federal election victory, Alberta Premier Danielle Smith tabled legislation to change the signature threshold needed to put citizen-proposed constitutional questions on the ballot. She lowered it from the current 600,000 signatures to 177,000.

    Since the pro-independence Alberta Prosperity Project already claims to have 240,000 pledges in support of an Albertan sovereignty referendum, the change clears a path to a separation referendum.

    In 2014, Scottish voters went to the polls on a question similar to the one proposed by the Alberta Prosperity Project, asking whether they wanted to regain their independence from Britain. Although the Scottish “Yes” campaign was defeated, it garnered 45 per cent of the vote, far exceeding what most thought was possible at the start of the campaign.

    The 2014 Scottish referendum injected a huge amount of enthusiasm into the Scottish separatist parties, with the largest, the Scottish National Party (SNP) — which led the fight for the Yes side — soaring from 20,000 members in 2013 to more than 100,000 months after the referendum.

    While the Yes campaign did not achieve its goals and the Scottish historical context is very different from Alberta’s, there are still important lessons about how people can be won over to the cause of independence. Albertan separatists don’t seem to be heading down the same path.

    Timeline

    Smith has suggested that if the necessary signatures were collected, she would aim to hold a referendum in 2026. But the Alberta Prosperity Project’s Jeffrey Rath suggested the group would push Smith to allow a referendum before the end of 2025, giving the referendum a maximum of seven months of official campaigning.

    The broad ground rules of the Scottish referendum were established in the Edinburgh Agreement in October 2012. In March 2013, the SNP-led Scottish government announced the date of the independence referendum — Sept. 18, 2014. The long campaign period allowed a wide variety of grassroots campaign groups to organize in favour of independence.

    While Alberta separatism is less likely to be buoyed by artist collectives and Green Party activists like Scottish independence was, a longer independence campaign would allow a variety of members of Albertan society to make the case for independence.

    Dennis Modry, a co-leader of the Alberta Prosperity Project, recently told CBC News that the initial signature threshold of 600,000 was not all bad, as it would “get (us) closer to the referendum plurality as well.” That remark suggested Modry sees value in having more time to campaign before a referendum is held.

    In this regard, he and Rath seem to be sounding different notes.

    Leadership

    Hints that the Alberta Prosperity Project is already divided raise broader questions of leadership. In 2014, the Scottish Yes side had a clear and undisputed leader — First Minister Alex Salmond, head of the SNP.

    The late Salmond led the SNP to back-to-back electoral victories in Scotland, including the only outright majority ever won in the history of the Scottish parliament in 2011.

    Salmond was able to speak in favour of independence in debates and to answer, with democratic legitimacy, specific questions about what the initial policy of an independent Scotland would be.

    The SNP government published a report, Scotland’s Future, that systematically sought to assuage skeptics. Its “frequently asked questions” (FAQ) section answered 650 potential questions about independence. The Alberta Prosperity Project, on the other hand, only answers 74 questions in its FAQ.

    Whereas Salmond’s rise to the leadership of the Scottish independence movement took place in full public view and according to party rules, the Alberta Prosperity Project’s leadership structure is far murkier.

    The organization claims there “is no prima facie leader of the APP, but there (is) a management team which is featured on the website https://albertaprosperityproject.com/about-us/.” Follow that link, however, and no names or management structures are listed.

    Clarity and democracy

    While independence always involves some unknowns, clear leadership can provide answers about where a newly independent nation might find stability. The Yes Scotland campaign promised independence within Europe, meaning Scotland would retain access to the European Union’s common market.

    By contrast, the Alberta Prosperity Project isn’t clear on the fundamental question of whether a sovereign Alberta should remain independent or attempt to join the United States as its 51st state.

    Despite the claim on its website that “the objective of the Alberta Prosperity Project is for Alberta to become a sovereign nation, not the 51st state of the USA,” the organization backed Rath’s recent trip to Washington, D.C. to gauge support for Albertan integration into the U.S.

    Rath has also said that becoming a U.S. territory is “probably the best way to go.”

    Rath in an interview with Rachel Parker, an Alberta-based independent journalist. (Rachel Parker’s YouTube channel)

    The 2014 referendum in Scotland was called a “festival of democracy”, and even anti-independence forces agreed the referendum had been good for democracy.

    It took time and leadership to put forward a positive case for independence, one that voters could decide upon with confidence.

    Alberta could learn from Scotland and strengthen its democracy by holding a referendum based on legitimate leadership, reasonable timelines, diverse voices and clear aims. Or it could lurch into a rushed campaign, with divided leaders of dubious legitimacy, arguing for unclear outcomes — and end up, no matter which side wins, weakening its democracy in the process.

    Piers Eaton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why Alberta’s push for independence pales in comparison to Scotland’s in 2014 – https://theconversation.com/why-albertas-push-for-independence-pales-in-comparison-to-scotlands-in-2014-256838

    MIL OSI – Global Reports

  • MIL-OSI Global: For opioid addiction, treatment underdosing can lead to fentanyl overdosing – a physician explains

    Source: The Conversation – USA – By Lucinda Grande, Clinical Associate Professor of Family Medicine, University of Washington

    Buprenorphine is most effective when doctors and patients find the right dose together. AP Photo/Ted S. Warren

    Imagine a patient named Rosa tells you she wakes up night after night in a drenching sweat after having very realistic dreams of smoking fentanyl.

    The dreams seem crazy to her. Three months ago, newly pregnant, Rosa began visualizing being a good parent. She realized it was finally time to give up her self-destructive use of street fentanyl. With tremendous effort, she started treatment with buprenorphine for her opioid use disorder.

    As hoped, she was intensely relieved to be free from the distressing withdrawal symptoms – restless legs, anxiety, bone pain, nausea and chills – and from the guilt, shame and hardship of living with addiction. But even so, Rosa found herself musing throughout the day about the rewarding rush of fentanyl, which seemed ever more appealing. And she couldn’t escape those dreams at night.

    Rosa asks you, her doctor, for a higher dose of buprenorphine. You consider her request carefully. Your clinic follows the Food and Drug Administration prescribing guideline that has changed very little in over 20 years. It recommends her current prescription – 16 milligrams – as the “target” dose. You are aware of the prevailing view among medical providers that most patients don’t need a dose higher than that. Many believe that patients or others would use the extra pills to get high.

    But after many visits, you feel that you know Rosa well. You believe in her sincerity. She is a responsible 25-year-old with a full-time job who never misses appointments. She now has stable housing with her parents after years of couch surfing. You reluctantly agree and raise her daily dose by one additional 8-milligram pill, totaling 24 milligrams.

    At her next visit, Rosa tells you that the higher dose solved her daytime fentanyl craving, but the nightmares have continued. She would like to try an even higher dose.

    How should you respond? The FDA guideline clearly states there is no evidence to support any benefit above her new dose. You begin to doubt Rosa’s sincerity and your own judgment.

    Harms of low doses

    This hypothetical scenario has played out countless times in the U.S. since 2002, when buprenorphine was first approved as a treatment for opioid use disorder. As a family physician specializing in addiction medicine, I have frequently encountered patients who still experience withdrawal symptoms at the “target dose” and even at the suggested maximum dose of 24 milligrams.

    People like Rosa, plagued by uncontrolled fentanyl craving – either awake or in dreams – are at high risk of leaving treatment and returning to addiction. Yet from 2019 to 2020, only 2% of buprenorphine prescriptions were written for over 24 milligrams.

    Withdrawal symptoms and cravings make staying in recovery difficult.
    iStock/Getty Images Plus

    I was able to help some of those people in my work as co-founder and medical director of a low-barrier clinic, which is a clinic that makes it easier for people to get started with buprenorphine. I asked our clinicians to offer a higher dose when they believed the current one wasn’t meeting the patient’s needs.

    The dose choice may be a life-or-death decision. Increasing it by one more pill – to 32 milligrams – often makes the difference between a patient staying in or leaving treatment. The risk of leaving treatment is particularly significant for the patients we typically see at low-barrier clinics, many of whom face severe life challenges. While patients do sometimes give away or sell extra pills, research consistently shows that illegally obtained pills are most commonly used for self-treatment – to control withdrawal and help quit opioids when treatment is unavailable.

    Medicaid in my state of Washington began paying for prescriptions up to 32 milligrams in 2019. But clinicians may still encounter constraints from other health insurers and at pharmacies. Some states, such as Tennessee, Kentucky and Ohio, have dose restrictions cemented in law.

    Finding the right dose

    The challenge of finding the right treatment dose became more acute for clinicians and patients as fentanyl swept across the country starting in 2013. Fentanyl now dominates the unregulated opioid supply. Fifty times stronger than heroin, fentanyl overwhelms the ability of low doses of buprenorphine to counter its effects.

    Buprenorphine – also known by the brand name Suboxone, which contains a mix of buprenorphine and naloxone – is an opioid medication with the quirk of both activating the brain’s opioid receptors and partially blocking them. It provides just enough opioid effects to prevent withdrawal symptoms and craving while also blocking the reward of euphoria. It relieves pain like other opioids but doesn’t cause breathing to stop. It can reduce the risk of overdose death by as much as 70%.

    In medicine, there is a general concern that too high a dose may have toxic effects. However, as many clinicians and researchers have observed, using too low a dose of some treatments can also lead to harm, including death from patients going back to fentanyl.

    After observing so many patients responding well to higher doses, my colleagues and I looked in the medical literature for more information. We discovered over a dozen reports as far back as 1999 providing evidence that buprenorphine’s benefits steadily increase up to at least 32 milligrams.

    At higher doses, patients stay on treatment longer, use illicit opioids less often, have fewer complications such as hepatitis C, have fewer emergency room visits and hospitalizations, and suffer less from chronic pain. Brain scans show that buprenorphine at 32 milligrams occupies more opioid receptors – over 90% of receptors in some brain regions – compared with lower doses. One study even showed that a high enough dose of buprenorphine can directly prevent fentanyl overdose.

    As illicit opioids become more potent, addiction becomes more deadly – and more urgent to treat.

    Patients with some health conditions may especially benefit from higher doses. During pregnancy, as in Rosa’s case, withdrawal symptoms can grow more intense because of metabolism changes that reduce the blood concentration of most medications. A higher dose may be needed to maintain the level of effects they had before pregnancy. Additionally, I found that the patients in my clinic with chronic pain, post-traumatic stress disorder or longtime opioid use were most likely to find relief at a dose above 24 milligrams.

    The American Society of Addiction Medicine recommends four goals of treatment: suppressing opioid withdrawal, blocking the effects of illicit opioids, stopping opioid cravings and reducing the use of illicit opioids, and promoting recovery-oriented activities.

    Similarly, patients seek a comfortable and effective dose – that is, one that avoids withdrawal symptoms and craving, and allows them to avoid illicit drug use and the associated worry and stress. Many patients also yearn to feel trusted, accepted and understood by their clinician. Achieving that goal requires shared decision-making.

    A clinician can never be sure a patient is meeting all the goals of treatment. But a patient who reports positive life changes – such as stable housing and improved relationships – and reports low or no craving while awake or dreaming will likely be satisfied with the current dose. For a patient who does not make progress with a dose increase to 32 milligrams, the clinician might consider a different treatment plan, such as a 30-day buprenorphine injection, which can provide an even higher dose, or transition to methadone, the other highly effective FDA-approved medication for opioid use disorder.

    The FDA guideline change

    In August 2022, a team of addiction physicians attempted to move the FDA to change dosing guidelines for buprenorphine. They submitted a petition asking for a modernized guideline that based dosing on how a patient responds to buprenorphine – including symptom relief and reduced illicit drug use – rather than a fixed “target” dose. They asked to remove language that incorrectly denied evidence that patients benefited from doses above 24 milligrams.

    The FDA listened. In December 2023, it convened a public meeting with leading addiction clinicians, researchers and policymakers to review the evidence on buprenorphine dosing. The group came to an overwhelming consensus that there was extensive research showing benefit at doses above 24 milligrams. Moreover, they doubted whether the guideline’s dosing conclusions, made before fentanyl infiltrated the drug supply, applied today.

    Treatment is most effective when patients feel their needs are understood.
    Spencer Platt/Getty Images

    Then, the FDA responded. In December 2024, it announced a new buprenorphine recommendation that would not mention a target dose and would not deny the existence of evidence of benefits above 24 milligrams. Only time will tell whether and when the FDA’s new guideline will meaningfully alter prescribing patterns, insurance and pharmacy restrictions, and state laws.

    To maintain the national trend toward lower overdose deaths, the best possible use of each effective treatment is critical. Yet the Trump administration’s proposed cuts to Medicaid – which covers nearly half of all buprenorphine prescriptions – put access seriously at risk. Most people with untreated addiction would be blocked from accessing treatment altogether, let alone at an effective dose or with the behavioral health, social work and recovery support services needed for the best outcomes. Research shows that a sharp reduction in buprenorphine prescriptions occurred following 2023 Medicaid coverage restrictions.

    Opioid use disorder is treatable. Buprenorphine works well and saves lives when given at the right dose. An inadequate dose can directly harm patients who are simply trying to survive and improve their lives.

    Lucinda Grande is a physician and partner at Pioneer Family Practice in Lacey, Washington.

    ref. For opioid addiction, treatment underdosing can lead to fentanyl overdosing – a physician explains – https://theconversation.com/for-opioid-addiction-treatment-underdosing-can-lead-to-fentanyl-overdosing-a-physician-explains-250588

    MIL OSI – Global Reports

  • MIL-Evening Report: Could a bold anti-poverty experiment from the 1960s inspire a new era in housing justice?

    Source: The Conversation (Au and NZ) – By Deyanira Nevárez Martínez, Assistant Professor of Urban and Regional Planning, Michigan State University

    Model Cities staff in front of a Baltimore field office in 1971. Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    In cities across the U.S., the housing crisis has reached a breaking point. Rents are skyrocketing, homelessness is rising and working-class neighborhoods are threatened by displacement.

    These challenges might feel unprecedented. But they echo a moment more than half a century ago.

    In the 1950s and 1960s, housing and urban inequality were at the center of national politics. American cities were grappling with rapid urban decline, segregated and substandard housing, and the fallout of highway construction and urban renewal projects that displaced hundreds of thousands of disproportionately low-income and Black residents.

    The federal government decided to try to do something about it.

    President Lyndon B. Johnson launched one of the most ambitious experiments in urban policy: the Model Cities Program.

    As a scholar of housing justice and urban planning, I’ve studied how this short-lived initiative aimed to move beyond patchwork fixes to poverty and instead tackle its structural causes by empowering communities to shape their own futures.

    Building a great society

    The Model Cities Program emerged in 1966 as part of Johnson’s Great Society agenda, a sweeping effort to eliminate poverty, reduce racial injustice and expand social welfare programs in the United States.

    Earlier urban renewal programs had been roundly criticized for displacing communities of color. Much of this displacement occurred through federally funded highway and slum clearance projects that demolished entire neighborhoods and often left residents without decent options for new housing.

    So the Johnson administration sought a more holistic approach. The Demonstration Cities and Metropolitan Development Act established a federal framework for cities to coordinate housing, education, employment, health care and social services at the neighborhood level.

    New York City neighborhoods designated for revitalization with funding from the Model Cities Program.
    The City of New York, Community Development Program: A Progress Report, December 1968.

    To qualify for the program, cities had to apply for planning grants by submitting a detailed proposal that included an analysis of neighborhood conditions, long-term goals and strategies for addressing problems.

    Federal funds went directly to city governments, which then distributed them to local agencies and community organizations through contracts. These funds were relatively flexible but had to be tied to locally tailored plans. For example, Kansas City, Missouri, used Model Cities funding to support a loan program that expanded access to capital for local small businesses, helping them secure financing that might otherwise have been out of reach.

    Unlike previous programs, Model Cities emphasized what Johnson described as “comprehensive” and “concentrated” efforts. It wasn’t just about rebuilding streets or erecting public housing. It was about creating new ways for government to work in partnership with the people most affected by poverty and racism.

    A revolutionary approach to poverty

    What made Model Cities unique wasn’t just its scale but its philosophy. At the heart of the program was an insistence on “widespread citizen participation,” which required cities that received funding to include residents in the planning and oversight of local programs.

    The program also drew inspiration from civil rights leaders. One of its early architects, Whitney M. Young Jr., had called for a “Domestic Marshall Plan” – a reference to the federal government’s efforts to rebuild Europe after World War II – to redress centuries of racial inequality.

    Civil rights activist Whitney M. Young Jr. helped shape the vision of the Model Cities Program.
    Bettmann/Getty Images

    Young’s vision helped shape the Model Cities framework, which proposed targeted systemic investments in housing, health, education, employment and civic leadership in minority communities. In Atlanta, for example, the Model Cities Program helped fund neighborhood health clinics and job training programs. But the program also funded leadership councils that for the first time gave local low-income residents a direct voice in how city funds were spent.

    In other words, neighborhood residents weren’t just beneficiaries. They were planners, advisers and, in some cases, staffers.

    This commitment to community participation gave rise to a new kind of public servant – what sociologists Martin and Carolyn Needleman famously called “guerrillas in the bureaucracy.”

    A Model Cities staffer discusses the program with a group of students gathered at Denver’s Metropolitan Youth Education Center in 1970.
    Bill Wunsch/The Denver Post via Getty Images

    These were radical planners – often young, idealistic and deeply embedded in the neighborhoods they served. Many were recruited and hired through new Model Cities funding that allowed local governments to expand their staff with community workers aligned with the program’s goals.

    Working from within city agencies, these new planners used their positions to challenge top-down decision-making and push for community-driven planning.

    Their work was revolutionary not because they dismantled institutions but because they reimagined how institutions could function, prioritizing the voices of residents long excluded from power.

    Strengthening community ties

    In cities across the country, planners fought to redirect public resources toward locally defined priorities.

    A mobile dentist office in Baltimore.
    Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    In some cities, such as Tucson, the program funded education initiatives such as bilingual cultural programming and college scholarships for local students. In Baltimore, it funded mobile health services and youth sports programs.

    In New York City, the program supported new kinds of housing projects called vest-pocket developments, which got their name from their smaller scale: midsize buildings or complexes built on vacant lots or underutilized land. New housing such as the Betances Houses in the South Bronx was designed to add density without major redevelopment – a direct response to midcentury urban renewal projects, which had destroyed and displaced entire neighborhoods populated by the city’s poorest residents. Meanwhile, cities such as Seattle used the funds to renovate older apartment buildings instead of tearing them down, which helped preserve the character of local neighborhoods.

    The goal was to create affordable housing while keeping communities intact.

    An Atlanta neighborhood identified as a candidate for street paving and home rehabilitation as part of the Model Cities Program.
    Georgia State University Special Collections

    What went wrong?

    Despite its ambitious vision, Model Cities faced resistance almost from the start. The program was underfunded and politically fragile. While some officials had hoped for US$2 billion in annual funding, the actual allocation was closer to $500 million to $600 million, spread across more than 60 cities.

    Then the political winds shifted. Though designed during the optimism of the mid-1960s, the program started being implemented under President Richard Nixon in 1969. His administration pivoted away from “people programs” and toward capital investment and physical development. Requirements for resident participation were weakened, and local officials often maintained control over the process, effectively marginalizing the everyday citizens the program was meant to empower.

    In cities such as San Francisco and Chicago, residents clashed with bureaucrats over control, transparency and decision-making. In some places, participation was reduced to token advisory roles. In others, internal conflict and political pressure made sustained community governance nearly impossible.

    Critics, including Black community workers and civil rights activists, warned that the program risked becoming a new form of “neocolonialism,” one that used the language of empowerment while concentrating control in the hands of white elected officials and federal administrators.

    A legacy worth revisiting

    Although the program was phased out by 1974, its legacy lived on.

    In cities across the country, Model Cities trained a generation of Black and brown civic leaders in what community development leaders and policy advocates John A. Sasso and Priscilla Foley called “a little noticed revolution.” In their book of the same name, they describe how those involved in the program went on to serve in local government, start nonprofits and advocate for community development.

    It also left an imprint on later policies. Efforts such as participatory budgeting, community land trusts and neighborhood planning initiatives owe a debt to Model Cities’ insistence that residents should help shape the future of their communities. And even as some criticized the program for failing to meet its lofty goals, others saw its value in creating space for democratic experimentation.

    A housing meeting takes place at a local Model Cities field office in Baltimore in 1972.
    Robert Breck Chapman Collection, Langsdale Library Special Collections, University of Baltimore, CC BY-NC-ND

    Today’s housing crisis demands structural solutions to structural problems. The affordable housing crisis is deeply connected to other intersecting crises, such as climate change, environmental injustice and health disparities, creating compounding risks for the most vulnerable communities. Addressing these issues through a fragmented social safety net – whether through housing vouchers or narrowly targeted benefit programs – has proven ineffective.

    Today, as policymakers once again debate how to respond to deepening inequality and a lack of affordable housing, the lost promise of Model Cities offers vital lessons.

    Model Cities was far from perfect. But it offered a vision of how democratic, local planning could promote health, security and community.

    Deyanira Nevárez Martínez is a trustee of the Lansing School District Board of Education and is currently a candidate for the Lansing City Council Ward 2.

    ref. Could a bold anti-poverty experiment from the 1960s inspire a new era in housing justice? – https://theconversation.com/could-a-bold-anti-poverty-experiment-from-the-1960s-inspire-a-new-era-in-housing-justice-253706

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Plea for UN intervention over illegal PNG loggers ‘stealing forests’

    RNZ Pacific

    A United Nations committee is being urged to act over human rights violations committed by illegal loggers in Papua New Guinea.

    Watchdog groups Act Now! and Jubilee Australia have filed a formal request to the UN Committee on the Elimination of Racial Discrimination to consider action at its next meeting in August.

    “We have stressed with the UN that there is pervasive, ongoing and irreparable harm to customary resource owners whose forests are being stolen by logging companies,” Act Now! campaign manager Eddie Tanago said.

    He said these abuses were systematic, institutionalised, and sanctioned by the PNG government through two specific tools: Special Agriculture and Business Leases (SABLs) and Forest Clearing Authorities (FCAs) — a type of logging licence.

    “For over a decade since the Commission of Inquiry into SABLs, successive PNG governments have rubber stamped the large-scale theft of customary resource owners’ forests by upholding the morally bankrupt SABL scheme and expanding the use of FCAs,” Tanago said.

    He said the government had failed to revoke SABLs that were acquired fraudulently, with disregard to the law or without landowner consent.

    “Meanwhile, logging companies have made hundreds of millions, if not billions, in ill-gotten gains by effectively stealing forests from customary resource owners using FCAs.”

    Abuses hard to challenge

    The complaint also highlights that these abuses are hard to challenge because PNG lacks even a basic registry of SABLs or FCAs, and customary resource owners are denied access to the information they need, such as:

    • The existence of an SABL or FCA over their forest;
    • A map of the boundaries of any lease or logging licence;
    • Information about proposed agricultural projects used to justify the SABL or FCA;
    • The monetary value of logs taken from forests; and
    • The beneficial ownership of logging companies — to identify who ultimately profits from illegal logging.

    “The only reason why foreign companies engage in illegal logging in PNG is to make money,” he said, adding that “it’s profitable because importing companies and countries are willing to accept illegally logged timber into their markets and supply chains.”

    ACT NOW campaigner Eddie Tanago . . . “demand a public audit of the logging permits – the money would dry up.” Image: Facebook/ACT NOW!/RNZ Pacific

    “If they refused to take any more timber from SABL and FCA areas and demanded a public audit of the logging permits — the money would dry up.”

    Act Now! and Jubilee Australia hope this UN attention will push the international community to see this is not merely an issue of “less-than-perfect forest law enforcement”.

    “This is a system, honed over decades, that is perpetrating irreparable harm on indigenous peoples across PNG through the wholesale violation of their rights and destroying their forests.”

    This article is republished under a community partnership agreement with RNZ.


  • MIL-OSI Global: 10 years ago Kenya set out to fix gender gaps in education – what’s working and what still needs to be done

    Source: The Conversation – Africa – By Benta A. Abuya, Research Scientist, African Population and Health Research Center

    In 2015, the Kenyan government launched a major push to promote gender equality in and through the education sector. This was guided by principles of equal participation and inclusion of women and men, and girls and boys, in national development.

    The Education and Training Sector Gender Policy aligned with national, regional and global commitments, including the constitution and Sustainable Development Goals 4 on quality education and 5 on gender equality.

    Years later, however, it became clear that the government wasn’t achieving some of the policy’s objectives. Gaps remained in reducing gender inequalities in access, participation and achievement at all levels of education.

    The government decided to review the causes of these challenges and what could be done differently.

    This led to a two-year joint study in partnership with the African Population and Health Research Center. The study began in 2022. Its overall objective was to provide evidence for action on mainstreaming gender issues in basic education in Kenya. Gender mainstreaming generally refers to being sensitive to gender when developing policies and curricula, governing schools, teaching and using learning materials.

    The study specifically aimed to:

    1. examine how the teacher-training curriculum prepares teachers to implement gender mainstreaming strategies within the basic education sector

    2. examine how gender mainstreaming is practised in classrooms during teaching and learning

    3. assess the relationship between teaching practices and students’ attendance, choice of subjects and academic performance

    4. evaluate the availability of institutional policies, practices and guidelines to mainstream gender issues and the extent to which they influence gender mainstreaming in education.

    I’m a gender and education researcher and was part of the team from the African Population and Health Research Center that collected data for the policy review. This data came from 10 counties with high child poverty rates and urban informal settlements. These indicators highlight an inability to access one or more basic needs or services.

    The study involved teacher trainers and trainees. We also spoke to education officials, and learners in primary and secondary schools. We carried out classroom observations, knowledge and attitude surveys, questionnaires, key informant interviews and focus group discussions.




    Read more: 6 priorities to get Kenya’s curriculum back on track – or risk excluding many children from education


    The data showed gaps in teacher training, as well as institutional and teaching practices at the basic education level. Policy wasn’t being carried through in practice.

    The gaps

    Our study found that Kenya needs to review its teacher education curriculum to make it more gender responsive.

    Teachers also need more training to follow practices that are gender responsive. These practices include extending positive reinforcement to girls and boys, maintaining eye contact and allowing learners to speak without interruption.

    Deliberate steps should be taken to ensure that schools and teacher training colleges are gender inclusive in their practices, guidelines and programmes.

    More specifically, our study found:

    • Teacher trainees had a relatively good understanding of gender-equitable teaching and learning practices. But there was a need to place greater importance on this in lesson planning and in supporting girls in science, technology, engineering and mathematics (STEM).

    • Gender mainstreaming is not built into the teacher training curriculum. It isn’t taught as a standalone unit. Teacher trainees learnt about it mainly from general courses, such as child development and psychology, or private training. And teacher trainees were unaware that they were being tested on this.

    • There were no significant gender differences in how teachers in pre-primary and primary school taught boys and girls. At the secondary level, however, teachers engaged boys more than girls during literacy and STEM lessons.

    • At both primary and secondary levels, gender-equitable practices positively influenced learning outcomes in English and STEM subjects. These practices improved academic performance in English at the primary level. They led to improvements in biology, English, mathematics and physics at the secondary level.

    • The odds of school attendance increased if teachers treated boys and girls in equitable ways.

    • The odds of boys selecting chemistry and physics at the secondary level increased if the teacher of the subject was approachable and if the subject was considered applicable to future careers.

    • More than 40% of primary and secondary schools didn’t have guidelines on sexual harassment and gender-based violence for teachers and students. And most of the schools that said they had these guidelines couldn’t provide them to the research team. These guidelines help mainstream gender issues in schools and communities.

    What next

    To advance gender equality, Kenya must move beyond policy awareness. It must be more responsive to gender in teacher training, classroom practices and institutional leadership.

    Our study recommends:

    • creating a positive and inclusive learning environment where both boys and girls feel valued, capable, and motivated to learn

    • teaching gender mainstreaming as a standalone unit, or integrating it into the teaching methodology

    • coaching, mentorship and modelling of best practices to trainee teachers

    • financial support for gender mainstreaming in all areas of teacher education

    • encouraging girls to pursue STEM subjects and careers at an early age through formal mentorship programmes

    • encouraging and empowering women teachers and parents to take up leadership positions in schools to provide role models for students.




    Read more: Kenya’s decision to make maths optional in high school is a bad idea – what should happen instead


    Our findings offer a critical evidence base for the education ministry and other stakeholders. They should put accountability mechanisms in place.

    Only through sustained, data-driven action can Kenya achieve a truly inclusive and equitable education system.

    Benta A. Abuya does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. 10 years ago Kenya set out to fix gender gaps in education – what’s working and what still needs to be done – https://theconversation.com/10-years-ago-kenya-set-out-to-fix-gender-gaps-in-education-whats-working-and-what-still-needs-to-be-done-255400

    MIL OSI – Global Reports

  • MIL-OSI Global: How vitamin B12 deficiency may disrupt pregnant women’s bodies

    Source: The Conversation – UK – By Adaikala Antonysunil, Senior Lecturer in Biochemistry, School of Science and Technology, Nottingham Trent University

    Just Life/Shutterstock

    Despite living in an age of dietary abundance, vitamin B12 deficiency is on the rise.

    One major culprit? Our growing reliance on ultra-processed foods (UPFs) – those convenient, calorie-dense and nutrient-poor products that dominate supermarket shelves. While they might fill us up, they’re fuelling a global epidemic of “hidden hunger”.

    This refers to a lack of essential micronutrients including B12, folate, iron and zinc, even when people consume enough (or too many) calories. It’s often invisible but can have long-term consequences, particularly for vulnerable groups like pregnant women, children and the elderly.

    B12 deficiency in pregnancy, especially in the context of a diet high in ultra-processed foods, can disturb how fat is processed and increase systemic inflammation. This raises the risk of long-term health problems for both mother and baby.


    A recent study shed light on how B12 deficiency during pregnancy may disrupt two critical systems in the body: fat metabolism and inflammation – both of which are closely linked to chronic diseases like heart disease and type 2 diabetes.

    Researchers studied fat tissue from 115 pregnant women with low B12 levels, focusing on two types of abdominal fat: subcutaneous (under the skin) and omental or visceral (around the organs). They also examined lab-grown fat cells exposed to different B12 levels and collected samples from women of different body weights.

    The results were striking. Women with low B12 had higher body weight and lower levels of HDL (the “good” form of cholesterol). Their fat cells showed increased fat storage, reduced fat breakdown, and impaired mitochondrial function – the energy engines inside our cells.

    Most concerning, these women’s fat tissue released higher levels of inflammatory molecules, suggesting that B12 deficiency might place the body into a constant state of low-grade stress.

    Ancient molecule

    What sets B12 apart from other vitamins is that it’s made exclusively by bacteria and archaea (tiny single-celled organisms similar to bacteria but with important genetic and biochemical differences). Neither plants, animals nor humans can produce B12.

    Some scientists even speculate that B12 may have formed prebiotically, before life itself began. It shares part of its structure, known as a tetrapyrrole ring, with several of life’s most vital compounds, including chlorophyll (for photosynthesis) and heme (for carrying oxygen in our blood).

    Although heme has typically been seen as the oldest of these molecules, recent evidence suggests B12 might have come first. Its core structure – a tetrapyrrole known as the corrin ring – has been found in bacteria that don’t produce heme at all, hinting at even deeper evolutionary roots.

    Because humans can’t make B12, we depend on our diet to get it. Ruminant animals like cows and sheep are able to host B12-producing bacteria in their stomachs and absorb the nutrient directly. We, however, must obtain it from animal-based foods – or from supplements and fortified products.

    Since plants neither produce nor store B12, vegetarians and vegans are at higher risk of this deficiency unless they supplement regularly. As diets become more processed and less diverse, B12 intake and absorption drops, leading to problems in brain function, metabolism and fetal development. Often, the deficiency isn’t spotted until symptoms become serious or irreversible.

    The takeaway is that we need to pay more attention to micronutrients, not just calories. Ensuring adequate B12 levels, particularly before and during pregnancy, is crucial. That means prioritising whole foods, fruits, vegetables and quality sources of protein, while limiting ultra-processed products.

    From the primordial soup to the modern dinner plate, vitamin B12 is more than a nutrient – it’s a molecular link between our evolutionary past and our future health. Recognising its importance might just be one of the most powerful steps we can take toward a healthier, more informed life.

    Adaikala Antonysunil receives funding from DRWF, BBSRC, Rosetrees Trust and Society of Endocrinology.

    ref. How vitamin B12 deficiency may disrupt pregnant women’s bodies – https://theconversation.com/how-vitamin-b12-deficiency-may-disrupt-pregnant-womens-bodies-256244


  • MIL-OSI Global: What the hidden rhythms of orangutan calls can tell us about language – new research

    Source: The Conversation – UK – By Chiara De Gregorio, Post Doctoral Research Fellow, University of Warwick

    Don Mammoser/Shutterstock

    In the dense forests of Indonesia, you can hear strange and haunting sounds. At first, these calls may seem like a random collection of noises – but my rhythmic analyses reveal a different story.

    Those noises are the calls of Sumatran orangutans (Pongo abelii), used to warn others about the presence of predators. Orangutans belong to our animal family – we’re both great apes. That means we share a common ancestor – a species that lived millions of years ago, from which we both evolved.

    Like us, orangutans have hands that can grasp, they use tools and can learn new things. We share about 97% of our DNA with orangutans, which means many parts of our bodies and brains work in similar ways.

    That’s why studying orangutans can also help us understand more about how humans evolved, especially when it comes to things like communication, intelligence and the roots of language and rhythm.

    Research on orangutan communication conducted by evolutionary psychologist Adriano Lameira and colleagues in 2024 focused on a different species of orangutan, the wild Bornean orangutan (Pongo pygmaeus wurmbii). They looked at a type of vocalisation made only by males, known as the long call, and found that long calls are organised into two levels of rhythmic hierarchy.

    This was a groundbreaking discovery, showing that orangutan rhythms are structured in a recursive way. Human language is deeply recursive.

    Recursion is when something is built from smaller parts that follow the same pattern. For example, in language, a sentence can contain another sentence inside it. In music, a rhythm can be made of smaller rhythms nested within each other. It’s a way of organising information in layers, where the same structure repeats at different levels.

    So, when the two-level rhythmic pattern was discovered in the long calls of male Bornean orangutans, my team wanted to know whether this kind of rhythm was unique to those particular calls, or revealed a deeper part of how orangutans communicate. To find out, we studied the alarm calls of wild female Sumatran orangutans and found something surprising.

    Instead of two levels, as had been seen in the Bornean males, this time we found three. This is an even more sophisticated pattern than we expected.

    The shared roots of language

    Returning to those alarm calls echoing through the Indonesian forest, we can now hear them with new ears. With the help of statistical tools, what sounded like random noise now takes on a clear structure – a rhythmic pattern of calls grouped into regular bouts and repeated in sequences.

    Each layer follows a steady rhythm, like the ticking of a metronome.

    Until recently, many scientists believed only humans could build layered vocal structures. This belief helped reinforce the idea of a divide between us and other animals.

    But our discovery adds to a growing body of research showing this divide may not be so clear-cut. Studies on great apes and other animals such as lemurs, whales and dolphins have revealed they are capable of rhythmic structuring, vocal learning, combining signals and sounds to make new ones, and even using vowels and consonants. These findings suggest the roots of language may lie in shared evolutionary mechanisms.

    Human language is unique in many ways. But it probably did not appear suddenly. Even the most striking traits in life evolve by reshaping what already exists, through the slow work of variation and natural selection. Our work suggests the brain systems needed to build recursive patterns were present in our ancestors millions of years ago.

    The evolution of language

    We wanted to take our investigation a step further and ask why recursive patterns evolved. So, we designed an experiment in which wild orangutans were exposed to different predator models, some posing a more realistic threat than others.

    This involved a person walking on all fours under different-coloured blankets. One had tiger stripes (tigers are orangutan predators). The other blankets were blue, white or multi-coloured.

    We found that more structured, regular and faster orangutan alarm sequences were made in response to tiger stripes. When the predator seemed less convincing, the vocalisations lost that regularity and slowed down. So, rhythm may help listeners gauge the seriousness of a situation.

    These patterns in orangutan calls give us some important hints about how language might have started. But it’s possible that other animals have similar ways of communicating that we haven’t discovered yet. To really understand how things like evolution, social life and the environment shape these interesting communication skills, we need to keep studying many different animals.

    Perhaps the most surprising lesson is this: complexity doesn’t always need words. The rhythms, patterns and structures we have uncovered in orangutan alarms remind us that meaningful communication can emerge in many forms – and that the roots of our language may lie not just in what is said, but how it is expressed.

    Chiara De Gregorio does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What the hidden rhythms of orangutan calls can tell us about language – new research – https://theconversation.com/what-the-hidden-rhythms-of-orangutan-calls-can-tell-us-about-language-new-research-257400


  • MIL-Evening Report: What’s the difference between abs and core? One term focuses on aesthetics – and the other on function

    Source: The Conversation (Au and NZ) – By Hunter Bennett, Lecturer in Exercise Science, University of South Australia

    Maksim Goncharenok/Pexels

    You’ve probably heard the terms “abs” and “core” used in social media videos, Pilates classes, or even by physiotherapists.

    Given they seem to refer to the same general area of your body, you might have wondered what the difference is.

    When people talk about “abs”, they’re often referring to the abdominal muscles you can see. Conversely, the term “core” is used to describe a broader group of muscles in the context of function, rather than aesthetics.

    While abs and core are often spoken about separately, there’s a lot of overlap between them.

    What are abs?

    The term “abs” is short for abdominal muscles. These are the muscles that run along the front and side of your stomach.

    When someone talks about getting a six-pack, they’re usually referring to toning the rectus abdominis, the long muscle that goes from the bottom of your ribs to the top of your pelvis.

    Your abdominals also include your obliques, which sit on the side of your body, and your transverse abdominis, which sits underneath your other abdominal muscles and wraps around your waist like a belt.

    The term “abs” has been around for a long time, and is perhaps most often used when discussing aesthetics.

    For example, it’s common to see health and wellness publications offering advice on how to achieve “flat” or “six-pack” abs.

    The long muscle that goes from the bottom of your ribs to the top of your pelvis is called the rectus abdominis.
    phoenix creation/Shutterstock

    What about the core?

    When people talk about the “core”, they are often referring to your abdominals, but also the muscles in your back (your spinal erectors), hips, glutes, pelvic floor, and your diaphragm.

    These are the muscles that can stabilise your spine against movement, and aid in the transfer of force between the upper and lower limbs.

    The term “core” wasn’t commonly used until the early 2000s, when “core training” surged in popularity.

    While the exact reason for its surge in popularity isn’t clear, it most likely followed a study published in 1998 that suggested people with lower back pain might have impaired function of their deep abdominal muscles.

    From there, the concept of “core training” entered the mainstream, where it was proposed to reduce lower back pain and improve athletic performance.

    ‘Core’ training only entered the mainstream this century.
    nadia_acosta/Shutterstock

    What does the evidence say?

    When we consider all the muscles that make up the core, it seems obvious they would be important – but it might not be for the reasons you think.

    For example, having good core stability doesn’t necessarily prevent lower back pain, as it’s been touted to do.

    There’s evidence suggesting core stability training, which might include exercises such as planks and dead bugs, can help reduce bouts of lower back pain. However, it doesn’t appear to be any more effective than other types of exercise, such as walking or weight training.

    Other research suggests there aren’t any differences in how people with and without lower back pain recruit and use their core muscles.

    In a separate study, improvements in core strength and stability after a nine-week core stability training program were not significantly associated with improvements in pain and function, further questioning this relationship.

    The link between core strength and athletic performance is also unclear.

    A 2016 review found some very small associations between measures of core muscle strength and measures of whole body strength, power and balance. However, because of the design of the studies reviewed, we don’t know whether people who have better strength, power and balance simply have stronger core muscles, or whether stronger core muscles increase strength, power and balance.

    An earlier review summarised the effect of core stability training on measures of athletic performance, including jumping, sprinting and throwing. It concluded this type of training is unlikely to provide substantial benefits to measures of general athletic performance such as jumping and sprinting.

    However, this review also suggested that, given the important role of the abs in torso rotation, strengthening these muscles might have merit in improving performance in sports that involve swinging a bat or throwing a ball.

    This is likely to apply to other sports that involve rapid torso movement as well, such as mixed martial arts and kayaking.

    Stronger abdominal muscles could offer an advantage in sports that involve rotation.
    Lino Khim Medrina/Pexels

    How can you exercise your abs and core?

    There’s good evidence that simply getting stronger by lifting weights can help prevent injuries. Training your core to get stronger should have a similar impact, as long as it’s part of a broader training program.

    We also know having weaker muscles makes you more likely to experience functional limitations and disability in older age. So alongside any other potential benefits, improving core strength with the rest of your body could help keep you fit and healthy as you get older.

    There are plenty of exercises you can do to train your core and abs.

    If you’re new to core training, you might want to start off with some lower-level isolation exercises that don’t involve any movement of the core. These include things like planks, bird dogs and Pallof presses. These are unlikely to cause too much muscle soreness, but will train your core muscles.

    Once you feel like these are going well, you can start moving into some more dynamic exercises such as sit ups, Russian twists and leg raises, where you train your abdominals using a full range of motion.

    Hunter Bennett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What’s the difference between abs and core? One term focuses on aesthetics – and the other on function – https://theconversation.com/whats-the-difference-between-abs-and-core-one-term-focuses-on-aesthetics-and-the-other-on-function-254582


  • MIL-Evening Report: The drought is back – we need a new way to help farmers survive tough times

    Source: The Conversation (Au and NZ) – By Linda Botterill, Visiting Fellow, Crawford School of Public Policy, Australian National University

    Australia in 2025 is living up to Dorothea Mackellar’s poetic vision of a country stricken by “drought and flooding rains”.

    The clean-up is underway from the deadly floods in the Hunter and mid-north coast regions of New South Wales. At the same time, large swathes of Victoria, South Australia and Tasmania are severely drought-affected due to some of the lowest rainfall on record.

    Do we have the right support arrangements in place to help farmers and communities survive the current dry period?

    Or is there a better way to help primary producers through the tough times, which are predicted to become more frequent and severe under climate change?

    Managing risk

    Drought is not a natural disaster – at least not according to Australia’s National Drought Policy. In 1989, drought was removed from what are now known as the Natural Disaster Relief and Recovery Arrangements.

    The decision was made for several reasons, including the high level of expenditure on drought relief in Queensland. The federal finance minister at the time, Peter Walsh, suggested the Queensland government was using the arrangements as a “sort of National Party slush fund to be distributed to National Party toadies and apparatchiks”.

    The more considered reason was that our scientific understanding of the drivers of Australia’s climate, such as El Niño, suggested drought was a normal part of our environment. Since then, climate modelling points to droughts becoming an even more familiar sight in Australia as a result of global warming.

    So the focus of drought relief shifted from disaster response to risk management.

    Building resilience

    The National Drought Policy announced in 1992 stated drought should be managed like any other business risk.

    Since then, the language of resilience has been added to the mix and the government lists three objectives for drought policy:

    • to build the drought resilience of farming businesses by enabling preparedness, risk management and financial self-reliance
    • to ensure an appropriate safety net is always available to those experiencing hardship
    • to encourage stakeholders to work together to address the challenges of drought.

    Since 1992, various governments have introduced, and tweaked, different programs aimed at supporting drought-affected farmers.

    The most successful program is the Farm Management Deposits Scheme. This has accumulated a whisker under A$6 billion in farmer savings, which are available to be drawn down during drought to support farm businesses.

    Others have come and gone – for example, the much-criticised Exceptional Circumstances Program.

    More help needed

    In 2025, the federal government is using the Future Drought Fund to invest $100 million per year to promote resilience. It also offers support through the Farm Household Allowance and concessional loans for farms and related small businesses.

    Apart from the Farm Management Deposit Scheme and the Farm Household Allowance, these programs do not offer immediate financial assistance to the increasing number of farmers across southern Australia being impacted by drought. If the drought worsens, it is likely there will be increasing calls for greater support.

    This provides the government with a dilemma: it is already investing significantly in the risk and resilience approach to drought, but politically, it is hard to resist cries for help from farmers who are a highly valued group in our community.

    A better way?

    There is a solution available to government to improve support. It can be done through the provision of “revenue contingent loans” for drought-affected farmers. Financial support would be available to farmers when they need it, consistent with the risk management principles underpinning the national drought policy.

    Our detailed modelling, now extending over 25 years, shows compellingly that revenue-based loans would mean taxpayers spending less on drought arrangements, while delivering greater assistance than other forms of public sector help.

    Capacity to repay would be the defining feature of the scheme. A revenue contingent loan is only paid down in periods when the farm is experiencing healthy cash flow. If a farm’s annual financial situation is difficult, no repayments are required.

    These loans would also remove foreclosure risk associated with an inability to repay when times are tough. Loan defaults simply can’t happen, a feature which also takes away the psychological trauma associated with the fear of losing the property due to unforeseen financial difficulties.
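
    As an illustration only, the repayment mechanics described above can be sketched in a few lines of Python. The revenue threshold, repayment rate and interest rate here are hypothetical placeholders, not figures from the authors’ modelling:

```python
def annual_repayment(revenue, threshold=100_000.0, rate=0.10):
    """Repayment is owed only on revenue above a threshold,
    so a lean year (revenue <= threshold) triggers no repayment."""
    return rate * max(0.0, revenue - threshold)

def simulate_loan(balance, yearly_revenues, interest=0.03,
                  threshold=100_000.0, rate=0.10):
    """Track a revenue contingent loan balance over a run of seasons.
    The balance can grow in drought years, but a default never occurs,
    because repayment is always capped by capacity to pay."""
    for revenue in yearly_revenues:
        balance *= 1 + interest  # interest accrues regardless of season
        balance -= min(balance, annual_repayment(revenue, threshold, rate))
    return balance

# Two drought years (no repayments), then two strong years
print(simulate_loan(50_000.0, [60_000, 80_000, 300_000, 400_000]))  # ≈ 5675.44
```

    The key design feature this sketch captures is that repayments track farm revenue, not a fixed schedule, which is why foreclosure risk disappears.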

    Good policy

    These benefits would address governments’ main motivation for drought policy, which is risk management, because repayment concerns and default prospects would be eliminated. In farming, where uncertainty is great, these are very significant pluses for policy.

    Revenue contingent loans are a proper risk management financial instrument that requires low or no subsidies from government. They would complement the Farm Management Deposit Scheme and be an effective replacement for the concessional loans currently on offer.

    A win-win for farmer and taxpayer, alike.

    Linda Botterill has in the past received funding from the Australian Research Council, the Grains Research and Development Corporation, and Rural Industries Research and Development Corporation (now Agrifutures).

    Bruce Chapman has received funding from the Australian Research Council in various years, and was a consultant to the Federal Government’s Department of Education University Accord Enquiry in 2023/24.

    ref. The drought is back – we need a new way to help farmers survive tough times – https://theconversation.com/the-drought-is-back-we-need-a-new-way-to-help-farmers-survive-tough-times-256576


  • MIL-Evening Report: Australia’s first machete ban is coming to Victoria. Will it work, or is it just another political quick fix?

    Source: The Conversation (Au and NZ) – By Samara McPhedran, Principal Research Fellow, Griffith University

    Following a shopping centre brawl in Melbourne at the weekend, Victorian Premier Jacinta Allan announced the state will ban the sale of all machetes from Wednesday.

    In March this year, the Victorian government had already announced that from September 1 machetes would become a “prohibited weapon”.

    Prohibited weapons are items considered inappropriate for general possession and use without a police commissioner’s approval or a Governor in Council Exemption Order.

    This means machetes will be added to the list of things – such as swords, crossbows, slingshots, pepper spray and about 40 other items – that are essentially banned.

    Possession of a prohibited item can result in penalties of two years imprisonment or a fine of more than $47,000.

    Victoria is the first state in Australia to outright ban machetes. In other jurisdictions, machetes (like knives) may be used for lawful purposes, and are “controlled” or “restricted” – meaning you need a reasonable excuse or valid reason for possessing one.

    Most jurisdictions (except Tasmania and the Northern Territory) prohibit sales to minors.

    Will there be exemptions?

    Allan said the sales ban will have no exceptions, meaning nobody will be able to purchase a machete.

    However, machetes are a useful tool, particularly for agricultural purposes and outdoor uses such as camping.

    When the new laws come into effect in September, people will be able to apply for a special “commissioner’s approval” to possess a machete. The exact details of who may be granted an exemption, and under what circumstances, are not yet clear.

    Nor is it clear whether people will have to, for example, pay for a permit to own a machete, or what measures people may have to take to prevent unauthorised access or theft.

    How much of a problem is knife crime in Australia?

    Despite alarming headlines and political rhetoric about a knife crime epidemic, it is hard to say exactly how much of a problem knife crime is.

    Statistics about weapon use and unlawful possession are not always disaggregated by type of weapon.

    Crime statistics are notoriously slippery, and what looks like a “crisis” can often be the result of changes in policing practices. For instance, when police run an intensive operation searching for knives in public places, they are more likely to find knives in public places. This does not necessarily mean there are more people out there carrying knives.

    The one crime where statistics are fairly clear is homicide: knives or other sharp instruments have long been the most common weapon used in Australia.

    The actual number of homicides involving knives or sharp instruments has stayed relatively stable over time. When you take into account the increase in how many people live in Australia, the rate per head of population has fallen.

    It is tempting to think a machete ban would reduce these figures even more. Unfortunately, violence prevention is not that simple.

    Homicides that involve people using their hands and feet have declined markedly over time. Why has this “method”, which is available to anybody, fallen so much? The answer is: nobody really knows.

    This tells us we need to look beyond types of weapons.

    Will the ban achieve anything?

    Violence is complex and simple “solutions” may make people feel safe (at least temporarily) but seldom deliver real results over the longer term.

    It’s easy for governments to ban things, which is why they do it so often. But we should pay close attention to what Victorian Police Minister Anthony Carbines said in March:

    This is Australia’s first machete ban, and we agree with police that it must be done once and done right. It took the UK (United Kingdom) 18 months – we can do it in six.

    Lawmaking should never be a race. Nor should politicians be mere mouthpieces doing what police tell them.

    Police are the ones we turn to for protection when violence breaks out, but this does not mean they are the only ones we should go to when we are looking for the most effective ways to deal with problems.

    Tackling violence takes serious commitment to complex and intensive programs that focus on the root causes, particularly among at-risk families and disadvantaged, marginalised youth.

    This is hard work that takes a long time, includes many different stakeholders, and seldom sways votes. Focusing on the choice of weapon is simply a distraction.

    There is no question the sight of machete-wielding youths storming through a busy shopping centre is terrifying. People should be able to go about their business without fearing they will be attacked.

    But reducing violence takes a lot more than banning one particular weapon, as Victoria will likely find out.

    Dr Samara McPhedran does not work for, consult to, own shares in or receive funding from any company or organisation that might benefit from this article.

    ref. Australia’s first machete ban is coming to Victoria. Will it work, or is it just another political quick fix? – https://theconversation.com/australias-first-machete-ban-is-coming-to-victoria-will-it-work-or-is-it-just-another-political-quick-fix-257541


  • MIL-Evening Report: A not-so-modern epidemic: what 17th-century nuns can teach us about coping with loneliness

    Source: The Conversation (Au and NZ) – By Claire Walker, Associate Professor, School of Historical and Classical Studies, University of Adelaide

    La Religieuse Tenant La Sainte Croix (The Nun Holding the Holy Cross), Jacques Callot, French, 1621–35. The Metropolitan Museum of Art

    Is loneliness a modern epidemic as we are so often told? Did people in the past suffer similar feelings of isolation?

    The word “loneliness” was not common before the 19th century. Cultural historian Fay Bound Alberti argues it was rarely used before 1800.

    This does not mean people didn’t feel alone. They just had different names for it – and they didn’t always think it was bad. Modern people living hectic lives in bustling cities often yearn for peace and tranquillity; so did our forebears.

    From the hermits of the early Christian church escaping society for lives of solitary prayer, to medieval anchorites in secluded cells, isolation was a prerequisite for spiritual success.

    But were isolated monks, nuns and hermits also lonely, as we would understand the word today? And do early modern nuns offer solutions for our own loneliness epidemic?

    Searching for solitude

    Early Christian religious thinkers and medieval churchmen viewed voluntary loneliness positively, with successful practitioners becoming saints. But religious solitude was not without its problems.

    Holy recluses, far from escaping society, were pursued for spiritual advice. Some, like Simeon Stylites (390–459), went to extraordinary measures, living atop a pillar near Aleppo for 30-odd years to achieve solitude.

    Monasticism provided an alternative. Monastic rules, like that of Benedict of Nursia (480–547), institutionalised isolation. In Benedictine monasteries, solitude was created through seclusion from society, strict silence, and prohibition of close friendships.

    Yet, like hermits, monks and nuns couldn’t escape the world completely. Monasteries constituted vital spiritual resources, providing multiple services and conducting business for wider society.

    Nuns at Work, Follower of Alessandro Magnasco (Italian, Milanese, first half 18th century).
    The Metropolitan Museum of Art

    Over the centuries, reforming bishops believed there was too much interaction between monasteries and the wider community. This led to repeated church reforms from the 10th century onwards to secure separation.

    Male members of the clergy were particularly worried about nuns who were considered “less capable” of maintaining holy solitude. As a result, women had to observe strict enclosure behind convent walls, limiting their economic and spiritual capacity. Reforms in the 16th century upheld nuns’ incarceration.

    Many women resisted, but others embraced isolation as spiritually liberating.

    Isolation in exile

    Early modern English convents, exiled in Europe after Henry VIII’s dissolution of the monasteries, shed light on nuns’ experiences of loneliness.

    The convents were subject to traditional rules of enclosure and silence. To become nuns, women left their homeland, family and friends. They joined English houses, so they were not alone among strangers, but they had to remain emotionally distant from one another, despite living in a community where they did everything together.

    Women wanting spiritual fulfilment often sought additional solitude.

    Benedictine mystic Gertrude More (1606–33) praised prescribed periods of silence because in them she might hear her Lord’s whispers.

    Carmelite prioress Teresa of Jesus Maria Worsley (1601–42) took time from her busy administrative role and hid from the other nuns to pray in solitude.

    The Nun in Count Burckhardt, from the periodical Once a Week. After James McNeill Whistler, American. Associated with Dalziel Brothers, British. September 27 1862.
    The Metropolitan Museum of Art

    Not all women found seclusion and silence so fulfilling, however, with some experiencing bouts of spiritual doubt and poor mental health. Many missed their family and homeland.

    This was particularly common among young sisters and those in convent schools. In the 1660s, Catherine Aston returned to England to recover after suffering poor health and depression.

    Alone in a crowd

    Nuns’ diverse experiences of monastic solitude reflect modern urban loneliness.

    In 1812 Lord Byron expressed the contradictory nature of loneliness in the poem Childe Harold, juxtaposing the positive solitary contemplation of nature with its negative counterpart – aloneness “midst the crowd”.

    In the present day many people feel alone in cities, even domestic households, as Olivia Laing and Keith Snell have shown.

    How might this be countered? Do early modern nuns offer solutions?

    A study of 21st-century Spanish monks and nuns found monastic training, prayer and silence create feelings of spiritual satisfaction and purpose which lessen loneliness.

    Prayer is not the answer for everyone because modern isolation is caused by multiple factors in a largely secular society. There are alternative paths to meditation, however, through yoga or mindfulness which can provide feelings akin to monks’ and nuns’ “spiritual satisfaction”.

    Similarly, the nuns’ sense of “purpose” might be achieved through nostalgia. Nostalgia is the longing for an idealised and unobtainable past – a time when life was better. Research by psychologists suggests nostalgia can be beneficial in counteracting loneliness, even enabling forward-looking and proactive behaviours.

    Nuns at Mass, Amedor, Spanish, 1900.
    Getty Museum

    This was certainly true for the nuns exiled in Europe following Henry VIII’s abolition of monasticism in England. They dreamt of a future when their convents would return to England, family and friends. All nuns prayed both communally and in private for this outcome.

    Some went further, engaging in missionary work and political intrigue to achieve their goal.

    We cannot know whether this stifled loneliness, but by combining the benefits of meditation and activism it likely fostered a shared sense of purpose.

    Just as Gertrude More and Teresa of Jesus Maria Worsley found solitude essential for spiritual satisfaction, activist nuns believed they might reverse the English reformation from their exiled convents. Solitude, prayer and political engagement gave them a sense of purpose.

    Everyone’s situation is unique. There is no single solution for resolving isolation in the contemporary world. But the knowledge that it can be positive is perhaps a step towards countering the modern epidemic.

    Claire Walker has received funding from the Australian Research Council.

    ref. A not-so-modern epidemic: what 17th-century nuns can teach us about coping with loneliness – https://theconversation.com/a-not-so-modern-epidemic-what-17th-century-nuns-can-teach-us-about-coping-with-loneliness-249487


  • MIL-Evening Report: Actually, Gen Z stand to be the biggest winners from the new $3 million super tax

    Source: The Conversation (Au and NZ) – By Brendan Coates, Program Director, Housing and Economic Security, Grattan Institute

    As debate rages about the federal government’s plan to lift the tax on earnings on superannuation balances over A$3 million, it’s worth revisiting why we offer super tax breaks in the first place, and why they need to be reformed.

    Tax breaks on super contributions mean less tax is paid on super savings than other forms of income. These tax breaks cost the federal budget nearly $50 billion in lost revenue each year.

    These tax breaks boost the retirement savings of super fund members. They also ensure workers don’t pay punitively high long-term tax rates on their super, since the impact of even low tax rates on savings compounds over time.

    But they disproportionately flow to older and wealthier Australians.

    Two-thirds of the value of super tax breaks benefit the top 20% of income earners, who are already saving enough for their retirement.

    Few retirees draw down on their retirement savings as intended, and many are net savers – their super balance continues to grow for decades after they retire.

    By 2060, Treasury expects one-third of all withdrawals from super will be via bequests – up from one-fifth today.

    Superannuation in Australia was intended to help fund retirements. Instead, it has become a taxpayer-subsidised inheritance scheme.

    The tax breaks aren’t just inequitable; they are economically unsound. Generous tax breaks for super savers mean other taxes (such as income and company taxes) must be higher to make up the forgone revenue. That means the burden falls disproportionately on younger taxpayers.

    The government should go further

    The government’s plan to increase the tax rate on superannuation earnings for balances exceeding $3 million from 15% to 30% is one modest step towards fixing these problems. The tax would only apply to the amount over $3 million, not the entire balance.

    This reform will affect only the top 0.5% of super account holders – about 80,000 people – and save the budget more than $2 billion in its first full year.

    Claims that not indexing the $3 million threshold will result in the tax affecting most younger Australians, or that it will somehow disproportionately affect younger generations, are simply nonsense.

    Rather than being the biggest losers from the lack of indexation, younger Australians are the biggest beneficiaries. It means more older, wealthier Australians will shoulder some of the burden of budget repair and an ageing population. Otherwise, younger generations would bear this burden alone.

    The facts speak for themselves: a mere 0.5% of Australians have more than $3 million in their super, and 85% of those are aged over 60.

    Even in the unlikely scenario where the threshold remains fixed until 2055 – or for ten consecutive parliamentary terms – it would still only affect the top 10% of retiring Australians. Treasurer Jim Chalmers has rightly pointed out that it is unlikely the threshold will never be lifted.

    Far from abandoning the proposed $3 million threshold, the government should go further and drop the threshold to $2 million, and only then index it to inflation, saving the budget a further $1 billion a year.

    There is no rationale for offering such generous earnings tax breaks on super balances between $2 million and $3 million.

    At the very least, if the $3 million threshold is maintained, it should not be indexed until inflation naturally reduces its real value to $2 million, which is estimated to occur around 2040.

    Sure, it’s complicated

    Levying a higher tax rate on the earnings of large super balances is complicated by the fact existing super earnings taxes are levied at the fund level, not on individual member accounts.

    And it’s true that levying a 15% surcharge on the implied earnings of the account over the year (the change in account balance, net of contributions and withdrawals) will impose a tax on unrealised capital gains, or paper profits.
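
    As a rough sketch only, the arithmetic described above might look like the following. The step that pro-rates earnings by the share of the balance above the threshold is this sketch’s assumption, not a detail confirmed in the article:

```python
def implied_earnings(start_balance, end_balance, contributions, withdrawals):
    """Earnings implied by the change in account balance over the year,
    net of money paid in (contributions) and taken out (withdrawals)."""
    return (end_balance - start_balance) - contributions + withdrawals

def extra_tax(start_balance, end_balance, contributions, withdrawals,
              threshold=3_000_000, surcharge=0.15):
    """15% surcharge on the share of implied earnings attributable
    to the portion of the balance above the threshold."""
    earnings = implied_earnings(start_balance, end_balance,
                                contributions, withdrawals)
    if earnings <= 0 or end_balance <= threshold:
        return 0.0
    over_share = (end_balance - threshold) / end_balance
    return surcharge * earnings * over_share

# Balance grows from $3.5m to $4m with no money flowing in or out:
# a quarter of the $4m balance sits above the $3m threshold.
print(extra_tax(3_500_000, 4_000_000, 0, 0))  # 0.15 x 500,000 x 0.25 = 18,750
```

    Because the tax is computed from the change in balance rather than realised sales, an unrealised gain in asset values raises the bill in exactly the way the paragraph above describes.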

    Taxing capital gains as they build up removes incentives to “lock in” investments to hold onto untaxed capital gains, as the Henry Tax Review recognised. But it can create cash flow problems for some self-managed super fund members who hold assets such as business premises or a farm in their fund.

    Yet there are seldom easy answers when it comes to tax changes.

    Most people with such substantial super balances are retirees who already maintain enough liquid assets to meet the minimum drawdown requirements.

    Indeed, self-managed super funds are legally obligated to have investment strategies that ensure liquidity and the ability to meet liabilities.

    In any case, the tax does not have to be paid from super. Australians with large super balances typically earn as much income from investments outside super as they do from super. And the wealthiest 10% of retirees today rely more on income from outside super than income from super.

    Good policy is always the art of the compromise

    Australia faces the twin challenges of big budget deficits and stagnant productivity. Tax reform will be needed to respond to both.

    Good public policy, like politics, always requires some level of compromise.

    Super tax breaks should exist only where they support a policy aim. And on balance, trimming unneeded super tax breaks for the wealthiest 0.5% of Australians would make our super system fairer and our budget stronger.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Actually, Gen Z stand to be the biggest winners from the new $3 million super tax – https://theconversation.com/actually-gen-z-stand-to-be-the-biggest-winners-from-the-new-3-million-super-tax-257450


  • MIL-Evening Report: Who really benefits from smart tech at home? ‘Optimising’ family life can reinforce gender roles

    Source: The Conversation (Au and NZ) – By Indra Mckie, Postdoctoral Researcher in Collaborative Human-AI Interaction Culture, University of Technology Sydney

    Ashlifier/Shutterstock

    Have you heard of the “male technologist” mindset? It may sound familiar, and you may even know such people personally.

    Design researchers Turkka Keinonen and Nils Ehrenberg have defined the male technologist as someone who is obsessed with concerns about energy, efficiency and reducing labour.

    This archetype became apparent in my PhD research when I interviewed 12 families about their use of early domestic robots and smart home devices such as Amazon Alexa and Google Home. One father over-engineered his smart home so much that his kids struggled to turn the lights on and off.

    The male technologist in the home, as seen in my research, reflects wider trends of the Silicon Valley “tech bro” archetype, the techno-patriarchy, and the growing influence of a tech oligarchy in the Western world.

    The male technologist often complicates and overcompensates with technology, raising the question: are these real problems tech can solve, or just quick fixes masking deeper issues?

    Long-standing patriarchal systems shape the gendered division of domestic labour.
    Andrea Piacquadio/Pexels

    It’s not about making men feel guilty

    The term “male technologist” isn’t about making men feel guilty for using technology to innovate. Anyone can adopt this mindset. It can even apply to institutions that prioritise innovation and efficiency over emotional insight, lived experience or community-based ways of creating change.

    It’s a reflection of how a masculine drive to solve surface-level problems can come before addressing patriarchal systems that have shaped the long-standing gendered division of domestic labour and “mental load”.

    Mental load is the invisible, ongoing effort of planning, organising and managing daily life that often goes unnoticed but is essential to keeping things running.

    Take one of my research participants, Hugo (name changed for privacy). A father of two, Hugo embodies this male technologist mindset by creating “business scenarios” to solve his family’s problems with smart home automation.


    Indra Mckie/The Conversation

    Treating family life like a system to optimise, Hugo noticed his wife looking stressed while cooking. So, he installed a smart clock with Alexa in the kitchen to help her manage multiple timers.

    Hugo saw it as an empathetic solution, tailored to the way she liked to cook. But instead of sharing the load of this domestic task, he “engineered” around it, offloading responsibility to smart devices.

    Smart home tech promises to save time, but it hasn’t solved who does what at home. Instead, it hands more power to those with digital know-how, letting them automate tasks they may never have done or fully understood in the first place.

    Typically, these tend to be men. A recent survey by Kaspersky showed 72% of men are the ones who set up their families’ smart devices, compared to 47% of women.

    Unfortunately, a recent Australian survey found women still do more unpaid domestic work than men. Even in households where women have full-time jobs, they spend almost four hours more on household chores per week than men do.

    Who really benefits in a smart home

    Amazon first released Alexa back in 2014, with Apple and Google quickly following with their own smart home speakers. In the past decade, some people have bought into the hype of the “smart home” in a bid to make life easier by controlling technology without needing to get off the couch.

    But smart technology can also affect access to shared spaces, create new forms of control over things and people in the home, and constrain human interactions. And it can be set up to reinforce the existing hierarchy within the household.


    Indra Mckie/The Conversation

    By his own admission, Hugo has over-engineered the home to the point where his children struggle to turn the lights on and off, having disabled the physical switches in favour of voice commands.

    My research looked at how automation is changing care giving and acts of service in the home. With “compassionate automation”, someone could use smart technology to support loved ones in thoughtful ways, such as setting up smart home routines or reminders to make daily life easier.

    But even when it comes from a place of care, tech-based help is not the same as human care. It may not always feel meaningful to the person receiving or providing it. As another participant in my research put it:

    I think there are still human interactions […] that you probably don’t want AI to mediate for you.


    Indra Mckie/The Conversation

    So what is the alternative to a male technologist mindset? Feminist and queer technology studies offer a different lens. Researchers in these fields argue our interactions with technology are never neutral; they are shaped by gender, power and cultural norms.

    When we recognise this, we can imagine ways of designing and using tech in ways that emphasise care and relationships. Instead of setting up a smart timer in the kitchen, the technologist could ask his wife what she’s cooking and join her, using the voice assistant together to follow a recipe step by step.

    The ultimate fantasy of the male technologist is more toys to solve domestic labour problems at home.
    Gordenkoff/Shutterstock

    Looking ahead to the future of smart homes

    As Alexa+ rolls out later this year with a “smarter” generative AI brain, Google increases Gemini integration into its Home app, and tech companies race to build humanoid robots that can cook dinner and fold laundry, we’re seeing the ultimate fantasy of the male technologist come to life: more toys to presumably solve the problems of domestic labour at home.

    But if men are now taking on more of the digital load, will the mental load finally shift too? Or will they continue to automate the easy, visible tasks while the emotional and cognitive labour still goes unseen and unshared?

    Elon Musk has declared plans to launch several thousand Optimus robots – Tesla’s bid into the humanoid robot race. He expects the explosion of a new market of personal humanoid robots, generating US$10 trillion in revenue long-term and potentially becoming the most valuable part of Tesla’s business.

    But as homes get “smarter,” we have to ask: how is this reshaping family dynamics, relationships and domestic responsibility?

    It’s important to consider if outsourcing chores to technology really is about easing the load, or just engineering our way around it without addressing the deeper mental and relational work of household labour.

    Indra Mckie received the UTS Research Excellence Scholarship to complete her PhD research at the University of Technology Sydney.

    ref. Who really benefits from smart tech at home? ‘Optimising’ family life can reinforce gender roles – https://theconversation.com/who-really-benefits-from-smart-tech-at-home-optimising-family-life-can-reinforce-gender-roles-256477
