Category: Reportage

  • MIL-OSI Global: Caveman method skincare: how neglecting skincare completely can give you ‘cornflake’ build-up

    Source: The Conversation – UK – By Adam Taylor, Professor of Anatomy, Lancaster University

    Gorodenkoff/Shutterstock

    Social media has done it again – this time reviving a minimalist skincare trend known as the caveman method. Think of it as the paleo diet for your face: no cleansers, no moisturisers, no water. Just your skin, left completely to its own devices.

    Supporters claim it helps reduce breakouts, arguing that overuse of products irritates their skin. But while simplifying your routine might have some short-term benefits, going completely product-free, and especially water-free, can put you at risk of a lesser-known condition: dermatitis neglecta.

    Dermatitis neglecta was first described in a medical journal in 1995. It’s a skin condition that doesn’t involve inflammation but rather occurs when skin isn’t cleaned adequately over time. It’s most commonly seen in people with neurological or psychological conditions, in those avoiding cleaning an area because of surgical wounds or skin sensitivity, or simply as a result of poor hygiene.

    It often shows up on the face, chest and limbs, but can appear anywhere on the body. The hallmark? A pigmented, scaly build-up that looks like cornflakes.

    But what’s actually building up?

    Your skin is constantly renewing itself. As new skin cells form underneath, older ones are pushed up and eventually die due to lack of oxygen from the blood supply beneath.

    We shed about 500 million dead skin cells per day – roughly two grams’ worth. That’s not much, but if you’re not washing your face, even this small daily build-up can quickly lead to visible debris and dullness.

    This often overlooked layer of built-up skin can sometimes conceal underlying medical conditions, including cancer, that only become apparent once the excess is removed.

    Skin cancers are less common in people with darker skin tones, but outcomes are often worse, primarily due to delayed diagnosis, which makes the cancer harder to treat. In such cases, conditions like dermatitis neglecta may further obscure signs of disease, making early detection even more challenging.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    But it’s not just dead cells at play. Your skin’s natural secretions, sweat and sebum, also contribute to this protective barrier.

    Sebum is an oily substance produced by sebaceous glands all over the body. It helps keep moisture in and has antimicrobial properties. The nose is the area with the highest sebum production, which explains its reputation for shininess. Sebum also plays a role in skin pH, helping keep the skin slightly acidic to ward off harmful bacteria.

    Sweat, meanwhile, also contains antimicrobial peptides that help defend against pathogens. But if these secretions can’t reach or function properly at the skin’s surface – either because they’re blocked by build-up or not spread through cleansing – your natural defences may weaken, making it easier for bacteria or fungi to thrive.

    Skipping all skincare might sound natural, but it may disrupt these finely balanced systems. If the skin becomes overwhelmed, it can’t do its job – leading not just to clogged pores, but potential infection.

    Thankfully, dermatitis neglecta is relatively easy to treat. Mild cases clear up with warm soapy water. More stubborn build-up may require gentle cleansing with isopropyl alcohol. In extreme cases, dermatologists may prescribe keratolytics, creams that help break down and remove the thickened outer layers.

    Back to basics

    Let’s get one thing straight: you don’t need a ten-step routine. But, as well as keeping the skin clean, a few basic skincare practices go a long way.

    First, hydrate. Drinking water can improve skin hydration, especially if your intake has been low.

    Next, moisturise. A simple moisturiser with ingredients like hyaluronic acid or glycerin helps lock in moisture and support the skin’s natural barrier. You’ll often spot hyaluronic acid on product labels: it’s known for its ability to bind water to the skin.

    High molecular weight hyaluronic acid can help hydrate the surface of the skin and support its barrier function. But only low molecular weight hyaluronic acid can penetrate the deeper layers, where it can improve hydration more comprehensively and help reduce the appearance of fine lines. A blend of high and low molecular weight hyaluronic acid can offer both deep hydration and surface moisture retention.

    Humectants like sodium PCA also draw moisture from the air into the skin, helping to keep it soft and supple. This is particularly important for darker skin tones, which are more prone to transepidermal water loss, meaning they can lose moisture more quickly and may need extra hydration support.

    Finally, wear sunscreen – every day – no matter your skin tone. While melanin can offer some natural protection against UV damage, it’s not enough to prevent skin cancer, premature ageing, or pigmentation issues. Daily use of sunscreen is essential for everyone. UV rays damage collagen, the protein that keeps skin firm. They cause collagen to cross-link, making it stiff and contributing to wrinkles and sagging. Collagen has a half-life of around 15 years, so once it’s damaged, your skin takes a long time to recover.

    To maintain the skin’s young, fresh and healthy appearance, collagen and other molecules need to be replaced and allowed to mature. But UV also physically damages the collagen formation and maturation process, making it more difficult for new collagen to form properly, further contributing to the aged appearance of skin. Sunscreen helps prevent this long-term ageing effect.

    Cheesy varnish

    If you think your skin has never been coated in build-up, think again. In the womb, your sebaceous glands produced a substance called vernix caseosa, Latin for “cheesy varnish”. This waxy coating, visible on many newborns, is made of sebum and dead skin. It moisturises, insulates and protects infants during birth – and it’s proof that build-up on your skin isn’t as unnatural as it might seem.

    Going back to basics can feel appealing, especially in a world overflowing with products. But your skin is a complex, hardworking organ that benefits from a little support.

    More research is needed to understand how skincare affects different people: factors like biological sex, skin tone, environment and genetics all play a role. But simple steps like drinking water, applying moisturiser, and wearing sunscreen can help your skin function at its best.

    So before you ditch everything in your bathroom, remember that “natural” doesn’t always mean “better”. Your skin evolved to protect you – but it still needs a little help now and then.

    Adam Taylor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Caveman method skincare: how neglecting skincare completely can give you ‘cornflake’ build-up – https://theconversation.com/caveman-method-skincare-how-neglecting-skincare-completely-can-give-you-cornflake-build-up-256362

    MIL OSI – Global Reports

  • MIL-OSI Global: Post-sepsis syndrome: when the body recovers but the brain doesn’t

    Source: The Conversation – UK – By Steven W. Kerrigan, Professor of Precision Therapeutics, School of Pharmacy and Biomolecular Sciences, RCSI University of Medicine and Health Sciences

    A 3D rendering of the life-threatening condition sepsis. Love Employee/Shutterstock

    Sepsis is a life-threatening condition triggered by the body’s extreme response to infection. It causes widespread inflammation, which can lead to tissue damage, organ failure and death.

    Thanks to modern medicine, survival rates have improved dramatically. But for many who survive, the battle isn’t over when they leave hospital. Instead, they enter a new and often overlooked phase of recovery marked by lingering, life-altering effects.

    Post-sepsis syndrome (PSS) affects up to half of all sepsis survivors and can persist for months or even years. It’s a complex mix of physical, cognitive and psychological symptoms. People may seem physically recovered yet struggle with overwhelming fatigue, chronic pain, muscle weakness and disrupted sleep.

    The most profound impacts, however, often show up in the brain. Many sepsis survivors experience cognitive problems that mirror those seen in traumatic brain injury or early dementia. These can include memory lapses, difficulty concentrating, slower thinking and impaired decision-making.




    For some, these challenges are manageable. For others, they’re severe enough to interfere with work, education or independent living.

    One major culprit appears to be the body’s own inflammatory response. During sepsis, the immune system floods the body with inflammatory molecules – a so-called “cytokine storm”. This can damage the blood-brain barrier, allowing harmful substances and immune cells into the brain. The resulting neuroinflammation and oxygen deprivation can injure brain cells and disrupt normal function.

    Hidden psychological toll

    Anyone who survives sepsis can develop PSS, but some are more vulnerable than others. Risk factors include: older age, which increases the likelihood of cognitive decline; long ICU stays or the use of a ventilator, which can contribute to physical and mental complications; pre-existing mental health or cognitive conditions; and more severe inflammatory responses during sepsis, which are linked to lasting damage.

    Children are also at risk, as they may experience developmental or emotional challenges that affect their learning and social development for years.

    Many sepsis survivors go on to experience post-traumatic stress disorder (PTSD), anxiety or depression. These issues can be triggered by the trauma of a near-death experience, prolonged sedation, invasive treatments, or time spent in intensive care units (ICUs) – often while cut off from family and friends.

    In fact, “ICU delirium”, which affects up to 80% of patients on ventilators, has been strongly associated with long-term cognitive and psychological impairment. Sepsis survivors who experience this often recall vivid, terrifying hallucinations during their ICU stay. These memories can haunt them more than the physical illness itself.

    The recovery gap

    One of the biggest challenges for sepsis survivors is the lack of follow-up care. Unlike heart attack or stroke recovery, which typically involves coordinated rehabilitation, post-sepsis care is often fragmented. Patients can be discharged without a recovery plan and left to navigate a confusing and lonely road back to health.

    What’s needed are multidisciplinary post-sepsis clinics, where patients can access neurologists, psychologists, rehab specialists and social workers all under one roof. Early support, both psychological and cognitive, can dramatically improve long-term outcomes.

    Sepsis doesn’t just take a toll on survivors – it affects families, communities and healthcare systems. Many survivors cannot return to work, require ongoing care, and face financial hardship. In the US, sepsis costs an estimated US$60 billion annually (£50.8 billion), much of it spent on post-acute care and readmissions.

    A 2016 film inspired by the true story of Tom Ray, who lost his arms, legs and part of his face to sepsis.

    There’s also a growing concern that sepsis may raise the risk of long-term neurodegenerative diseases such as Alzheimer’s. More research is needed, but the links between inflammation, brain damage and cognitive decline are becoming harder to ignore.




    Read more:
    Thirty years on, our research linking viral infections with Alzheimer’s is finally getting the attention it deserves


    Globally, there is progress in helping people survive sepsis. But we must also ensure that sepsis survivors thrive afterwards.

    Here’s what I believe needs to happen now: encourage greater awareness of PSS among clinicians, patients and families; integrate post-sepsis care into chronic disease and rehabilitation programmes; and fund more research into how and why PSS develops – and how to prevent or treat it.

    People recovering from sepsis often rely heavily on loved ones who need better support themselves. Survivors also need clearer, kinder help to get back to work and school, or just back to the everyday routines that once felt normal.

    Surviving sepsis is a triumph of modern medicine – but what comes after is still a neglected frontier. For too many, life after sepsis means battling invisible wounds that affect the brain, body and soul. Recognising, researching and responding to PSS isn’t just a clinical need – it’s a moral obligation. Survivors deserve more than survival. They deserve a chance to truly recover.

    Steven W. Kerrigan receives funding from Research Ireland, Health Research Board of Ireland, Irish Research Council and Enterprise Ireland. The author wishes to thank Liam Casey, a sepsis survivor, for his contribution to this article and for sharing his lived experience of PSS.

    ref. Post-sepsis syndrome: when the body recovers but the brain doesn’t – https://theconversation.com/post-sepsis-syndrome-when-the-body-recovers-but-the-brain-doesnt-256139

    MIL OSI – Global Reports

  • MIL-OSI Global: M&S cyberattacks used a little-known but dangerous technique – and anyone could be vulnerable

    Source: The Conversation – UK – By Hossein Abroshan, Senior Lecturer, School of Computing and Information Science, Anglia Ruskin University

    The cyberattack that targeted Marks & Spencer (M&S) is the latest in a growing wave of cases involving something called sim-swap fraud. While the full technical details remain under investigation, a report in the Times suggests that the attackers used this method to access M&S’s internal systems, possibly by taking control of an employee’s mobile number and convincing IT staff to reset critical login credentials.

    Sim-swap fraud is not a new phenomenon, but it is becoming increasingly dangerous and widespread. According to CIFAS, the UK’s national fraud prevention service, sim-swap incidents surged from under 300 in 2022 to almost 3,000 in 2023. What had mainly been a risk to cryptocurrency investors or online influencers now threatens a much broader group of people.

    This form of cyberattack shows how major companies and ordinary people can be compromised through a tactic that exploits human factors, such as trust and how we have built our digital identities around mobile phones.

    Sim-swap fraud begins when a scammer convinces a mobile operator to transfer a victim’s number to a new sim card, or even an esim (one that’s embedded in the device), under the scammer’s control.

    This can be done over the phone, through an online chat, or even with the help of a bribed insider. Once the number is transferred, all calls and texts intended for the victim are redirected to the scammer. This includes those crucial verification codes used for logging into email, banking, messaging apps such as WhatsApp, and government services such as HMRC.

    This alone would be dangerous. But what makes sim-swap fraud so effective is that the scammer often already has access to a patchwork of personal data about their target. That information may have been collected from data breaches, phishing attacks, low-reputation websites, or even the victim’s social media.

    People often underestimate the extent to which they reveal themselves online: a birthday posted on Instagram, a phone number included in a job posting, or a home address used in an online giveaway. Scammers combine this data to build a convincing profile, enough to fool a mobile operator’s customer service staff into believing they’re talking to the real account holder.

    How the sim-swap fraud works

    Once the scammer gains control of a number, the consequences are extensive. Attackers can access sensitive information, including personal documents, and request and receive password-reset links for the user’s other accounts. They can log in to WhatsApp or Telegram accounts, read private messages, impersonate the user, and even contact friends or family members to conduct further scams.

    The victims might see false messages posted in their names or fraudulent transactions made from their accounts. This can lead to financial loss, reputation damage, as well as emotional and mental health issues on the part of the victims.

    In the case of M&S, attackers apparently used this access to manipulate internal processes and gain access to sensitive systems. This highlights a broader risk: many companies still rely on phone numbers as a secondary verification method for staff, making their systems vulnerable to the same cyberattack used against individuals.

    How sim-swap fraud works.
    Hossein Abroshan

    Reducing the risk

    While real-time detection of mobile number hijacking remains difficult, taking specific steps can significantly reduce the likelihood of being targeted and victimised. People should avoid sharing personal data unnecessarily, especially across multiple platforms and, very importantly, on unknown or untrusted websites.

    Many attackers don’t obtain all the necessary information from a single source. Instead, they collect it incrementally, using public profiles, marketing databases and past leaks to form a comprehensive picture.

    Being mindful of where you share your phone number, birthday or other identifiers can make it harder for others to impersonate you. It is also crucial to learn how phishing works and how to recognise it, so you will not submit your sensitive information to phishing or fake websites.

    Avoiding SMS-based authentication, where possible, is another key step. Many services now support authenticator apps, such as Google Authenticator, Microsoft Authenticator, Duo or Authy, which are not tied to your mobile number. For mobile accounts themselves, setting up a unique PIN or password, which must be provided to authorise any changes, can add an extra layer of protection. This makes it harder for someone to initiate a sim swap without that code. However, users alone cannot carry this responsibility.
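    The reason authenticator apps resist sim swaps is that their codes are derived locally from a shared secret and the current time, and are never sent over the phone network. A minimal sketch of the time-based one-time password (TOTP) algorithm (RFC 6238) that such apps implement, using only Python’s standard library (the secret shown is the RFC’s published test value, not a real credential):

    ```python
    import base64
    import hashlib
    import hmac
    import struct
    import time


    def totp(secret_b32, for_time=None, digits=6, step=30):
        """Derive a time-based one-time password (RFC 6238).

        The code depends only on the shared secret and the clock, so it is
        generated offline on the device -- hijacking the phone number via a
        sim swap does not expose it, unlike an SMS code.
        """
        key = base64.b32decode(secret_b32.upper())
        # Number of 30-second intervals since the Unix epoch.
        counter = int(time.time() if for_time is None else for_time) // step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
        # from the low nibble of the last digest byte.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)


    # RFC 6238 test vector: this secret at time t=59 yields "287082".
    print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # → 287082
    ```

    The server holds the same secret and runs the same computation, so the two sides agree on the code without anything travelling over SMS.
    
    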

    Mobile network operators must strengthen identity verification practices, moving beyond basic questions about names and addresses that can be easily gathered or guessed. Banks and other financial institutions should reconsider using SMS – or at the very least, SMS alone – as the default method for sensitive authentication. And companies, particularly those handling personal data or financial assets, need to train their IT and customer service teams to recognise the signs of identity-based attacks.

    Sim-swap fraud is effective not because it’s highly technical, but because it exploits our trust in phone numbers for identity verification. The M&S case and similar examples show how fragile that trust can be – and why securing our mobile identities is no longer optional.

    Hossein Abroshan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. M&S cyberattacks used a little-known but dangerous technique – and anyone could be vulnerable – https://theconversation.com/mands-cyberattacks-used-a-little-known-but-dangerous-technique-and-anyone-could-be-vulnerable-256739

    MIL OSI – Global Reports

  • MIL-OSI Global: Southern Africa’s rangelands do many jobs, from feeding cattle to storing carbon: a review of 60 years of research

    Source: The Conversation – Africa – By Kevin Kirkman, Professor of Grassland Science, University of KwaZulu-Natal

    South Africa’s rangelands have always had great value for the country. These areas offer more than just grazing for livestock. They provide services like purifying water, storing carbon and conserving biodiversity.

    The grassland (28%), savanna (32.5%) and Nama-Karoo (19.5%) biomes are collectively referred to as rangelands. Together they make up almost 80% of the land area of South Africa.

    Their ecological services haven’t always been fully appreciated. Research into rangelands has evolved in response to environmental changes, human needs and scientific discoveries.

    Commercial livestock production was the main concern when academics, researchers and practitioners met for the first congress of the Grassland Society of Southern Africa in 1966. Less than 15% of South Africa’s land surface area is arable. The only agricultural production possible on the balance of the land is livestock production from natural rangeland. Livestock production is thus a cornerstone of agriculture and food production in South Africa.

    Six decades on, the Grassland Society has reflected – through a special issue of its journal, the African Journal of Range and Forage Science – on how it has tackled research challenges and adapted to shifting perceptions of rangelands.

    Research has explored aspects of global change, bush encroachment and other changes in rangeland composition and function. Land transformation is another research area. Peri-urban sprawl, open-cast mining, timber plantations and other developments reduce and fragment rangeland. The result is increased pressure on the remaining, intact rangelands.

    Widening scope

    A review of research over the 60 years shows that early efforts focused mainly on forage production to support livestock industries. Research topics included rotational grazing and burning, as well as reinforcing rangelands by adding nutrients, forage grasses and legumes.

    By the 1980s, it became clear that rangelands offered more than just grazing – they were vital ecosystems.

    In the early 1990s, around the onset of democracy in South Africa, local researchers became part of global conversations around rangeland ecology. In doing so, they started to use the international terminology, instead of the old Dutch-derived word “veld”.

    This shift was not just about geography, but about scope. Rangelands were increasingly seen as multifaceted ecosystems critical in the fight against climate change. Increasing temperatures, increasing atmospheric carbon dioxide levels and changing rainfall patterns pose a threat to all ecosystems. Understanding the response of rangelands is increasingly important in devising management strategies to adapt to these changes.

    Scientists expanded their attention to preserving soil health, restoring degraded landscapes, and maintaining biodiversity. Issues like overgrazing, soil erosion and invasive species gained recognition in southern Africa. Degradation of rangelands in South Africa was first highlighted in the mid-1700s, and became a “mainstream” issue in the 1930s. Replacing a diverse group of wild animals with a single species of grazer, such as cattle, is the reason generally given for degradation. Fire has also been linked to it (often unfairly).

    The Grassland Society responded by promoting ideas like adaptive grazing management (making decisions in response to conditions, rather than following a recipe approach). It also encouraged integrating indigenous knowledge with scientific research to create more sustainable and resilient land-use systems. This has helped shape land management practices across the region.

    Many southern African rangelands face the challenge of balancing grazing with biodiversity conservation. Research on conservation agriculture and integrating livestock and wildlife systems is helping farmers and conservationists to find common ground. Wildlife, both in the conservation and the game production contexts, plays a critical role in South Africa’s economy. Tourism is one of the major contributors.

    Land management is particularly important in the Mediterranean-climate regions of South Africa, where poor crop farming practices have damaged soil health. The research is guiding the development of more sustainable farming systems focused on soil regeneration and biodiversity.

    A key indicator of ecosystem degradation is a decline in grassland forbs (herbaceous plants that are not grasses). They are highly sensitive to grazing pressure. So the role of wildflowers in ecosystem health and animal wellbeing has also become an important research area.

    Climate change, fire suppression and overgrazing drive woody plant encroachment, where grasslands are turning into shrublands. This calls for integrated management approaches that consider fire, grazing and even controlled rewilding.

    Fire is a natural element in many grassland ecosystems, and research has helped advance understanding of how it can be monitored and controlled to reduce risks while promoting healthy rangelands.

    People and grasslands

    Rangeland management has important social dimensions. Research is addressing issues such as land tenure, governance, community management systems on communal rangelands and indigenous knowledge in management decisions. These topics are essential for creating sustainable solutions that account for people’s livelihoods and needs.

    In addition to these ecological, social and management advances, the Grassland Society of Southern Africa has worked to develop the next generation of rangeland scientists and practitioners. Through its congresses, workshops and journal publications, the society continues to foster dialogue across disciplines and communities. Its 60th congress will be held in July 2025.

    Kevin Kirkman receives funding from the National Research Foundation.

    Helga van der Merwe receives funding from the National Research Foundation.

    Craig Morris does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Southern Africa’s rangelands do many jobs, from feeding cattle to storing carbon: a review of 60 years of research – https://theconversation.com/southern-africas-rangelands-do-many-jobs-from-feeding-cattle-to-storing-carbon-a-review-of-60-years-of-research-254736

    MIL OSI – Global Reports

  • MIL-OSI Global: Light is the science of the future – the Africans using it to solve local challenges

    Source: The Conversation – Africa – By Andrew Forbes, Professor, University of the Witwatersrand

    Light-based technologies have wide practical applications. Wikimedia Commons, CC BY

    Light is all around us, essential for one of our primary senses (sight) as well as life on Earth itself. It underpins many technologies that affect our daily lives, including energy harvesting with solar cells, light-emitting-diode (LED) displays and telecommunications through fibre optic networks.

    The smartphone is a great example of the power of light. Inside the box, its electronic functionality works because of quantum mechanics. The front screen is an entirely photonic device: liquid crystals controlling light. The back too: white light-emitting diodes for a flash, and lenses to capture images.

    We use the word photonics, and sometimes optics, to capture the harnessing of light for new applications and technologies. Their importance in modern life is celebrated every year on 16 May with the International Day of Light.

    Scientists on the African continent, despite the resource constraints they work under, have made notable contributions to photonics research. Some of these have been captured in a recent special issue of the journal Applied Optics. Along with colleagues in this field from Morocco and Senegal, we introduced this collection of papers, which aims to celebrate excellence and show the impact of studies that address continental issues.

    A spotlight on photonics in Africa

    Africa’s history in formal optics stems back thousands of years, with references to lens design already recorded in ancient Egyptian writings.

    In more recent times, Africa has contributed to two Nobel prizes based on optics. Ahmed Zewail (Egyptian-born) watched the ultrafast processes in chemistry with lasers (1999, Nobel Prize in Chemistry) and Serge Haroche (Moroccan-born) studied the behaviour of individual particles of light, photons (2012, Nobel Prize in Physics).

    Unfortunately, the African optics story is one of pockets of excellence. The highlights are as good as anywhere else, but there are too few of them to put the continent on the global optics map. According to a 2020 calculation done for me by the Optical Society of America, based on their journals, Africa contributes less than 1% to worldwide journal publications with optics or photonics as a theme.

    Yet there are great opportunities for meeting continental challenges using optics. Examples of areas where Africans can innovate are:

    • bridging the digital divide with modern communications infrastructure

    • optical imaging and spectroscopy for improvements in agriculture and monitoring climate changes

    • harnessing the sun with optical materials for clean energy

    • bio-photonics to solve health issues

    • quantum technologies for novel forms of communicating, sensing, imaging and computing.

    The papers in the special journal issue touch on a diversity of continent-relevant topics.

    One is on using optics to communicate across free-space (air) even in bad weather conditions. This light-based solution was tested using weather data from two African cities, Alexandria in Egypt and Setif in Algeria.

    Another paper is about tiny quantum sources of quantum entanglement for sensing. The authors used diamond, a gem found in South Africa and more commonly associated with jewellery. Diamond has many flaws, one of which can produce single photons as an output when excited. The single photon output was split into two paths, as if the particle went both left and right at the same time. This is the quirky notion of entanglement, in this case, created with diamonds. If an object is placed in any one path, the entanglement can detect it. Strangely, sometimes the photons take the left-path but the object is in the right-path, yet still it can be detected.




    Read more:
    Quantum entanglement: what it is, and why physicists want to harness it


    One contributor proposes a cost-effective method to detect and classify harmful bacteria in water.

    New approaches in spectroscopy (studying colour) for detecting cell health; biosensors to monitor salt and glucose levels in blood; and optical tools for food security all play their part in optical applications on the continent.

    Another area of African optics research that has important applications is the use of optical fibres for sensing the quality of soil and its structural integrity. Optical fibres are usually associated with communication, but a modern trend is to use the existing optical fibre already laid to sense for small changes in the environment, for instance, as early warning systems for earthquakes. The research shows that conventional fibre can also be used to tell if soil is degrading, either from lack of moisture or some physical shift in structure (weakness or movement). It is an immediately useful tool for agriculture, building on many decades of research.

    The diverse range of topics in the collection shows how creative researchers on the continent are in using limited resources for maximum impact. The high orientation towards applications is probably also a sign that African governments want their scientists to work on solutions to real problems rather than purely academic questions. A case in point is South Africa, which has a funded national strategy (SA QuTI) to turn quantum science into quantum technology and train the workforce for a new economy.

    Towards a brighter future

    For young science students wishing to enter the field, the opportunities are endless. While photonics has no discipline boundaries, most students enter through the fields of physics, engineering, chemistry or the life sciences. Its power lies in the combination of skills, blending theoretical, computational and experimental, that are brought to bear on problems. At a typical photonics conference there are likely to be many more industry participants than academics. That’s a testament to its universal impact in new technologies, and the employment opportunities for students.

    The last century was based on electronics and controlling electrons. This century will be dominated by photonics, controlling photons.

    Professor Zouheir Sekkat of University Mohamed V, Rabat, and director of the Pole of Optics and Photonics within MAScIR of University Mohamed VI Polytechnic Benguerir, Morocco, contributed to this article.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Light is the science of the future – the Africans using it to solve local challenges – https://theconversation.com/light-is-the-science-of-the-future-the-africans-using-it-to-solve-local-challenges-256031

    MIL OSI – Global Reports

  • MIL-OSI Global: Social platform Stocktwits and other sources of ‘alternative data’ may be hurting financial analysts’ long-term forecasts

    Source: The Conversation – France – By Thierry Foucault, Professeur de Finance, HEC Paris Business School

    Since the beginning of the century, the number of satellites orbiting Earth has increased more than 800%, from fewer than 1,000 to more than 9,000. This profusion has had a number of strange and disturbing repercussions. One of them is that companies are selling data from satellite images of parking lots to financial analysts. Analysts then use this information to help gauge a store’s foot traffic, compare a retailer to competitors and estimate its revenue.

    This is just one example of the new information, or “alternative data”, that is now available to analysts to help them make their predictions about future stock performance. In the past, analysts would make predictions based on firms’ public financial statements.



    According to our research, the plethora of new sources of data has improved short-term predictions but worsened long-term analysis, which could have profound consequences.

    Tweets, twits and credit card data

    In a paper on alternative data’s effect on financial forecasting, we counted more than 500 companies that sold alternative data in 2017, a number that ballooned from fewer than 50 in 1996. Today, the alternative data broker Datarade lists more than 3,000 alternative datasets for sale.

    In addition to satellite images, sources of new information include Google, credit card statistics and social media such as X or Stocktwits, a popular X-like platform where investors share ideas about the market. For instance, Stocktwits users share charts showing the evolution of the price of a given stock (e.g. Apple stock) and explanations of why the evolution predicts a price increase or decrease. Users also mention the launch of a new product by a firm and whether it makes them bullish or bearish about the firm’s stock.

    Using data from the Institutional Brokers’ Estimate System (I/B/E/S) and regression analyses, we measured the quality of 65 million forecasts made by equity analysts from 1983 to 2017, comparing analysts’ predictions with the actual earnings per share of companies’ stock.
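    The comparison described above can be illustrated with a toy calculation (synthetic numbers, not I/B/E/S data; the paper's exact error specification may differ): forecast quality improves as the gap between the predicted and the realised earnings per share shrinks.

```python
# Toy illustration of forecast accuracy: the absolute error between an
# analyst's EPS forecast and the realised EPS, scaled by the realised
# value so errors are comparable across firms. Synthetic numbers only.

def forecast_error(predicted_eps, actual_eps):
    """Absolute percentage error of one EPS forecast."""
    return abs(predicted_eps - actual_eps) / abs(actual_eps)

# Hypothetical forecasts for one firm: a short-term and a long-term call.
short_term = forecast_error(predicted_eps=2.10, actual_eps=2.00)  # 5% off
long_term = forecast_error(predicted_eps=3.50, actual_eps=2.50)   # 40% off

print(f"short-term error: {short_term:.0%}")  # 5%
print(f"long-term error:  {long_term:.0%}")   # 40%
```

    A larger error on the long-term call than on the short-term call, averaged over millions of forecasts, is the pattern the study reports.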

    We found, as others had, that the availability of more data explains why stock analysts have become progressively better at making short-term projections. We went further, however, by asking how this alternative data affected long-term projections. And we found that over the same period that saw a rise in accuracy of short-term projections, there was a drop in validity of long-term forecasts.

    More data, but limited attention

    Because of its nature, alternative data – information about firms in the moment – is useful mostly for short-term forecasts. Longer-term analysis – looking one to five years into the future – is a far more consequential judgment.

    Previous papers have proved the common-sense proposition that analysts have a limited amount of attention. If analysts have a large portfolio of firms to cover, for example, spreading their concentration across them yields diminishing returns.

    We wanted to know whether the increased accuracy of short-term forecasts and declining accuracy of long-term predictions – which we had observed in our analysis of the I/B/E/S data – was due to a concomitant proliferation of alternative sources for financial information.

    To investigate this proposition, we analyzed all discussions of stocks on Stocktwits that took place between 2009 and 2017. As might be expected, certain stocks like Apple, Google or Walmart generated much more discussion than those of small companies that aren’t even listed on the Nasdaq.

    We conjectured that analysts who followed stocks that were heavily discussed on the platform – and so, who were exposed to a lot of alternative data – would experience a larger decline in the quality of their long-term forecasts than analysts who followed stocks that were little discussed. And after controlling for factors such as firms’ size, years in business and sales growth, that’s exactly what we found.

    We inferred that because analysts had easy access to information for short-term analysis, they directed their energy there, which meant they had less attention for long-term forecasting.

    The broader consequences of poor long-term forecasting

    The consequences of this inundation of alternative data may be profound. When assessing a stock’s value, investors must take into account both short- and long-term forecasts. If the quality of long-term forecasts deteriorates, there is a good chance that stock prices will not accurately reflect a firm’s value.

    Moreover, a firm would like to see the value of its decisions reflected in the price of its stock. But if a firm’s long-term decisions are incorrectly taken into account by analysts, it might be less willing to make investments that will only pay off years down the line.

    In the mining industry, for instance, it takes time to build a new mine: an investment may take nine or ten years to start producing cash flows. Companies might be less willing to make such investments if their stocks are undervalued because market participants have less accurate forecasts of these investments’ impacts on firms’ cash flows – the subject of another paper we are working on.

    The example of investment in carbon reduction is even more alarming. That kind of investment also tends to pay off in the long run, when global warming will be an even bigger issue. Firms may have less incentive to make the investment if the worth of that investment is not quickly reflected in their valuation.

    Practical applications

    The results of our research suggest that it might be wise for financial firms to separate teams that research short-term results and those that make long-term forecasts. This would alleviate the problem of one person or team being flooded with data relevant to short-term forecasting and then also expected to research long-term results. Our findings are also noteworthy for investors looking for bargains: though there are downsides to poor long-term forecasting, it could present an opportunity for those able to identify undervalued firms.

    Thierry Foucault has received funding from the European Research Council (ERC).

    ref. Social platform Stocktwits and other sources of ‘alternative data’ may be hurting financial analysts’ long-term forecasts – https://theconversation.com/social-platform-stocktwits-and-other-sources-of-alternative-data-may-be-hurting-financial-analysts-long-term-forecasts-244102


  • MIL-OSI Global: Assisted dying bill: religious MPs were more likely to oppose law change in first round of voting

    Source: The Conversation – UK – By David Jeffery, Senior Lecturer in British Politics, University of Liverpool

    MPs are due to vote for a second time on the Terminally Ill Adults (End of Life) Bill in parliament – a law that would legalise assisted suicide in England and Wales.

    The third reading stage will take place after a debate on Friday May 16 and will test MPs’ commitment to a change they initially supported at second reading in November 2024. In this first vote, the bill passed with 331 votes to 276 (with 35 abstentions), but in subsequent stages the process has been more controversial. Emotions are running high and pressure groups have been vocal on both sides.

    As with many issues of morality, this is a free vote – MPs are not told what to do by their party. And after the second reading in November, MPs could, and did, give a range of reasons for how they voted, including their own experiences of loved ones’ final days, discussions with constituents, the experiences of other countries with assisted suicide – and also their religious views.




    In that first vote, there were clear patterns in voting relating to religious affiliation. MPs with no religion were much more likely to support assisted dying.

    In this group, 76% voted for, while just 18% voted against. Christian MPs overall were more likely to oppose the bill, with 57% voting against; the most pronounced opposition came from Catholics, 74% of whom voted against.

    Muslim MPs were even more likely to vote against, with 84% of them on the no side. Jewish and Sikh MPs were both roughly twice as likely to support the bill as to oppose it, whereas Hindu MPs were more likely to oppose than support by the same margin. The one Buddhist MP – Suella Braverman – voted against.

    Beyond their own demographic, political or religious position, the views of their constituents are also expected to influence how MPs vote. To explore this, I conducted a regression analysis (a statistical method to find a relationship between factors) that included a range of constituency variables, such as the proportion of white residents and the percentage of each religious group (along with those identifying as non-religious).

    I also considered the percentage of constituents with no formal qualifications, graduates, and those reporting some form of disability. In the full model, which incorporated all these variables, none of the religious variables were found to be statistically significant, suggesting that localised religious lobbying did not have a measurable effect on MPs’ voting behaviour.
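    The kind of regression described above can be sketched in miniature. This is a toy illustration with synthetic data, not the author's actual model or dataset: an MP's vote (1 = for, 0 = against) is regressed on constituency characteristics, and a variable with no real effect should come out with a coefficient near zero.

```python
import numpy as np

# Toy version of a constituency-level regression: regress MPs' votes on
# constituency traits. All data here is synthetic and the variable names
# are illustrative -- the real analysis used census and division data.
rng = np.random.default_rng(0)
n = 500

pct_disabled = rng.uniform(5, 25, n)    # % residents reporting disability
pct_religious = rng.uniform(20, 80, n)  # % residents of a religious group

# Simulate votes in which only the disability share matters (mirroring
# the finding that religious shares were not statistically significant).
p = 0.2 + 0.02 * pct_disabled
votes = (rng.uniform(0, 1, n) < p).astype(float)

# Linear probability model fitted by ordinary least squares.
X = np.column_stack([np.ones(n), pct_disabled, pct_religious])
beta, *_ = np.linalg.lstsq(X, votes, rcond=None)

print(f"disability coefficient:  {beta[1]:.3f}")  # close to the true 0.02
print(f"religiosity coefficient: {beta[2]:.3f}")  # close to zero
```

    With enough constituencies, the fitted coefficient on the variable that truly drives the vote is recovered, while the irrelevant one stays near zero – the same logic behind reporting which constituency variables were or were not significant.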

    However, an interesting finding is that MPs with a higher proportion of disabled people in their constituency were more likely to vote for assisted dying. It is not clear whether this relationship is causal – suggesting they had been lobbied by constituents to support the bill – or simply a correlation arising because disabled people are more likely to live in Labour constituencies.

    How MPs voted on assisted dying, November 2024

    Characteristic Overall Yes No Abstain
    Total 642 331 (52%) 276 (43%) 35 (5%)
    Female 261 143 (55%) 107 (41%) 11 (4.2%)
    Minority ethnic MP 90 30 (33%) 57 (63%) 3 (3.3%)
    LGBT 71 49 (69%) 18 (25%) 4 (5.6%)
    Elected As
    Labour 411 236 (57%) 155 (38%) 20 (4.9%)
    Conservative 121 23 (19%) 93 (77%) 5 (4.1%)
    Liberal Democrat 72 61 (85%) 11 (15%) 0 (0%)
    Scottish National Party 9 0 (0%) 0 (0%) 9 (100%)
    Independent 6 0 (0%) 6 (100%) 0 (0%)
    Democratic Unionist Party 5 0 (0%) 5 (100%) 0 (0%)
    Reform UK 5 3 (60%) 2 (40%) 0 (0%)
    Green Party 4 4 (100%) 0 (0%) 0 (0%)
    Plaid Cymru 4 3 (75%) 1 (25%) 0 (0%)
    Social Democratic & Labour Party 2 1 (50%) 0 (0%) 1 (50%)
    Alliance 1 0 (0%) 1 (100%) 0 (0%)
    Traditional Unionist Voice 1 0 (0%) 1 (100%) 0 (0%)
    Ulster Unionist Party 1 0 (0%) 1 (100%) 0 (0%)
    MP Religion
    None 234 179 (76%) 43 (18%) 12 (5.1%)
    Christian (all) 351 132 (38%) 199 (57%) 20 (5.7%)
    Catholic 35 7 (20%) 26 (74%) 2 (5.7%)
    Muslim 25 2 (8.0%) 21 (84%) 2 (8.0%)
    Jewish 13 8 (62%) 4 (31%) 1 (7.7%)
    Sikh 12 8 (67%) 4 (33%) 0 (0%)
    Hindu 6 2 (33%) 4 (67%) 0 (0%)
    Buddhist 1 0 (0%) 1 (100%) 0 (0%)

    Note: the vote tallies differ from those given by the parliament website because I have included tellers for both sides, and have counted MPs who voted in both lobbies as abstentions.

    In the first vote, female MPs were slightly more likely to vote for assisted dying than against it. LGBT MPs leaned heavily towards support (with 69% voting in favour of the law change). And minority ethnic MPs leaned heavily in the opposite direction – with 63% voting against.

    Perhaps predictably, given the prime minister’s open support for assisted dying, Labour MPs supported the bill, with 57% voting in favour and 38% against.

    The Liberal Democrats were overwhelmingly supportive – 85% backed it – whereas 77% of Conservative MPs voted against. All Northern Irish unionist parties – as well as the independent unionist MP – voted against the bill, with no abstentions.

    Reform UK MPs were split, with two against and three in favour (albeit one of the three, the now-suspended Rupert Lowe, only after a survey of his own constituents).

    But there is an interesting story unfolding on the left of politics. The 2024 general election saw challenges to Labour from both the Green Party and so-called Gaza independents. In this free vote, we see the contrasting social views between these two groups play out.

    All Green MPs supported assisted dying, while all Gaza independents – and Jeremy Corbyn – opposed it. This divide echoes Maria Sobolewska and Robert Ford’s framework in Brexitland, which distinguishes between “conviction identity liberals” and “ethnic minority ‘necessity liberals’”.

    The latter group aligns with conviction liberals on issues of discrimination due to self-interest, but often diverges on broader socially liberal issues such as assisted dying. Issues like assisted dying lay bare the tensions within this coalition.

    Identifying religion in parliament

    Religion is a personal matter, so there is no official database recording the religious affiliation of MPs. It is therefore often impossible to test how religious views interact with voting behaviour. To address this gap, I built a dataset using a three-step methodology to determine MPs’ religious affiliation.

    Among MPs (excluding the Speaker and Sinn Fein MPs, who don’t take their seats), 54.7% (351) are Christian, including 5.5% (35) who are Catholic; 36.4% (234) have no religion; 3.9% (25) are Muslim; 2% (13) are Jewish; 1.9% (12) are Sikh; 0.9% (6) are Hindu; and 0.2% (1) is Buddhist.

    To work this out, I look first at whether an MP is a member of a religiously based group, such as Christians in Parliament; if so, they are classified as belonging to that religion. Second, if an MP has publicly stated their religious beliefs – say, in a speech or interview – they are also classified accordingly.

    Labour MP John Healey is sworn in with a bible.
    Flickr/UK Parliament, CC BY-NC-ND

    These first two steps, however, cover only a fraction of MPs. Fortunately, all MPs are required to take an oath of allegiance to the Crown when sworn in. This oath can be made on a religious text or as a non-religious affirmation, and crucially MPs can choose which text to swear on, making this decision a meaningful and publicly visible indication of belief.

    That brings us to step three: the religious text (or lack thereof) used in the swearing-in ceremony is taken as an additional source of evidence for classification.

    These three sources are used in order of priority. For example, Tim Farron is a member of Christians in Parliament and has spoken openly about his faith, yet he chose to affirm without using a religious text. Even so, he is classified as Christian based on the first two criteria.
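    The three-step priority rule described above can be sketched as a simple cascade: the first source that yields a classification wins, and later sources are only consulted when earlier ones are silent. The function and field names below are illustrative, not taken from the author's dataset.

```python
# Sketch of the three-step classification: sources are consulted in
# priority order, and the first that yields a religion is used.
# Field names are hypothetical placeholders for the real data sources.

def classify_religion(mp):
    """Return an MP's religious classification, or 'None' (no religion)."""
    # Step 1: membership of a religiously based parliamentary group.
    if mp.get("religious_group_membership"):
        return mp["religious_group_membership"]
    # Step 2: public statements of faith (speeches, interviews).
    if mp.get("public_statement"):
        return mp["public_statement"]
    # Step 3: the text (or non-religious affirmation) used at swearing-in.
    return mp.get("oath_text", "None")

# Tim Farron: group member and public Christian, but affirmed without a
# religious text -- steps 1 and 2 outrank step 3, so he is Christian.
farron = {
    "religious_group_membership": "Christian",
    "public_statement": "Christian",
    "oath_text": "None",
}
print(classify_religion(farron))  # Christian
```

    An MP known only from their swearing-in choice falls through to step 3, which is how the bulk of the classifications were made.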

    What has been particularly interesting in this case is the different voting patterns between Christian groups. I was able to set these groups apart because when MPs swear in, Catholics usually request specific versions of the Bible – such as the New Jerusalem Bible – whereas others might simply ask for “the Bible” and are given the King James Version.

    Treating Catholics as a distinct category allows for greater nuance in the analysis of the religious composition of parliament. A full breakdown of the religion of MPs, and the data used for this project, can be found here.

    We’ll soon be able to see how these markers interact with voting in the third reading.

    David Jeffery does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Assisted dying bill: religious MPs were more likely to oppose law change in first round of voting – https://theconversation.com/assisted-dying-bill-religious-mps-were-more-likely-to-oppose-law-change-in-first-round-of-voting-256503


  • MIL-OSI Global: Do people really want to know their risk of getting Alzheimer’s?

    Source: The Conversation – UK – By Claudia Cooper, Professor of Psychological Medicine, Queen Mary University of London

    Tricky Shark/Shutterstock.com

    A new study has highlighted the complex emotions and ethical dilemmas of learning your future risk of Alzheimer’s disease. Among 274 healthy research participants from the US aged 65 and over, 40% declined to receive their personal risk estimates – despite having initially expressed an interest in doing so.

    These risk estimates were based on demographic data, brain imaging and blood biomarkers, offering 82% to 84% accuracy in predicting the likelihood of developing Alzheimer’s disease within five years. By comparison, age alone predicts this risk with 79% accuracy.

    So the value of these tests is modest in people without any cognitive symptoms, and there are potential risks to disclosing them. People told they are at increased risk of dementia describe how this can feel like an illness in itself – or being in limbo between health and disease – and cause distress.

    Participants who did not want to be tested cited the uncertainty of the result, the burden of knowing, and their negative experiences of witnessing Alzheimer’s disease in others. Those with a family history of Alzheimer’s were less likely to want to know their results – perhaps because of greater exposure to these negative experiences.




    Black participants were less likely to want to know, too, which the researchers suggest could relate to greater experiences of stress, stigma and discrimination, making the prospect of a positive test result feel more threatening.

    Perhaps the question here is not why more people didn’t want to know the result, but whether researchers should routinely offer results at all, given how uncertain they are and the potential for distress.

    Another issue is their limited usefulness for people without symptoms. Addressing lifestyle risk factors, such as eating a healthy diet and getting regular exercise, can reduce cognitive decline, a message the public is increasingly aware of. But knowing your risk doesn’t change the advice.

    In contrast to areas like breast cancer, where people at high risk of the disease can be offered preventative measures, such as drugs, surgery or enhanced screening, there are no comparable interventions to reduce dementia risk in people without symptoms.

    The authors of the new study explain that researchers used to be cautious about not sharing test results with participants in Alzheimer’s studies. But now there’s a growing expectation that people will be given their results. A proposed “bill of rights” for dementia research participants includes the right to get their results and have them clearly explained.

    It’s hard to explain how uncertain these results can be. People often worry about getting dementia in general, not just Alzheimer’s, which makes up about two-thirds of all cases. Some people who are told they have a low risk of Alzheimer’s may still develop another form of dementia, such as vascular dementia.

    The wider science that produced these future risk estimates has enabled the development of new diagnostic technologies unimaginable ten years ago. Similar blood tests can detect Alzheimer’s disease pathology in people with cognitive symptoms with over 90% accuracy, potentially enabling more accurate and timely dementia diagnoses.

    Blood tests

    Two major UK research programmes are piloting these blood tests in the NHS to support the more accurate diagnoses of some forms of dementia, including Alzheimer’s disease. Improved and earlier detection is needed: a third of people with dementia in England and Northern Ireland are never diagnosed.

    The benefits of the first drugs to slow the progression of Alzheimer’s disease are modest. In the UK, the National Institute for Health and Care Excellence hasn’t yet been convinced that these drugs are worth the cost for the NHS.

    The NHS is trialling blood tests to spot early signs of Alzheimer’s.
    AntonSAN/Shutterstock.com

    Some might question a focus on identifying future risks for dementia before we have good treatments. But developing better treatments depends on the new scientific discoveries that are helping us detect Alzheimer’s earlier. Finding a treatment for an illness requires a detailed understanding of how that illness develops.

    We are closer to delivering accurate detection of Alzheimer’s disease than curative treatment. This presents a dilemma of how much to know about personal risk. Rights-based approaches place this dilemma with participants, who decide whether to know, rather than with researchers, who decide whether to tell.

    For researchers, disclosing results compassionately and clearly is difficult, and for some participants the knowledge will cause distress, however well it is conveyed. The option to receive results should come with warnings.

    Claudia Cooper receives funding from the National Institute for Health and Care Research (NIHR) Dementia and Neurodegeneration Policy Research Unit (NIHR206110) and is supported by an NIHR Senior Investigator award (NIHR205009). The views expressed are those of the author and not necessarily those of the NIHR, the NHS or the Department of Health and Social Care. She received funding from ESRC/NIHR for the APPLE-Tree secondary dementia prevention programme from 2019-24 (ES/S010408/1). She works as a Professor of Psychological Medicine at Wolfson Institute of Population Health, Queen Mary University of London.

    ref. Do people really want to know their risk of getting Alzheimer’s? – https://theconversation.com/do-people-really-want-to-know-their-risk-of-getting-alzheimers-256340


  • MIL-OSI Global: Bitter Honey by Lola Akinmade Åkerström explores how mothers carry their histories into their daughters’ lives

    Source: The Conversation – UK – By Olumayokun Ogunde, PhD Candidate in English, City St George’s, University of London

    In Bitter Honey, novelist Lola Akinmade Åkerström explores the emotional undercurrents of motherhood and daughterhood. The novel reflects on how the past bears down on the present. How mothers carry their histories into their daughters’ lives – often uninvited, sometimes unrecognised.

    My research is concerned with narratives that crack open the heart of African motherhood, stories that strive not only to expose pain, but to understand it. Bitter Honey gestures towards this emotional terrain.

    One particular line is emblematic of this exploration: “‘When I was your age, I moved to Sweden without my mother. With nobody.’ Tina has heard this story a million times.” It captures both the weariness of inherited trauma and the fragility of the desire for understanding that threads through the novel.

    Bitter Honey begins with the promise of protagonist Tina’s rising stardom. We meet her alone in a dressing room, navigating fame and the sudden reappearance of her absentee father; her story has all the markings of a Bildungsroman (a coming-of-age novel shaped by psychological and moral growth). But the novel’s emotional nucleus is not fame, nor even fatherhood – it’s Tina’s mother, Nancy. Or at least, it wants to be.




    Nancy’s story is one of deep and curdled regret. Akinmade crafts a portrait of a woman who once stood at the cusp of a glamorous new world, having fallen in love with Malik, an ambassador’s son who offers her access to elite circles, state dinners and the Swedish prime minister. But it is Lars, her white Swedish professor, who slowly unpicks the seams of her life.

    The novel promises a sense of romantic tension, inviting the reader to feel torn between Malik’s genuine warmth and Lars’s sophistication. But no such ambivalence materialises.

    Lars is not charming. He is jealous, controlling and ultimately predatory. Akinmade’s portrayal of Lars makes it clear: he is not a romantic dilemma, he is a colonising force. Nancy’s life with him is one of slow suffocation, and her daughter Tina is born of that rupture.

    Throughout the novel, there are subtle allusions and at times more overt depictions of Tina’s struggle with her mixed heritage. However, these moments feel overwritten, particularly in lines such as Tina’s desire to “fully wear her mixed skin”.

    While the phrasing may aim for poetic resonance, for me, it comes across as reductive. The metaphor inadvertently simplifies a complex and embodied experience, raising uneasy questions. Can identity be worn? Is it something that can be adorned, removed or chosen at will?

    Akinmade appears to be engaging with the constructedness of race and the illusion of agency within African diasporic identity. But Tina’s exploration of these themes lacks depth. There remains a striking incongruity between how she understands herself and how the world perceives her.

    At times her lack of critical self-awareness is jarring, particularly when set against the more richly developed and emotionally layered portrayal of Nancy.

    Love and regret

    Where Akinmade excels is in her rendering of Nancy. Her character is more vividly drawn, more emotionally accessible than Tina’s. We see her consumed by grief and fear, mothering from a place of survival rather than nurture.

    “She would have resisted him. Even if it meant Tobias and Tina vanishing into thin air, never existing.” This is the agonising truth of Nancy’s life: her children are reminders of her own loss of agency. Her love is knotted with regret.

    There’s an urgent question running through Bitter Honey. What does it mean to parent when your life has been violently derailed by structures beyond your control?

    This legacy of cultural dislocation is a theme Akinmade touches on but stops short of fully exploring. Nancy, as an immigrant mother, carries a kind of preemptive grief. Her decisions are shaped not just by personal trauma but by a constant anticipation of harm. The immigrant mother often exists in survival mode, where care is expressed not through softness, but vigilance.

    “You figured I have no agency without him?” A line Tina delivers in a moment of confrontation typifies the novel’s uneven dialogue. Akinmade at times stumbles into phrasing that feels stilted or overwrought, reducing what could be moments of real emotional depth into awkward exchanges. Yet her broader ambition, to map generational wounds and diasporic complexity, is clear.

    The novel’s scope is wide. We move between Sweden and the United States, from the 1970s to 2006, witnessing how each locale produces different shades of diasporic identity.

    Akinmade is particularly attuned to how Gambian communities shift across contexts – Gambians in Sweden are not like those in London or in New York. This specificity highlights that place informs not only experience but the perception of self.

    Ultimately, Bitter Honey is at its most compelling when it slows down, when it allows Nancy’s grief to speak plainly. One of the novel’s most poignant lines arrives when Nancy warns Tina before she signs with an American label that brands her the “Swedish siren”.

    “The world gives you your heart’s desires, then violently rips it away from your hands when you’re most vulnerable. Please stay vigilant.” Here, Akinmade captures the cruel irony of diasporic ambition, the way success can echo colonial exploitation, offering visibility at the cost of safety.

    Through Tina, the reader is kept at a remove from the raw reality of Nancy. The moments where we begin to glimpse the true texture of her life, her regret, her protectiveness, her survival, are all too fleeting.

    What would their lives look like without this fear? This is the novel’s quiet, unanswered question. Are these maternal guardrails protection or shackles? Bitter Honey doesn’t offer a resolution. But in asking, it reveals the aching legacy that mothers like Nancy pass down: not just trauma, but the impossible task of surviving without softness.

    Olumayokun Ogunde does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Bitter Honey by Lola Akinmade Åkerström explores how mothers carry their histories into their daughters’ lives – https://theconversation.com/bitter-honey-by-lola-akinmade-akerstrom-explores-how-mothers-carry-their-histories-into-their-daughters-lives-254527


  • MIL-OSI Global: Mrs Dalloway at 100: Virginia Woolf’s timeless novel is a work of pandemic fiction

    Source: The Conversation – UK – By Anna Snaith, Professor of Twentieth-Century Literature, King’s College London

    Virginia Woolf’s Mrs Dalloway, set on a June day in 1923, is unusual in that its two protagonists – society hostess Clarissa Dalloway and shell-shocked veteran Septimus Smith – never meet.

    Published 100 years ago on May 14 1925, the novel follows Clarissa as she prepares to host a party. She is visited by a former suitor, Peter Walsh, who has just returned from India. Her movements on London’s streets are intertwined with those of her husband, Richard, and daughter, Elizabeth, as well as a host of minor characters.

    Simultaneously, Septimus is experiencing what we would now understand as post-traumatic stress disorder (PTSD) caused by his service in the first world war. His sense of London as an apocalyptic war zone is exacerbated by his treatment at the hands of his doctors and their refusal to “hear” his trauma.

    Mrs Dalloway has inspired and continues to inspire numerous creative responses and reworkings, such as Michael Cunningham’s novel The Hours (1998) and Wayne McGregor’s triptych ballet Woolf Works (2015). The novel now has its own biography by Mark Hussey due to be published next month and DallowayDay celebrations that echo James Joyce’s Bloomsday.

    A century on, Mrs Dalloway speaks in so many ways to our own moment of militarisation, neo-imperialism and political crisis. In her diary, Woolf wrote that she wanted to “criticise the social system and to show it at work” and the novel offers an often excoriating critique of the military industrial complex of interwar Britain.


    This article is part of Rethinking the Classics. The stories in this series offer insightful new ways to think about and interpret classic books and artworks. This is the canon – with a twist.


    In her representation of returned soldier Septimus Smith’s PTSD, Woolf complicates the characters’ refrain that the “war is over” and the collective refusal to acknowledge the trauma of trench warfare. She was ahead of her time as a woman writing about war, and in her literary depiction of the term and experience of “shell shock” so soon after the conflict, when the condition was still widely dismissed as cowardice or malingering.

    Septimus’s trauma connects to the unspecified “illness” experienced by Clarissa, wife of a Conservative MP, preparing to host a party that evening. Woolf takes this privileged figure, who appears in her first novel The Voyage Out (1915) as a satirical cameo, and in this iteration offers the reader her rich inner life: her complex stream of thoughts, sensations and philosophical musings.

    The original book jacket.
    Wiki Commons

    Woolf’s acquaintance Kitty Maxse may have been the model for Clarissa. Kitty fell down the stairs to her death, raising the possibility of suicide. Instead, Woolf has Septimus commit suicide when he is faced with the threat of incarceration and the “rest cure”. News of the tragedy interrupts Clarissa’s party, but she understands his act: “Death was defiance. Death was an attempt to communicate … Somehow it was her disaster – her disgrace.”

    Clarissa feels herself, like Septimus, to be expendable: “She had the oddest sense of being herself invisible; unseen; unknown; there being no more marrying; no more having of children … this being Mrs Dalloway; not even Clarissa anymore.”

    Clarissa is 52 and, while the menopause is not mentioned directly, Woolf touches here in such a prescient way on the medicalisation and pathologising of women’s health. The novel is radical in its centring of a middle-aged protagonist – the novel form bends as it is uncoupled from the marriage plot. Woolf’s complex treatment of ageing – “she felt very young; at the same time unspeakably aged” – and the sense of both loss and possibility is acutely felt.

    Clarissa’s conformity to social expectations includes the suppression of her queer desires. Alone in her upstairs room, she reminisces about her “falling in love with women” and more specifically, her kiss with Sally Seton: “the most exquisite moment of her whole life … the whole world might have turned upside down!” Again, in her representation of queer lives, Woolf overturned the status quo.

    Mrs Dalloway and the pandemic

    In its engagement with feminist and queer politics, then, the novel has enduring appeal. But its post-COVID recognition as a pandemic novel means it has been read afresh by a whole new audience. Woolf and Clarissa are both survivors of the post-first world war influenza pandemic (known as the Spanish flu), which infected a third of the global population and caused an estimated 50-100 million deaths.

    We learn that Clarissa had “grown very white since her illness”, “her heart, affected, they said, by influenza”. Her sheer joy at walking London’s summer streets and mixing with crowds of passersby is a legacy of the pandemic as is the sense of loss and tolling of bells that echo through the novel.

    Critic Elizabeth Outka in Viral Modernism: the Influenza Pandemic and Interwar Literature (2019) has read the pandemic into the novel’s mobile and multifarious perspective.

    [It has] a narrative perspective that could move as nimbly among bodies as a virus, a plot defined less by linear timelines and more by temporal and experiential fluidity, and a structure that could express the delirious, hallucinatory reality that infused the culture.

    Clarissa has a poignant sense of the horror (“it was very, very dangerous to live even one day”) and joy (“in the triumph and the jingle … was what she loved; life; London; this moment of June”) of existence.

    The legacy of the war is present not only in Septimus’s trauma but in a wider civilian trepidation. In one scene, a skywriting aeroplane recalls the aerial and aural threat of wartime air raids over London. In another, a backfiring car sounds to Clarissa like a “violent explosion” or a pistol shot.

    The novel registers the collective trauma of war while finding solace in the noisy, connective dynamism and diversity of urban life. Perhaps it is in Woolf’s acknowledgement of both the enormity and the minutiae of daily existence that this novel continues to speak to a contemporary readership.


    Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


    Anna Snaith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Mrs Dalloway at 100: Virginia Woolf’s timeless novel is a work of pandemic fiction – https://theconversation.com/mrs-dalloway-at-100-virginia-woolfs-timeless-novel-is-a-work-of-pandemic-fiction-256642

    MIL OSI – Global Reports

  • MIL-OSI Global: How 7,000 steps a day could help reduce your risk of cancer

    Source: The Conversation – UK – By Mhairi Morris, Senior Lecturer in Biochemistry, Loughborough University

    PeopleImages.com – Yuri A/Shutterstock

    Physical inactivity costs the UK an estimated £7.4 billion each year — but more importantly, it costs lives. In today’s increasingly sedentary world, sitting too much is raising the risk of many serious diseases, including cancer. But could something as simple as walking offer real protection?

    It turns out the answer may be yes.

    A growing body of research shows that regular physical activity can lower the risk of cancer. Now, recent findings from the University of Oxford add more weight to that idea. According to a large study involving over 85,000 people in the UK, the more steps you take each day, the lower your chances of developing up to 13 different types of cancer.

    In the study, participants wore activity trackers that measured both the amount and intensity of their daily movement, and were followed up for an average of six years. Researchers found a clear pattern: more steps meant lower cancer risk, regardless of how fast those steps were taken.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    The benefits began to appear at around 5,000 steps a day – anything below that didn’t seem to offer much protection.

    At 7,000 steps, the risk of developing cancer dropped by 11%. At 9,000 steps, it dropped by 16%. Beyond 9,000 steps, the benefits levelled off. The difference in risk reduction became marginal, and varied slightly between men and women.
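    For readers who want the reported numbers in one place, the dose-response can be sketched as a simple lookup. Only the figures quoted above are encoded; the function itself, its name, and its handling of the unreported 5,000–7,000 range are illustrative assumptions, not part of the study:

    ```python
    def reported_risk_reduction(steps_per_day):
        """Cancer-risk reduction reported in the Oxford study for a given
        daily step count. Returns a fraction, or None where the article
        quotes no figure. Illustrative only -- not from the study itself."""
        if steps_per_day >= 9000:
            return 0.16   # ~16% lower risk; benefit plateaus beyond this
        if steps_per_day >= 7000:
            return 0.11   # ~11% lower risk
        if steps_per_day >= 5000:
            return None   # benefit begins here, but no percentage is reported
        return 0.0        # below ~5,000 steps, little protection was observed
    ```

    The step-function shape mirrors how the article reports the results: discrete thresholds with a plateau, rather than a smooth curve.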

    These findings support the popular recommendation of aiming for 10,000 steps a day – not just for general health, but potentially for cancer prevention too. The associations also held up when results were adjusted for demographic factors, BMI and lifestyle factors such as smoking, suggesting that the observed differences in cancer risk were indeed down to the average number of daily steps a participant took.

    Step intensity was also analysed – essentially, how fast participants were walking. Researchers found that faster walking was linked with lower cancer risk. However, when total physical activity was taken into account, the speed of walking no longer made a statistically significant difference. In other words: it’s the total amount of walking that counts, not how brisk it is.

    Likewise, replacing sitting time with either light or moderate activity lowered cancer risk – but swapping light activity for moderate activity didn’t offer additional benefits. So just moving more, at any pace, appears to be what matters most.

    The researchers looked at 13 specific cancers, including oesophageal, liver, lung, kidney, gastric, endometrial, myeloid leukaemia, myeloma, colon, head and neck, rectal, bladder and breast.

    Over the six year follow-up period, around 3% of participants developed one of these cancers. The most common were colon, rectal, and lung cancers in men, and breast, colon, endometrial, and lung cancers in women.

    Higher physical activity levels were most strongly linked to reduced risk of six cancers: gastric, bladder, liver, endometrial, lung and head and neck.

    Break it up

    Previous studies have relied on self-reported activity logs, which can be unreliable – people often forget or misjudge their activity levels. This study used wearable devices, providing a more accurate picture of how much and how intensely people were moving.

    The study also stands out because it didn’t focus solely on vigorous exercise. Many past studies have shown that intense workouts can reduce cancer risk – but not everyone is able (or willing) to hit the gym hard. This new research shows that even light activity like walking can make a difference, making cancer prevention more accessible to more people.

    Walking just two miles a day – roughly 4,000 steps, or about 40 minutes of light walking – could make a significant impact on your long-term health. You don’t have to do it all at once either. Break it up throughout the day by: taking the stairs instead of the lift; having a stroll at lunchtime; walking during phone calls; parking a bit further away from your destination.

    Getting more steps into your routine, especially during middle age, could be one of the simplest ways to lower your risk of developing certain cancers.

    Of course, the link between physical activity and cancer is complex. More long-term research is needed, especially focused on individual cancer types, to better understand why walking helps – and how we can make movement a regular part of cancer prevention strategies.

    But for now, the message is clear: sit less, move more – and you could walk your way toward better health.

    Mhairi Morris does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How 7,000 steps a day could help reduce your risk of cancer – https://theconversation.com/how-7-000-steps-a-day-could-help-reduce-your-risk-of-cancer-255564

    MIL OSI – Global Reports

  • MIL-OSI Global: From boomers to Gen Z: How to solve the public sector succession crisis

    Source: The Conversation – Canada – By W. Dominika Wranik, Professor, Faculty of Management, Dalhousie University

    Public servants are the backbone of Canadian government. Canadians expect them to act in the best interest of society, to uphold Canadian democratic institutions, to steward public monies and to deliver programs and services.

    But as retirements surge, how can governments attract young people to work for them? It’s difficult when governments suffer from poor reputations, low public trust and offer working conditions that may not appeal to young people.

    What do young Canadians want from their careers, and what will it take for public service to win them over?

    This issue, among others concerning Canadian public servants, is currently being studied at the Professional Motivations Research Lab at Dalhousie University. The lab is led by the lead author of this piece, Dominika Wranik, whose work focuses on measuring and explaining the motivations of professionals in the public service.

    The lab’s insights shed light on the factors that influence how young people make decisions about whether to work for the public sector.

    Looming labour shortage

    In 1966, there were 7.7 working-age individuals for every senior in Canada. But in 2022, the ratio dropped to 3.4 and is projected to drop further over the next decade.

    A labour shortage will create increased competition for top talent between the public and private sector, an issue for governments as research has shown a growing disinterest among youth in pursuing civil service careers.

    Recruitment to the public service is further complicated by declining perceptions of competence and trust in Canadian public institutions. With studies demonstrating that applicants’ perceptions of an organization’s competence affect their attraction to working there, Canadian governments also run the risk of losing potential applicants who don’t view Canada’s public institutions as being competent or trustworthy.

    These challenges come as young Canadians enter the workforce with more career options than ever before, and different expectations from previous generations.

    Salary not the sole motivator

    Young Canadians are not solely interested in high incomes, but also in workplaces that provide a healthy work/life balance and align with their values.

    Data collected in 2024, for example, shows that 87 per cent of British Columbians between the ages of 18 and 34 prefer employers that are socially and environmentally responsible, with 61 per cent stating they would only work for such companies.

    This means Canadian governments are currently finding themselves in a perilous situation, where rising suspicion about their trustworthiness and competence, paired with growing disinterest in the public sector as a whole, means they’re not positioned well to navigate an impending labour shortage.

    Strengthening their capacity to attract and recruit the next generation of workers is therefore imperative, not only for upholding public institutions, but also for rebuilding trust in government.

    In the effort to resolve this issue and enhance recruitment to the public service, Canadian government officials must pore over existing research into the factors that determine why youth and those just entering the labour market — people between the ages of 13 and 27, known as Gen Z — pursue or refrain from pursuing public service jobs.

    Some research suggests the three variables that potentially predict whether a member of Gen Z is inclined to pursue a career in the public sector are:

    Perceptions

    In terms of perceptions of the public sector, a recent study found that when choosing between the public and private sectors, university students in Norway and Poland were most influenced by their views of the public sector.

    The more positive the outlook — for example, seeing public sector work as less bureaucratic and more efficient — the stronger the preference for working in the public sector, and vice versa.

    This finding was echoed by racialized minorities in the United States. A 2022 study found that Black, Asian and Latinx young adults between the ages of 18-36 were largely turned off by government work due to perceptions that they weren’t represented or well-served by their “largely white, male and wealthy” local, state or federal government representatives.

    In Canada, a study led by the Public Policy Forum discovered that perceptions of the nature of government work also had a significant impact on a student’s decision to pursue a career in the public sector. Students who chose to enter the public service cited “opportunities to examine a wide range of complex challenges and help create policy solutions that can have a positive impact on many communities.”

    Motivations

    In terms of having public service motivation (PSM) — which refers to an individual’s inclination to serve the public interest — studies have found that members of Gen Z are more likely to be drawn to the public sector if they are high in PSM.

    Specifically, a study of Gen Z students in criminal justice programs found that those who identified with PSM tenets — such as “meaningful public service is very important to me” and “making a difference in society means more to me than personal achievements” — had a significantly higher likelihood of choosing the public sector over the private sector.

    Similarly, an interdisciplinary sample of undergraduate students with higher levels of PSM — and who therefore identified with the PSM dimensions of self-sacrifice, compassion and commitment to public values — were more likely to have a preference for the public sector.

    Job attributes

    Preferred job attributes also influence the employment choices of members of Gen Z. The aforementioned research on Norwegian and Polish youth and a 2017 study by Canada’s Public Policy Forum find that when Gen Z students are interested in public sector work, it’s largely because of the promise of financial and job security.

    Given the growing disinterest among the Canadian population in pursuing employment in the public sector, new insights about what attracts Gen Z workers to the public sector should be required reading by governments across Canada.




    Read more:
    Public service reflections: Why the role of civil servants must evolve to ensure public trust


    Understanding Gen Z’s misgivings about public sector work will help better position governments to compete with the private sector to recruit the next generation of employees.

    With perceptions of government competence and trustworthiness continuing to fall, it is imperative that Canadian public policymakers take significant steps to engage with Gen Z students and workers to create employment conditions that are attractive and aligned with their values.

    The next generation of government leaders in Canada is currently in high school, college or university classrooms across the country, meaning that research centred in educational institutions is uniquely positioned to uncover valuable insights into how public sector employment is perceived.

    Therefore, government-led engagement that is conducted through town halls, workshops and focus groups can help strengthen trust in government while familiarizing Gen Z students with government careers.

    W. Dominika Wranik receives funding from the Social Sciences and Humanities Research Council. In the past, she has held funding from the Canadian Institutes of Health Research, Mitacs, Research Nova Scotia, and the EU Horizon 2020, as well as short-term funding from several provincial and federal government departments. Dr. Wranik serves as an expert consultant for Canada’s Drug Agency (CDA-AMC).

    Alec Brooks and Payton Nicol do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. From boomers to Gen Z: How to solve the public sector succession crisis – https://theconversation.com/from-boomers-to-gen-z-how-to-solve-the-public-sector-succession-crisis-255077

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump’s tariff threat to foreign films overlooks the value of multilingual cinema

    Source: The Conversation – Canada – By Gaelle Planchenault, Associate Professor of French Media, Culture, and Applied Linguistics, Simon Fraser University

    With the 78th Cannes International Film Festival underway this week, there is little doubt that one topic will be central to conversations among filmmakers, sales agents and journalists: United States President Donald Trump’s threat to impose a 100 per cent tax on foreign-made films.

    Amid an ongoing tariff war, Trump’s proposal — which may ultimately remain an empty threat — goes beyond economic protectionism. It is cultural protectionism. It also reflects language ideologies that have long constrained the American film industry and American engagement with multilingual cinema.

    Experts have offered various theories about the motivations behind this threat, as well as why it may ultimately prove unwise. In the rush to brace for impact, we often forget the values behind these extreme positions aren’t new. More importantly, we must also remember why it’s vital to protect these cultural expressions.

    As a linguist, I see a clear connection between this proposal and one of the administration’s actions earlier this year, when Trump signed an executive order designating English as the country’s sole official language. This move reflected a deeply rooted monolingual ideology that has long influenced both U.S. language policy and education systems.




    Read more:
    Trump’s English language order upends America’s long multilingual history


    Monolingual ideology

    Such language ideology reflects a belief in the superiority of monolingualism, a view that American linguist Rosina Lippi-Green links to the “myth of Standard American English.”

    This myth is grounded in the subordination of other languages and dialects to one dialect believed to be of higher quality and status. According to Lippi-Green, the enforcement of this ideology follows a systematic process: language is mystified, authority is claimed and a series of negative consequences ensue. Misinformation is generated, targeted languages are trivialized, non-conformers are vilified or marginalized and threats are made.

    Such claims to authority, and such threats, are recognizable in this latest move to restrict access to foreign films. The issue is not just about the economic dimension of foreign-made films. It is also about the perceived threat posed by the presence and influence of other languages. At its core, this reflects a fear or rejection of linguistic diversity.

    In the film industry, this monolingual ideology is closely tied to glottophobic attitudes, also referred to by some scholars as linguicism. These terms define the misrepresentation and negative stereotyping of speakers of languages other than English.

    Hollywood, in particular, has a long history of portraying foreign or heritage languages in stereotypical and often derogatory ways. Consider, for instance, the German-speaking characters in Second World War films, or more recent depictions of Arabic, Mexican Spanish or Russian speakers.

    These portrayals illustrate a tendency to depict other languages as menacing — a point that was also made in the American president’s claim that foreign films pose a “threat” because they constitute “messaging and propaganda.”

    Linguistic stereotyping

    It’s not just characters who speak other languages who have been misrepresented in American films. Those who speak English as a second language – that is, with an accent or with a syntax marked by their first language – were often played by white actors and subjected to similar derogatory stereotypes.

    Linguists have identified patterns in these linguistic representations, referring to them as Injun English, Mock Spanish or yellow voices, among others.

    Lippi-Green has famously argued that such linguistic depictions are ways to reinforce standard language ideologies through linguistic stereotyping in media, including popular Disney cartoons. They effectively teach American children how to discriminate.

    In my work, I examined French-accented English to demonstrate that these representations reflect broader cultural anxieties. Ultimately, this rhetoric reveals more about the U.S. relationship with linguistic diversity than it does about the communities being portrayed.

    Trump has made reference to “any and all movies coming into our country that are produced in foreign lands.” But it remains unclear how such measures would impact streaming platforms and the diverse range of films they currently offer.

    Hollywood has come a long way since the heyday of linguicism, gradually embracing a more inclusive and multilingual cinematic landscape. Today, films that present a more diverse linguistic landscape are increasingly common. And audiences are accustomed to having access to a wide selection of international content.

    The global success of the French series Call My Agent is just one example. Among others are popular French spy thrillers and romances, Swedish thrillers, Japanese anime and Korean dystopian series.

    The pleasure of watching foreign films

    For years, foreign language films have been recognized as an invaluable resource for language learning – a fact reflected in language learning apps that increasingly encourage users to watch TV programs or movies to support their learning. Movies and TV provide access to a variety of dialects as well as authentic forms of language.

    As a professor of French media and linguistics, I often use films to teach students about French language and culture. But beyond their educational benefits, foreign-language films offer unique esthetic and emotional pleasures.

    A press image for the show Call My Agent.
    Netflix

    Watching a film is to engage with sound and image. The language itself enhances the immersive experience, contributing to the authenticity of the storytelling. For example, one of my students told me he enjoys turning on closed captions in French. These are also known as SDH: Subtitles for the Deaf and Hard-of-Hearing. He does this not just for the dialogue but because they capture the full cinematic experience, including the naming of sounds.

    Restricting access to these cultural products would trap viewers in an ideological echo chamber, where only one language is heard and validated.

    Fictional representations play a powerful role in shaping and reinforcing real-world attitudes. Monolingual representations potentially foster linguistic discrimination and intolerance toward any word uttered with an accent or in another language. In short, such restrictions could pave the way for a partial and stunted society.

    Gaelle Planchenault does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Trump’s tariff threat to foreign films overlooks the value of multilingual cinema – https://theconversation.com/trumps-tariff-threat-to-foreign-films-overlooks-the-value-of-multilingual-cinema-256323

    MIL OSI – Global Reports

  • MIL-OSI Global: Pacific voyagers’ remarkable environmental knowledge allowed for long-distance navigation without Western technology

    Source: The Conversation – USA – By Richard (Rick) Feinberg, Professor Emeritus of Anthropology, Kent State University

    An outrigger canoe would typically have several paddlers and one navigator. AP Photo/David Goldman

    Wet and shivering, I rose from the outrigger of a Polynesian voyaging canoe. We’d been at sea all afternoon and most of the night. I’d hoped to get a little rest, but rain, wind and an absence of flat space made sleep impossible. My companions didn’t even try.

    It was May 1972, and I was three months into doctoral research on one of the world’s most remote islands. Anuta is the easternmost populated outpost in the Solomon Islands. It is a half-mile in diameter, 75 miles (120 kilometers) from its nearest inhabited neighbor, and remains one of the few communities where inter-island travel in outrigger canoes is regularly practiced.

    A documentary team made a recent visit to Anuta.

    My hosts organized a bird-hunting expedition to Patutaka, an uninhabited monolith 30 miles away, and invited me to join the team.

    We spent 20 hours en route to our destination, followed by two days there, and sailed back with a 20-knot tail wind. That adventure led to decades of anthropological research on how Pacific Islanders traverse the open sea aboard small craft, without “modern” instruments, and safely arrive at their intended destinations.

    Wayfinding techniques vary, depending upon geographic and environmental conditions. Many, however, are widespread. They include mental mapping of the islands in the sailors’ navigational universe and the location of potential destinations in relation to the movement of stars, ocean currents, winds and waves.

    Western interest in Pacific voyaging

    Disney’s two “Moana” movies have shined a recent spotlight on Polynesian voyaging. European admiration for Pacific mariners, however, dates back centuries.

    In 1768, the French explorer Louis Antoine de Bougainville named Sāmoa the “Navigators’ Islands.” The famed British sea captain James Cook reported that Indigenous canoes were as fast and agile as his ships. He welcomed Tupaia, a navigational expert from Ra‘iātea, onto his ship and documented Tupaia’s immense geographic knowledge.

    European explorers were impressed by the navigational skills of the people they encountered in the Pacific islands.
    Science & Society Picture Library via Getty Images

    In 1938, Māori scholar Te Rangi Hīroa (aka Sir Peter Buck) authored “Vikings of the Sunrise,” outlining Pacific exploration as portrayed in Polynesian legend.

    In 1947, Thor Heyerdahl, a Norwegian explorer and amateur archaeologist, crossed from Peru to the Tuamotu Islands aboard a balsa wood raft that he named Kon-Tiki, sparking further interest and inspiring a sequence of experimental voyages.

    Ten years later, Andrew Sharp, a New Zealand-based historian and prominent naysayer, argued that accurate navigation over thousands of miles without instruments is impossible. Others responded with ethnographic studies showing that such voyages were both historic fact and current practice. In 1970, Thomas Gladwin published his findings on the Micronesian island of Polowat in “East Is a Big Bird.” Two years later, David Lewis’ “We, the Navigators” documented wayfinding techniques across much of Oceania.

    Many anthropologists, along with Indigenous mariners, have built on Gladwin’s and Lewis’ work.

    A final strand has been experimental voyaging. Most celebrated is the work of the Polynesian Voyaging Society. They constructed a double-hull voyaging canoe named Hōkūle‘a, built from modern materials but following a traditional design. In 1976, led by Micronesian navigator Mau Piailug, they sailed Hōkūle‘a over 2,500 miles, from Hawai‘i to Tahiti, without instruments. In 2017, Hōkūle‘a completed a circumnavigation of the planet.

    In traversing Earth’s largest ocean, one can travel thousands of miles and see nothing but sky and water in any direction. Absent a magnetic compass, much less GPS, how is it possible to navigate accurately to the intended destination?

    Looking to the stars

    Most Pacific voyagers rely on celestial navigation. Stars rise in the east, set in the west, and, near the equator, follow a set line of latitude. If a known star either rises or sets directly over the target island, the helmsman can align the vessel with that star.

    However, there are complications.

    Which stars are visible, as well as their rising and setting points, changes throughout the year. Therefore, navigation requires detailed astronomical understanding.

    Also, stars are constantly in motion. One that is positioned directly over the target island will soon either rise too high to be useful or sink below the horizon. Thus, a navigator must seek other stars that follow a similar trajectory and track them as long as they are visible and low on the horizon. Such a sequence of guide stars is often called a “star path.”

    Of course, stars may not align precisely with the desired target. In that case, instead of aiming directly toward the guide star, the navigator keeps it at an appropriate angle.

    A navigator must modify the vessel’s alignment with the stars to compensate for currents and wind that may push the canoe sideways. This movement is called leeway. Therefore, celestial navigation requires knowledge of the currents’ presence, speed, strength and direction, as well as being able to judge winds’ strength, direction and effect on the canoe.
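    The compensation described above can be pictured as simple vector arithmetic. Pacific navigators do it by feel rather than by formula, but a rough sketch conveys the idea (the function name and the simplifications – steady current, flat water – are mine, purely for illustration):

    ```python
    import math

    def course_to_steer(desired_course_deg, boat_speed, current_set_deg, current_drift):
        """Course to steer so that boat velocity plus current carries the vessel
        along the desired track. Angles in degrees (0 = north, clockwise);
        speeds in knots. Illustrative only, not any navigator's actual method."""
        # Angle of the current relative to the desired track
        rel = math.radians(current_set_deg - desired_course_deg)
        # Component of the current pushing the vessel across the track
        cross = current_drift * math.sin(rel)
        # Steer "upstream" by the angle whose sine cancels that cross-track push
        correction = math.degrees(math.asin(cross / boat_speed))
        return (desired_course_deg - correction) % 360

    # Sailing north at 5 knots against a 1-knot easterly-setting current,
    # the helmsman must aim roughly 11.5 degrees west of north.
    heading = course_to_steer(0.0, 5.0, 90.0, 1.0)
    ```

    The same logic applies to a guide star held at an angle off the bow: the angle is chosen so that the vessel's actual track, not its heading, points at the target.
    
    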

    During daylight, when stars are invisible, the Sun may serve a similar purpose. In early morning and late afternoon, when the Sun is low in the sky, sailors use it to calculate their heading. Clouds, however, sometimes obscure both Sun and stars, in which case voyagers rely on other cues.

    Navigating requires deep understanding of waves, in the form of both swells and seas.
    AP Photo/Esteban Felix

    Waves, wind and other indicators

    A critical indicator is swells. These are waves produced by winds that blow steadily across thousands of miles of open sea. They maintain their direction regardless of temporary or local winds, which produce differently shaped waves called “seas.”

    The helmsman, feeling swells beneath the vessel, gleans the proper heading, even in the dark. In some locations, as many as three or four distinct swell patterns may exist; voyagers distinguish them by size, shape, strength and direction in relation to prevailing winds.

    Once sailors near their target island, but before it is visible, they must determine its precise location. A common indicator is reflected waves: swells that hit the island and bounce back to sea. The navigator feels reflected waves and sails toward them. Pacific navigators who have spent their lives at sea appear quite confident in their reliance on reflected waves. I, by contrast, find them difficult to differentiate from waves produced directly by the wind.

    Birds headed for home at the end of the day provide a clue about where land lies.
    Ecaterina Leonte/Photodisc via Getty Images

    Certain birds that nest on land and fish at sea are also helpful. In early morning, one assumes they’re flying from the island; in late afternoon, they’re likely returning to their nesting spots.

    Navigators sometimes recognize a greenish tint to the sky above a not-yet-visible island. Clouds may gather over a volcanic peak.

    And sailors in the Solomon Islands’ Vaeakau-Taumako region report underwater streaks of light known as te lapa, which they say point toward distant islands. One well-known researcher has expressed confidence in te lapa’s existence and utility. Some scholars have suggested that it could be a bioluminescent or electromagnetic phenomenon. On the other hand, despite a year of concerted effort, I was unable to confirm its presence.

    Estimating one’s position at sea is another challenge. Stars move along a given parallel and indicate one’s latitude. To gauge longitude, by contrast, requires dead reckoning. Navigators calculate their position by keeping track of their starting point, direction, speed and time at sea.
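    Dead reckoning as described here is, at its core, bookkeeping of course, speed and time. A minimal sketch of the arithmetic (a flat-Earth approximation with hypothetical names, adequate only for short legs):

    ```python
    import math

    def dead_reckon(lat_deg, lon_deg, heading_deg, speed_knots, hours):
        """Advance a position by course and distance run.
        Uses the approximation that one minute of latitude is one nautical mile;
        real navigators correct continuously for current and leeway."""
        distance_nm = speed_knots * hours
        heading = math.radians(heading_deg)          # 0 = north, clockwise
        dlat = (distance_nm * math.cos(heading)) / 60.0
        dlon = (distance_nm * math.sin(heading)) / (60.0 * math.cos(math.radians(lat_deg)))
        return lat_deg + dlat, lon_deg + dlon

    # Sailing due north at 5 knots for 12 hours covers 60 nm,
    # i.e. one degree of latitude.
    lat, lon = dead_reckon(-10.0, -150.0, 0.0, 5.0, 12.0)
    ```
    
    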

    Some Micronesian navigators estimate their progress through a system known as etak. They visualize the angle between their canoe, pictured as stationary, and a reference island that is off to one side and represented as moving. Western researchers have speculated on how etak works, but there is no consensus yet.

    For millennia, Pacific voyagers have relied on techniques such as these to reach thousands of islands, strewn throughout our planet’s largest ocean. They did so without Western instruments. Instead, they held sophisticated knowledge and shared understandings, passed by word of mouth, through countless generations.

    Richard (Rick) Feinberg has, in the past, received research funding from the National Science Foundation, the National Institute for Mental Health, and Kent State University. He is a member of the American Anthropological Association, the Association of Senior Anthropologists, and the Association for Social Anthropology in Oceania. He has maintained connections with people of the islands on which he has conducted research.

    ref. Pacific voyagers’ remarkable environmental knowledge allowed for long-distance navigation without Western technology – https://theconversation.com/pacific-voyagers-remarkable-environmental-knowledge-allowed-for-long-distance-navigation-without-western-technology-247547

    MIL OSI – Global Reports

  • MIL-OSI Global: M&S cyberattacks used a little-known but dangerous technique

    Source: The Conversation – UK – By Paul Rincon, Commissioning Editor, Science, Technology and Business

    Richard OD / Shutterstock

    The cyberattack that targeted Marks & Spencer (M&S) is the latest in a growing wave of cases involving something called sim-swap fraud. While the full technical details remain under investigation, a report in the Times suggests that the attackers used this method to access M&S internal systems, possibly by taking control of an employee’s mobile number and convincing IT staff to reset critical login credentials.

    Sim-swap fraud is not a new phenomenon, but it is becoming increasingly dangerous and more prevalent. According to CIFAS, the UK’s national fraud prevention service, sim-swap incidents surged from under 300 in 2022 to almost 3,000 in 2023. What had been mainly a risk to cryptocurrency investors or online influencers now threatens a much broader population.

    This form of cyberattack shows how major companies and ordinary people can be compromised through a tactic that exploits human factors, such as trust and how we have built our digital identities around mobile phones.

    Sim-swap fraud begins when a scammer convinces a mobile operator to transfer a victim’s number to a new sim card, or even an esim (one that’s embedded in the device), under the scammer’s control.

    This can be done over the phone, through an online chat, or even with the help of a
    bribed insider. Once the number is transferred, all calls and texts intended for the victim are redirected to the scammer. This includes those crucial verification codes used for logging into email, banking, messaging apps such as WhatsApp, and government services such as HMRC.

    This alone would be dangerous. But what makes sim-swap fraud so effective is that the scammer often already has access to a patchwork of personal data about their target. That information may have been collected from data breaches, phishing attacks, low-reputation websites, or even the victim’s social media.

    People often underestimate the extent to which they reveal themselves online: a birthday posted on Instagram, a phone number included in a job posting, or a home address used in an online giveaway. Scammers combine this data to build a convincing profile, enough to fool a mobile operator’s customer service staff into believing they’re talking to the real account holder.

    How the sim-swap fraud works

    Once the scammer gains control of a number, the consequences are extensive. Attackers can access sensitive information, including personal documents, and can request and receive password reset links for the user’s other accounts. They can log in to WhatsApp or Telegram accounts, read private messages, impersonate the user, and even contact friends or family members to conduct further scams.

    Victims might see false messages posted in their names or fraudulent transactions made from their accounts. This can lead to financial loss, reputational damage, and harm to victims’ emotional and mental health.

    In the case of M&S, attackers apparently used this access to manipulate internal
    processes and gain access to sensitive systems. This highlights a broader risk:
    many companies still rely on phone numbers as a secondary verification method for
    staff, making their systems vulnerable to the same cyberattack used against
    individuals.

    How sim-swap fraud works.
    Hossein Abroshan

    Reducing the risk

    While real-time detection of mobile number hijacking remains difficult, taking specific steps can significantly reduce the likelihood of being targeted and victimised. People should avoid sharing personal data unnecessarily, especially across multiple platforms and, very importantly, on unknown or untrusted websites.

    Many attackers don’t obtain all the necessary information from a single source. Instead, they collect it incrementally, using public profiles, marketing databases and past leaks to form a comprehensive picture.

    Being mindful of where you share your phone number, birthday or other identifiers can make it harder for others to impersonate you. It is also crucial to learn how phishing works and how to recognise it, so you will not submit your sensitive information to phishing or fake websites.

    Avoiding SMS-based authentication, where possible, is another key step. Many services now support authenticator apps, such as Google Authenticator, Microsoft Authenticator, Duo or Authy, which are not tied to your mobile number. For mobile accounts themselves, setting up a unique PIN or password that must be provided to authorise any changes can add an extra layer of protection, making it harder for someone to initiate a sim swap without that code. However, users cannot shoulder this responsibility alone.
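    The reason such apps are immune to sim swaps is that they generate codes locally, from a shared secret and the clock, following the time-based one-time password (TOTP) standard, RFC 6238 – the phone network is never involved. A minimal sketch of that computation (illustrative; the named apps add their own features on top):

    ```python
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, for_time=None, digits=6, period=30):
        """Generate an RFC 6238 time-based one-time password.
        The code depends only on the shared secret and the current time,
        so hijacking the phone number gains an attacker nothing."""
        key = base64.b32decode(secret_b32, casefold=True)
        # Number of elapsed time steps since the Unix epoch
        counter = int((for_time if for_time is not None else time.time()) // period)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)
    ```

    Because the secret lives on the device rather than with the mobile operator, a sim swap redirects SMS codes but cannot reproduce these.
    
    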

    Mobile network operators must strengthen identity verification practices, moving beyond basic questions about names and addresses that can be easily gathered or guessed. Banks and other financial institutions should reconsider using SMS – or, at the very least, SMS alone – as the default method for sensitive authentication. And companies, particularly those handling personal data or financial assets, need to train their IT and customer service teams to recognise the signs of identity-based attacks.

    Sim-swap fraud is effective not because it’s highly technical, but because it exploits our trust in phone numbers for identity verification. The M&S case and similar examples show how fragile that trust can be – and why securing our mobile identities is no longer optional.

    ref. M&S cyberattacks used a little-known but dangerous technique – https://theconversation.com/mands-cyberattacks-used-a-little-known-but-dangerous-technique-256601

    MIL OSI – Global Reports

  • MIL-OSI Global: The US and China have reached a temporary truce in the trade wars, but more turbulence lies ahead

    Source: The Conversation – Global Perspectives – By Peter Draper, Professor, and Executive Director: Institute for International Trade, and Jean Monnet Chair of Trade and Environment, University of Adelaide

    Defying expectations, the United States and China have announced an important agreement to de-escalate bilateral trade tensions after talks in Geneva, Switzerland.

    The good, the bad and the ugly

    The good news is their recent tariff increases will be slashed. The US has cut tariffs on Chinese imports from 145% to 30%, while China has reduced levies on US imports from 125% to 10%. This greatly eases major bilateral trade tensions, and explains why financial markets rallied.

    The bad news is twofold. First, the remaining tariffs are still high by modern standards. The US average trade-weighted tariff rate was 2.2% on January 1 2025, while it is now estimated to be up to 17.8%. This makes it the highest tariff wall since the 1930s.
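    A trade-weighted average tariff, as cited above, is simply each tariff rate weighted by the import value it applies to. With hypothetical, purely illustrative figures:

    ```python
    def trade_weighted_tariff(lines):
        """Average tariff rate weighted by each line's share of import value.
        `lines` is a list of (import_value, tariff_rate) pairs."""
        total_value = sum(value for value, _ in lines)
        return sum(value * rate for value, rate in lines) / total_value

    # Hypothetical import mix (values in $bn): China-sourced goods at 30%,
    # everything else at 10%.
    rate = trade_weighted_tariff([(400, 0.30), (600, 0.10)])  # 0.18, i.e. 18%
    ```

    This is why the headline average moves so sharply when tariffs on a single large partner change.
    
    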

    Overall, it is very likely a new baseline has been set. Bilateral tariff-free trade belongs to a bygone era.

    Second, these tariff reductions will be in place for 90 days, while negotiations continue. Talks will likely include a long list of difficult-to-resolve issues. China’s currency management policy and industrial subsidies system dominated by state-owned enterprises will be on the table. So will the many non-tariff barriers Beijing can turn on and off like a tap.

    China is offering to purchase unspecified quantities of US goods – in a repeat of a US-China “Phase 1 deal” from Trump’s first presidency that was not implemented. On his first day in office in January, amid a blizzard of executive orders, Trump ordered a review of that deal’s implementation. The review found China didn’t follow through on the agriculture, finance and intellectual property protection commitments it had made.

    Unless the US has now decided to capitulate to Beijing’s retaliatory actions, it is difficult to see the US being duped again.

    Failure to agree on these points would reveal the ugly truth that both countries continue to impose bilateral export controls on goods deemed sensitive, such as semiconductors (from the US to China) and processed critical minerals (from China to the US).

    Moreover, in its so-called “reciprocal” negotiations with other countries, the US is pressing trading partners to cut certain sensitive China-sourced goods from their exports destined for US markets. China is deeply unhappy about these US demands and has threatened to retaliate against trading partners that adopt them.

    A temporary truce

    Overall, the announcement is best viewed as a truce that does not shift the underlying structural reality that the US and China are locked into a long-term cycle of escalating strategic competition.

    That cycle will have its ups (the latest announcement) and downs (the tariff wars that preceded it). For now, both sides have agreed to announce victory and focus on other matters.

    For the US, this means ensuring there will be consumer goods on the shelves in time for Halloween and Christmas, albeit at inflated prices. For China, it means restoring some export market access to take pressure off its increasingly ailing economy.

    As neither side can vanquish the other, the likely long-term result is a frozen conflict. This will be punctuated by attempts to achieve “escalation dominance”, as that will determine who emerges with better terms. Observers’ opinions on where the balance currently lies are divided.

    Along the way, and to use a quote widely attributed to Winston Churchill, to “jaw-jaw is better than to war-war”. Fasten your seat belts, there is more turbulence to come.

    Where does this leave the rest of us?

    Significantly, the US has not (so far) changed its basic goals for all its bilateral trade deals.

    Its overarching aim is to cut the goods trade deficit by reducing goods imports and eliminating non-tariff barriers it says are “unfairly” prohibiting US exports. The US also wants to remove barriers to digital trade and investments by tech giants and “derisk” certain imports that it deems sensitive for national security reasons.

    The agreement between the US and UK last week clearly reflects these goals in operation. While the UK received some concessions, the remaining tariffs, at 10% overall, are higher than before April 2 and subject to US-imposed import quotas. Furthermore, the UK must open its market for certain goods while removing China-originating content from steel and pharmaceutical products destined for the US.

    For Washington’s Pacific defence treaty allies, including Australia, nothing has changed. Potentially difficult negotiations with the Trump administration lie ahead, particularly if the US decides to use our security dependencies as leverage to wring concessions in trade. Japan has already disavowed linking security and trade, and its progress should be closely watched.

    The US has previously paused high tariffs on manufacturing nations in South-East Asia, particularly those used by other nations as export platforms to avoid China tariffs. Vietnam, Cambodia and others will face sustained uncertainty and increasingly difficult balancing acts. The economic stakes are higher for them.

    They, like the Japanese, are long-practised in the subtle arts of balancing the two giants. Still, juggling ties with both Washington and Beijing will become the act of an increasingly high-wire trapeze artist.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. The US and China have reached a temporary truce in the trade wars, but more turbulence lies ahead – https://theconversation.com/the-us-and-china-have-reached-a-temporary-truce-in-the-trade-wars-but-more-turbulence-lies-ahead-256448

    MIL OSI – Global Reports

  • MIL-OSI Global: Lady Gaga bomb plot: Thwarted plan lifts veil on the gamification of hate and gendered nature of online radicalization

    Source: The Conversation – Global Perspectives – By David Nemer, Associate Professor in the Department of Media Studies, University of Virginia

    Lady Gaga performs at Copacabana Beach on May 3, 2025, in Rio de Janeiro, Brazil. Kevin Mazur/WireImage for Live Nation

    The more than 2 million people who attended Lady Gaga’s free concert on Copacabana Beach on May 3, 2025, had no idea of a plot that, if successful, would have turned the event into a tragedy fueled by hate. Just hours before a sea of admirers waved fans in sync with the singer during the event, the Rio de Janeiro Civil Police thwarted a planned attack involving Molotov cocktails and improvised bombs – and targeting the American singer’s LGBTQ following.

    Two people have since been arrested over the plot, which was organized by users of digital platforms such as Discord. The intent, authorities say, was radicalizing and recruiting teenagers to carry out the planned attack.

    Those responsible hoped to entice these young people into actions that would gain online notoriety.

    More than 2 million people are said to have attended the Lady Gaga concert in Rio.
    Daniel Ramalho/AFP via Getty Images

    Although authorities were able to prevent the attack, the incident stands as a stark warning about the growth of hate networks among youth − and how platforms fuel the radicalization of teenagers, especially boys and young men.

    As experts in the anthropology of technology and information science, we see something deeply generational about this phenomenon. The recent Netflix series “Adolescence” broke viewership records by portraying an environment in which young people live in hyperconnected online spheres, free of state oversight and parental supervision. In these spheres, bullying and toxic masculinity permeate, and violence – often targeted at women and sexual minorities – is normalized.

    The show was set in the U.K., but it holds up a mirror to the world. Data from polling company Gallup reveals a growing ideological divide between young men and women in Gen Z across the globe. Too often, that divide, in which young men and boys are turning against progressive values, is being expressed through actions associated with the “manosphere,” such as misogyny and incel behavior.

    Platforms for hate

    In the United States, women aged 18 to 30 are now 30 percentage points more liberal than their male counterparts, according to Gallup’s surveys. In Germany, where a right-wing coalition recently won national elections and the extreme-right AfD party is rising in popularity at an alarming rate, the gap is also 30 points. In Poland, although the far-right left power at the end of 2023 after eight years, nearly half of men ages 18 to 21 support far-right parties − compared with just one-sixth of women in the same age range.

    This polarization is emerging just as online platforms such as Discord, TikTok and Reddit have become formative spaces of identity.

    Instead of promoting diversity, however, many of these platforms have been used as machines for producing and spreading hate. The 2021 study Mapping Discord’s Darkside, published in the journal New Media & Society, shows that despite marketing efforts to distance itself from the far right, Discord hosts thousands of servers associated with neo-Nazi, misogynistic, racist, transphobic and conspiratorial discourse. Researchers identified 2,741 such servers − with more than 850,000 active members.

    These networks end up functioning as recruitment hubs, where young people − especially boys − are lured in by edgy memes, promises of belonging and identity games based on excluding others. Discord’s structure, which prioritizes privacy and decentralization, has become fertile ground for the emergence of what scholar Adrienne Massanari calls “toxic technocultures.”

    Services such as Disboard − an informal search engine for Discord servers − are used to recruit teens into communities that glorify Nazism, encourage hatred toward women and people from the LGBTQ+ community, and even offer “services” for coordinated attacks on other servers. And this appears to be the case in the thwarted attack on the Lady Gaga concert.

    Presenting a challenge

    A significant factor in the success of these radicalizing environments is gamification − the use of gamelike elements such as challenges, rewards and leaderboards in nongame contexts. When applied to social networks and extremist forums, gamification turns engagement into competition and hate speech into a playful challenge.

    This practice makes the entrance into extremism more palatable for young, impressionable people by masking violence behind seemingly harmless mechanics. As noted in the European Commission’s 2021 report Gamification and Online Hate Speech, gamification has become a powerful tool for normalizing and spreading hate, particularly among young people seeking recognition and belonging.

    This process, known as “bottom-up gamification,” occurs when users create the rules, symbolic rewards and challenges. For example, by turning hate speech into “challenges” that involve humiliating women or people from the LGBTQ+ community online, the dehumanization of targets is presented in playful, viral ways.

    Turning hate into entertainment

    The investigation into the foiled attack on Lady Gaga’s Copacabana concert revealed exactly this mechanism: The attack was treated as a “collective challenge,” with youths recruited to build Molotov cocktails and explosive backpacks in order to gain notoriety on social media.

    The logic of gamification also creates a structure of “achievement” and “scoring” that fosters competition and reinforces radical ideology. As shown in a 2022 study by criminologists Suraj Lakhani and Susann Wiedlitzka, attacks such as the 2019 mosque attack in Christchurch, New Zealand, in which 51 people were killed, were planned and executed with strong inspiration from gaming. That inspiration included live broadcasts resembling “Let’s Play” sessions – in which players offer running commentary during walk-throughs of games, typically first-person shooters – and viewer comments that treated the number of deaths as a “score.”

    More than 50 people were killed in the terrorist attack on Christchurch mosques in New Zealand on March 15, 2019.
    Omer Kablan/Anadolu Agency via Getty Images

    This aestheticization of violence serves as a bonding element among young men in digital spaces, especially those who already feel marginalized or frustrated and who find in these games of hate a sense of belonging and affirmation. In this way, gamification transforms hate into entertainment, strengthening ties in toxic communities and making it harder to recognize the behavior as extremism.

    Turning a generation off hate

    Society is, we believe, facing a dual challenge: the need for moderation of platforms and for support for measures preventing men and boys from being drawn into toxic digital spaces.

    The gender divide within Gen Z is no small matter either. It reflects, in broad terms, a rift between a generation of young women who, empowered by #MeToo and other feminist movements, have embraced progressive causes, and a generation of men who, threatened by their perceived loss of power in this new environment, are being co-opted by far-right and misogynistic discourse in digital spaces.

    This gap has real consequences in personal relationships, in schools and for democracy at large. But it also reveals something that we believe must be stated clearly: Platform regulation is not just a technical issue. The future of a generation cannot be built on algorithms that reward hate and radicalization.

    This article is a translated and adapted version of a story that was originally published by The Conversation Brazil on May 8, 2025.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Lady Gaga bomb plot: Thwarted plan lifts veil on the gamification of hate and gendered nature of online radicalization – https://theconversation.com/lady-gaga-bomb-plot-thwarted-plan-lifts-veil-on-the-gamification-of-hate-and-gendered-nature-of-online-radicalization-256199

    MIL OSI – Global Reports

  • MIL-OSI Global: Territorial concessions will be central to any Ukraine peace deal, and to Russia’s long-term plan

    Source: The Conversation – Global Perspectives – By Stefan Wolff, Professor of International Security, University of Birmingham

    If the Ukrainian president, Volodymyr Zelensky, and his Russian counterpart, Vladimir Putin, meet in Istanbul on May 15, territory – and who controls it – will be high on their agenda.

    Putin offered to start direct talks between Russia and Ukraine at a press conference on May 11. Donald Trump pushed Zelensky to accept this offer in a social media post, saying that “Ukraine should agree to this, IMMEDIATELY.”

    The Ukrainian president, still buoyed by a meeting with the British, French, German and Polish leaders that called for an unconditional 30-day ceasefire, agreed shortly afterwards.

    Russia has said it wants to focus on the Istanbul communique of March 2022 and a subsequent draft agreement that was negotiated, but never adopted, by the two sides in April 2022.




    These 2022 negotiations focused on Ukraine becoming a permanently neutral state and on which nations would provide security guarantees for any deal. They also relegated discussions over Crimea to separate negotiations with a ten-to-15-year timeframe.

    Russia uses the phrase “the current situation on the ground” as thinly disguised code for territorial questions that have become more contentious over the past three years. This relates to Russian gains on the battlefield and the illegal annexation of four Ukrainian regions in September 2022 (in addition to Crimea, which Russia also illegally annexed in 2014).

    Russia’s position, as articulated recently by the country’s foreign minister, Sergey Lavrov, is that “the international recognition of Crimea, Sevastopol, the DPR, the LPR, the Kherson and Zaporozhye regions as part of Russia is … imperative”.

    This is clearly a non-starter for Ukraine, as repeatedly stated by Zelensky. There could, however, be some flexibility on accepting that some parts of sovereign Ukrainian territory are under temporary Russian control. This has been suggested by both Trump’s Ukraine envoy, Keith Kellogg, and Kyiv’s mayor, Vitali Klitschko.


    Institute for the Study of War.

    Black Sea’s strategic value

    The territories that Russia currently occupies, and claims, in Ukraine have varying strategic, economic and symbolic value for Moscow and Kyiv. The areas with the greatest strategic value include Crimea and the territories on the shores of the Azov Sea, which provide Russia with a land corridor to Crimea.

    The international recognition of Crimea as part of Russia, as apparently suggested under the terms of an agreement hashed out by Putin and Trump’s envoy Steve Witkoff, could expand the areas of the Black Sea that Russia can claim to legally control.

    This could then be used by the Kremlin as a launchpad for renewed attacks on Ukraine and to threaten Nato’s eastern maritime flank in Romania and Bulgaria. Any permanent recognition of Russia’s control of these territories is, therefore, unacceptable to Ukraine and its European partners.



    Donetsk and Luhansk are of lower strategic value, compared with Crimea and the Kherson and Zaporizhzhia regions. However, they do have economic value because of the substantial resources located there. These include some of the mineral and other resources that were the subject of a separate deal which the US and Ukraine concluded on April 30.

    They also include Europe’s largest nuclear power plant in Zaporizhzhia, and a large labour force among their estimated population of between 4.5 million and 5.5 million people, who will be critical to Ukraine’s post-war reconstruction.

    Beyond the strategic and economic value of the illegally occupied territories, the symbolism that both sides attach to their control is the most significant obstacle to any deal, given how irreconcilable Moscow’s and Kyiv’s positions are. For both sides, control of these territories, or loss thereof, is what defines victory or defeat in the war.

    Putin may be able to claim that some territorial gains in Ukraine since the start of the full-scale invasion in February 2022 are a victory for Russia. But even for him any compromise that would see Russia give up territory that it has conquered – often at exceptionally high cost – would be a risky gamble for the stability of his regime.

    Anything less than the complete restoration of the country’s territorial integrity in its 1991 borders would imply recognition of defeat in the war for Ukraine. This would critically threaten the stability of the Zelensky government, whose political programme rests on exactly the premise of a return to the 1991 borders.

    Long-term consequences

    As a result, the Ukrainian leadership has become hostage to its own information strategy, which has placed the “return of all territories” at the top of the criteria for victory. This is a goal widely shared among Ukrainians, according to a poll conducted by the Razumkov Center in March 2025. But it will be hard to achieve.



    Apart from the potential domestic fall-out from any territorial compromises that Ukraine may be forced to make, there is another reason why the territorial question has become so intractable.

    Beyond any strategic, economic and symbolic value that the occupied Ukrainian territories hold from the Kremlin’s perspective, control over territory has always been an instrument for Russia to pursue its broader geopolitical agenda of exercising influence over its neighbours – from Moldova, to Georgia, Armenia and Ukraine.

    It is also important to remember that Russia’s territorial claims in Ukraine have gradually expanded since 2014. Until September 2022, when it annexed the other four regions, Russia laid claim to Crimea only.

    There is no guarantee that any territorial concession from Kyiv now would put a permanent end to Moscow’s territorial expansionism. It is therefore worrying that Trump envoy Witkoff, in an interview with the Breitbart news website, reiterated the US view that the two sides need to find compromises on who controls which territories.

    Russia’s aggression against Ukraine was not a war over territory as such, but was part of Moscow’s agenda to restore the sphere of influence that it lost at the end of the cold war. This agenda is far from finished.

    The strategy of both Moscow and Washington to focus on territorial consequences may lead to a ceasefire. But it will not address the fundamental issue of how to deal with a vengeful and revisionist autocracy on Europe’s doorstep.

    Stefan Wolff is a past recipient of grant funding from the Natural Environment Research Council of the UK, the United States Institute of Peace, the Economic and Social Research Council of the UK, the British Academy, the NATO Science for Peace Programme, the EU Framework Programmes 6 and 7 and Horizon 2020, as well as the EU’s Jean Monnet Programme. He is a Trustee and Honorary Treasurer of the Political Studies Association of the UK and a Senior Research Fellow at the Foreign Policy Centre in London.

    Tetyana Malyarenko does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Territorial concessions will be central to any Ukraine peace deal, and to Russia’s long-term plan – https://theconversation.com/territorial-concessions-will-be-central-to-any-ukraine-peace-deal-and-to-russias-long-term-plan-256347

    MIL OSI – Global Reports

  • MIL-OSI Global: Pope Leo XIV’s link to Haiti is part of a broader American story of race, citizenship and migration

    Source: The Conversation – USA – By Chelsea Stieber, Associate Professor of French Studies, Tulane University

    Pope Leo XIV appears before thousands of journalists on May 12, 2025, in Vatican City. Vatican Media via Vatican Pool/Getty Images

    Early coverage of Pope Leo XIV has explored the first American pontiff’s Chicago upbringing, as well as the many years he spent in Peru, first as a missionary and then as a bishop.

    Genealogist Jari Honora broke the story of the pope’s ancestors’ connection to the Creole of color community in New Orleans. A family historian at the Historic New Orleans Collection’s Williams Research Center, Honora has given research presentations to my graduate students and consulted with me on my own work. In his research on Leo’s lineage, he was also able to find several official documents that list Haiti as the birthplace of his maternal grandfather, Joseph Norval Martinez.

    The pope’s Creole lineage in Louisiana is interesting enough. But many commentators have strained to make sense of the link to Haiti, if they mention it at all.

    As an expert in 19th-century Haiti, I study the period during which Leo’s ancestors likely traveled between Haiti and New Orleans before migrating to Chicago. Their story is part of a broader American story of race, citizenship and migration.

    A grandfather born in Haiti

    It’s worth noting that Leo’s genealogy is not entirely straightforward.

    At least one record indicates Joseph Norval as having been born in Louisiana. And a 1910 census seems to reinvent the family lineage: Martinez is now “Martina,” Joseph’s birthplace is “S. Domingo,” and he is supposedly Maltese.

    Nevertheless, far more documents – numerous census records as well as his marriage certificate – identify Martinez’s place of birth as Haiti. An 1866 passenger list for a ship bound for New Orleans from Haiti, despite some inconsistencies, does indeed appear to list members of the Martinez family, including his father and three siblings.

    Just because Leo’s grandfather was born in Haiti, it didn’t mean he was Haitian. Instead, he belonged to a class of people in New Orleans known as Creoles of color.

    A three-pronged racial order

    It’s important to understand the historical complexity of the Creole identity in New Orleans and in Louisiana, and its continued significance today.

    The descriptor “Creole of color” is somewhat anachronistic; it emerged at the end of the 19th century in Louisiana to categorize the descendants of a historically subordinate class known as free people of color, or “gens de couleur libres” in French.

    Portrait of a Free Woman of Color by François Jacques Fleischbein.
    Courtesy of the Historic New Orleans Collection

    It has its origins in the tripartite racial order of the French and Spanish colonial periods in the Americas, when authorities created a hierarchy of legal classes: enslaved people, free people of African descent, and white people.

    In theory, free people of color encompassed a range of people. It could describe formerly enslaved people; people who had never been enslaved; people born in Africa; or people with extended, mixed-race American families.

    In 19th-century Louisiana, the term generally referred to people of mixed racial ancestry who were born with free status, though at varying degrees of removal from slavery. They generally spoke French and were Catholic.

    Though they were subject to repressive laws and could never become citizens or gain the right to vote, free people of color could own, inherit and sell property, including enslaved people. Most worked as artisans and shopkeepers, and a handful became quite wealthy through trade and real estate.

    The Martinez family fits squarely within this community.

    Census records from 1850 list Jacques Martinez – Joseph Norval Martinez’s father and Leo’s maternal great-grandfather – as a tailor and modest property owner in New Orleans. They were never enslaved but do not appear to have been enslavers, either.

    Life gets worse for people of color

    So why was Joseph Norval Martinez born in Haiti?

    At some point, his parents probably felt they had to leave New Orleans.

    Despite their relative prosperity, free people of color in Louisiana and throughout the United States were being subjected to increasing legal restrictions, repression and violence in the years leading up to the Civil War.

    This situation worsened in the 1840s and 1850s, as white Southerners worked to further restrict citizenship and rights along hard racial lines. The 1857 Dred Scott Supreme Court decision held that people of African descent, including free people of color, had no right to citizenship.

    For those who remained in the South, the outbreak of the Civil War in 1861 would have made life even more difficult.

    In the first half of the 19th century, many free people of color in Louisiana emigrated to France. But the two main options in the 1860s were Haiti and Mexico.

    However, at the time of the Martinez family’s departure, Mexico was embroiled in conflict with France. Haiti, meanwhile, was crafting an ambitious plan to attract immigrants.

    After the 1804 Haitian Revolution – the uprising against French colonizers that led to the creation of Haiti – the nation became the first in the world to permanently ban slavery. For this reason, many people of color viewed Haiti as a beacon of freedom and equality.

    Indeed, Haiti long promoted itself as a free soil republic: Any person of African descent would enjoy freedom and, eventually, Haitian citizenship. Several Haitian presidents staged immigration campaigns to attract enslaved and formerly enslaved laborers from the United States.

    Fabre Geffrard served as president of Haiti from 1859 to 1867.
    Heritage Art/Heritage Images via Getty Images

    In response to worsening conditions for people of color in the U.S., Haitian President Fabre Geffrard launched a particularly ambitious campaign, setting up Haitian Emigration bureaus and staffing them with agents in New York, Boston, New Orleans and other major cities. Louisiana newspapers advertised Geffrard’s immigration plan, which included land concessions for families and individuals. Geffrard’s focus was on attracting agricultural laborers – not the kind of work the Martinez family would likely be enticed to take on. Still, skilled artisans were welcomed as immigrants.

    It was within this context that the Martinez family probably departed New Orleans for Haiti. At present there is scant information about their voyage, but the journey would have echoed many family histories of migration from Louisiana to Haiti in the 1860s.

    Based on my study of census and notarial archives, it appears the Martinez family left sometime after the birth of daughter Adele in New Orleans in December 1861 and before the birth of Joseph Norval in Haiti in 1864.

    The promise of Reconstruction crumbles

    The Martinez family didn’t stay in Haiti long.

    According to the passenger list, they returned to New Orleans in February 1866.

    As was the experience for many émigrés to Haiti, they may have found the conditions difficult. It’s also possible that the successes of wartime Reconstruction in Louisiana encouraged them to reestablish their lives in New Orleans.

    They returned to a state transformed by the abolition of slavery. Free people of color were at the forefront of the fight for civil rights and key architects behind a progressive, egalitarian state constitution that called for equal access to education for all citizens.

    The Martinez children likely benefited – albeit briefly – from that provision. The 1870 census records show them all enrolled in school: Michel (14), Girard (12), Adele (9) and young Joseph Norval (6).

    They would also witness the violent backlash to Reconstruction, which was especially intense in Louisiana. In 1866, a white mob laid siege to those attempting to amend the state’s constitution to enfranchise Black voters, in what became known as the Mechanics Institute Massacre. In the ensuing years, the state was gripped by ever more violence.

    A sketch of the Mechanics Institute Massacre in an issue of Harper’s Weekly.
    The Historic New Orleans Collection

    Joseph Norval Martinez married Louise Baquié in 1887, and they went on to have six children, all girls, in New Orleans. He worked as a cigar maker – a common enterprise for free men of color during the period – and later as a clerk.

    The family was subjected to increasing segregation with the Separate Car Act, an 1890 Louisiana statute that separated train cars by race. The Supreme Court went on to uphold the Louisiana statute in 1896, enshrining the “separate but equal” doctrine throughout the South.

    An American tale

    Martinez and Baquié remained in New Orleans until 1910, at which point they joined the millions of other Black Americans who migrated from the South to the North and the West in the early decades of the 20th century, in what became known as the Great Migration. A significant portion, including Martinez and Baquié, ended up in Chicago.

    Their youngest daughter, Mildred Agnes Martinez – Leo’s mother – was born there.

    Joseph Norval Martinez’s census records tell a complex story about the history of race in the U.S. Prior to 1900, he is listed as “m” for “mulatto.” In the 1900 census, he is listed as Black. And then in the 1910 census, he is listed as white.

    The Martinez family could not dictate the racial descriptors assigned to them in the census, but they had some claim over birthplace and lineage. Against the backdrop of segregation, disenfranchisement and violence, Martinez appears to have claimed a lineage – Maltese – that the 1910 census categorized as white.

    It is this – and so much more – that makes theirs a truly American story.

    One thing we do know: Martinez reverted to his original lineage after he and his family settled in Chicago. The 1920 census lists Martinez’s birthplace of record as Haiti.

    Chelsea Stieber does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Pope Leo XIV’s link to Haiti is part of a broader American story of race, citizenship and migration – https://theconversation.com/pope-leo-xivs-link-to-haiti-is-part-of-a-broader-american-story-of-race-citizenship-and-migration-256425

    MIL OSI – Global Reports

  • MIL-OSI Global: How does the EPA know a pesticide is safe to use in my yard?

    Source: The Conversation – USA – By Jeffrey Gore, Professor of Agricultural Science and Plant Protection, Mississippi State University

    A mosquito-control technician sprays a mixture including insecticides in a yard in Michigan. AP Photo/John Flesher

    Environmental Protection Agency head Lee Zeldin has said he wants the federal agency to accelerate scientific safety evaluations of various chemicals, including pesticides.

    The EPA reportedly has more than 500 pending reviews of proposed new pesticides and more than 12,000 overdue reevaluations of pesticides currently in use. The agency is under pressure from the chemical and agricultural industries to catch up, while health and environmental advocates demand it maintain high safety standards.

    The review process is careful for a reason – and perhaps the only real method of speeding it up is the one Zeldin has proposed: reassigning staff so there are more people to share the work.

    As a faculty member at a land-grant university who has studied the effectiveness of commercial and experimental pesticides in the southern U.S., I have seen how the federal pesticide regulatory process identifies risks to humans and the environment and mitigates them with specific use instructions. Here’s how the process works.

    First, what is a pesticide?

    The EPA, which regulates pesticides in the U.S., defines a pesticide as any substance or mixture of substances intended to prevent, destroy, repel or mitigate any pest, such as weeds, insects and other organisms that attack plants.

    Pesticides are often referred to as toxins when found in food, water bodies or other places where they are not intended. But just because something is detected doesn’t mean it’s harmful to humans or wildlife. Toxicity depends on how much of the substance a person or animal is exposed to, how they are exposed to it – such as breathing it, or getting it on their skin – and for how long.

    The Department of Agriculture began regulating pesticides in 1947 with the Federal Insecticide, Fungicide, and Rodenticide Act. Most of the department’s interest was in whether a particular pesticide was effective against the target pests.

    In 1970, the newly formed EPA took over responsibility for pesticides. It shifted its focus to the safety of consumers, farmworkers and the environment after the Federal Environmental Pesticide Control Act took effect in 1972.

    A wide range of pesticides are available to consumers for use in their homes and yards.
    Jeffrey Greenberg/Universal Images Group via Getty Images

    Risk-benefit analysis

    Federal law requires the EPA to evaluate both the risks and the benefits of each pesticide – and to revisit that analysis at least every 15 years for every pesticide used in the U.S.

    The EPA determines whether the risks to people, animals or the environment are too high for the benefits the pesticide provides and whether any of those risks can be reduced. Sometimes a chemical’s risk can be lessened by recommending mitigation strategies such as wearing protective clothing, reducing environmental spread by barring the use of pesticides near the edges of a property, or decreasing the amount of a pesticide that’s legal to use.

    In its analysis of any given pesticide, the EPA requires a massive amount of data from the manufacturer about what ingredients the pesticide contains and how they work. The agency also reviews scientific research on the pesticide and uses its own scientists and independent experts to evaluate any studies that were submitted by the manufacturer.

    The EPA uses all the available data on a pesticide to evaluate the dose that would be toxic to a range of organisms, as well as what residues the pesticide may leave on plants, in the soil and in water. The data is incorporated into computer models that estimate the potential amount of the chemical that may come in contact with humans, animals and the environment. Those models’ results are then combined with toxicity data to determine risk.

    The models used by EPA scientists are very conservative. They often use significant overestimates of exposure, which means that when the models determine the risk of a pesticide is below a particular level, they are evaluating the risk posed by far higher quantities of the chemical than will ever actually be used. The risk from the amount actually used, therefore, is even less likely to cause harm.
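    The comparison of modeled exposure against toxicity data can be pictured as a risk quotient: estimated exposure divided by a toxicity endpoint, flagged when it exceeds a level of concern. The sketch below is a simplified illustration of that general screening idea, not the EPA's actual models; the function names and numbers are hypothetical.

```python
def risk_quotient(estimated_exposure: float, toxicity_endpoint: float) -> float:
    """Ratio of modeled exposure to the dose shown to cause harm."""
    return estimated_exposure / toxicity_endpoint


def needs_further_review(rq: float, level_of_concern: float = 0.5) -> bool:
    """Conservative screen: at or above the level of concern, more analysis follows."""
    return rq >= level_of_concern


# Modeled exposure far below the toxic dose passes the screen
rq = risk_quotient(estimated_exposure=0.1, toxicity_endpoint=1.0)
```

    Because the exposure estimates feeding such a ratio are deliberately overstated, a pesticide that passes this screen is being judged against far more chemical than would ever actually be used.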

    The EPA also provides opportunities for public comment on a pesticide and uses that information in its evaluations as well.

    Pesticides are commonly used in commercial agriculture.
    Charlie Neibergall/AP

    Additional scrutiny

    The Endangered Species Act also requires the EPA to evaluate the effects of pesticides on threatened and endangered species.

    If a pesticide is found to potentially be dangerous to a protected species or its habitat, the EPA will discuss those findings with the U.S. Fish and Wildlife Service and the National Marine Fisheries Service, which enforce the Endangered Species Act, and determine what to do to ensure the species aren’t harmed.

    The law’s requirement to reevaluate each pesticide every 15 years is based on the fact that science evolves and information becomes more precise. New data can shed light on potential risks and benefits, and even lead to pesticides being banned or more closely restricted.

    Until recently, for instance, pesticide residues on plants, food and in the environment were measured in parts per million. Newer equipment can measure even smaller amounts, determining parts per billion, which is as precise as identifying a single second in 32 years. Some chemicals can even be measured in parts per trillion, equivalent to one drop of water in 20 Olympic-size swimming pools. That means exposures can be more accurately measured. While some chemicals can be toxic in very small concentrations, most pesticides can be detected at levels that do not pose a biological risk.
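    Both analogies hold up under quick arithmetic, assuming a drop of about 0.05 mL and an Olympic pool of 2,500 cubic meters:

```python
# One second relative to the seconds in ~32 years is about one part per billion
seconds_in_32_years = 32 * 365.25 * 24 * 3600   # ≈ 1.01e9 seconds
one_second_fraction = 1 / seconds_in_32_years

# One drop (≈ 0.05 mL) in 20 Olympic pools (2,500 m³ = 2.5e9 mL each)
drop_ml = 0.05
pool_ml = 2_500 * 1_000 * 1_000
one_drop_fraction = drop_ml / (20 * pool_ml)     # one part per trillion
```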

    Allowing a pesticide to be used

    If the EPA determines that a pesticide’s risks outweigh its benefits, then its staff will conduct additional analyses to determine how to mitigate the risks enough to justify using it. If that’s not possible, the EPA will reject the application and not allow the pesticide to be used in the U.S.

    If the agency determines that the benefits outweigh the risks, the EPA approves the pesticide for sale and use in the U.S. The law requires that the pesticide come with a label providing a strict set of guidelines for how, when and where to use it.

    The guidelines define amounts and timing for applying the pesticide safely, and specific restrictions or protection strategies to control the target pests while eliminating or minimizing harm to the environment, workers and the public.

    The EPA also makes information on pesticides available to the public, so anyone can find out how to use them safely. Using the pesticide without following those directions is a violation of federal law.

    Jeffrey Gore receives funding from the USDA-ARS and has received funding from various state and national commodity boards, and chemical and biotechnology companies in the past.

    Jeffrey Gore served on the EPA’s Farm, Ranch and Rural Communities Committee from 2019 to 2024.

    ref. How does the EPA know a pesticide is safe to use in my yard? – https://theconversation.com/how-does-the-epa-know-a-pesticide-is-safe-to-use-in-my-yard-256027

    MIL OSI – Global Reports

  • MIL-OSI Global: Detroit’s next mayor can do these 3 things to support neighborhoods beyond downtown

    Source: The Conversation – USA – By Deyanira Nevárez Martínez, Assistant Professor of Urban and Regional Planning, Michigan State University

    Detroit stands at a pivotal moment.

    Mayor Mike Duggan is preparing to leave office after 11 years at the end of 2025. The city’s next leader will inherit not only a revitalizing downtown but also neighborhoods like Belmont, Petosky-Otsego and Van Steuban that are grappling with housing instability and decades of neglect and disinvestment.

    My research on housing insecurity, homelessness and urban governance, along with broader scholarship on equitable development, suggests that Detroit’s future depends on more than marquee developments like Michigan Central Station. It depends on strengthening neighborhoods from the ground up.

    Here are three strategies that could help Detroit’s next mayor build a just and resilient city by focusing on transitional neighborhoods:

    Stabilize housing and prevent displacement

    Stable housing is the foundation of thriving communities.

    Yet, housing instability in Detroit is both widespread and deeply entrenched. Before the pandemic, roughly 13% of Detroiters, or about 88,000 people, had been evicted or forced to move within the previous year. Families with children faced the highest risk.

    Many Detroiters had little choice but to remain in deteriorating housing, crowd into shared living arrangements or relocate elsewhere because of an estimated shortfall of 24,000 habitable housing units.

    While building more housing is essential, preventing displacement requires more than new construction. It also demands policies that preserve affordability and protect tenants. Researchers have found that household stabilization policies, such as legal representation in eviction court, rent control and property tax relief, have the most immediate impact.

    In Detroit, addressing the wave of expiring Low-Income Housing Tax Credit, or LIHTC, units remains an urgent priority. When units reach the end of their compliance period in this federal program, typically 15 years, owners are no longer required to maintain affordable rents and can raise prices. This “conversion to market rate” often results in the loss of affordable housing for low-income residents.

    In response to a projected loss of 10,000 units by 2023, Detroit launched the Preservation Partnership that secured affordability commitments for about 4,000 units. However, it remains difficult to determine exactly how many of the at-risk units were ultimately lost, and when, due to reporting lags, inconsistencies and overlapping affordability programs.

    Despite the city’s efforts, a 2023 analysis found that a substantial affordability gap persists, with many households unable to comfortably afford market-rate housing without spending more than 30% of their income, which is the standard set by the Department of Housing and Urban Development for affordability.
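    The 30% standard translates directly into a simple check. This is a minimal sketch; the function name and the income and rent figures are hypothetical:

```python
def is_cost_burdened(monthly_rent: float, monthly_income: float,
                     threshold: float = 0.30) -> bool:
    """HUD convention: housing costing more than 30% of income is unaffordable."""
    return monthly_rent / monthly_income > threshold


# A household earning $3,000/month: $1,050 rent is 35% of income (burdened),
# while $900 rent sits exactly at the 30% line.
```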

    The Michigan State Housing Development Authority continues to support affordable housing through tax credit allocations. However, a growing number of LIHTC properties in areas experiencing redevelopment are reaching the end of their affordability periods, putting them at risk of converting to market rate. National estimates suggest that nearly 350,000 units could lose affordability by 2030 and over 1 million by 2040 without sustained local and regional preservation efforts.

    Stabilizing Detroit’s housing market means ensuring that those who stayed during the hardest times are not pushed out as reinvestment takes hold. To achieve this, the next mayor could expand rental assistance and support tenant organizing efforts. This is particularly needed in transitional neighborhoods where renters come together to fight unfair evictions, improve housing conditions and push for more stable rents.

    Reclaim and reimagine vacant land for community benefit

    Many view Detroit’s vast tracts of vacant land, estimated in the hundreds of thousands of parcels, as blight. But they could also be seen as a public asset and a generational opportunity if brought together with the right public strategies.

    Land trusts can turn empty lots into valuable neighborhood spaces. A land trust is a nonprofit that holds land for the community and keeps housing affordable over the long term, a key to preventing displacement.

    Research also shows that greening strategies can improve community health, cohesion and equity. Cities like Philadelphia and Cleveland have launched urban greening initiatives that transform vacant lots into community gardens, small parks and tree-filled spaces. Research shows that these projects can help stabilize property values and strengthen neighborhoods by reducing blight, encouraging investment and creating safer, more attractive environments.

    Detroit has a land bank, a public agency that manages vacant and foreclosed properties. The city has also invested in some green infrastructure. But experts say that these efforts require stronger city leadership, teamwork across departments and real input from residents. These are areas where Detroit still has room to grow.

    By collaborating with residents to cocreate a land use vision, the next mayor could prioritize community ownership and ecological restoration instead of speculative redevelopment.

    Invest in social infrastructure

    Neighborhood strength is about more than buildings — it’s about people.

    As the Brookings Institution notes, economic opportunity is key to long-term safety, and investing in youth is a proven violence reduction strategy.

    Detroit’s neighborhoods have long faced a lack of investment in schools, recreation centers and social services. This leaves families vulnerable and fuels cycles of poverty and criminalization. Under these conditions, young people, especially Black and brown youth, are more likely to be policed, punished and pushed into the criminal justice system.

    A 2021 study found that the Detroit Public Schools Community District reported that 2% of its students experienced homelessness, even though 16% of households with children reported a recent eviction or forced move. This discrepancy points to major gaps in services and awareness. And when families fall through those gaps, it’s often children who suffer the most.

    Addressing these gaps requires investing in mental health services, youth development programs and violence prevention, rather than relying solely on policing or incarceration. These approaches recognize that true public safety comes from access to stable jobs, quality education and supportive services that meet people’s health, housing and social needs. Some of the most effective strategies include restorative justice in schools and outreach to older adults and residents experiencing homelessness.

    These are not luxuries. They are essential infrastructure for neighborhood vitality.

    The work ahead

    Detroit is often held up as a cautionary tale of urban decline, or more recently, as a comeback story driven by downtown revitalization. But in my opinion, its true test lies in what comes next: whether the city can translate momentum into equity for the communities that have long been left behind.

    The next mayor has the chance to shift the narrative by centering housing justice, reclaiming land for public good and investing in the people who make Detroit a city worth fighting for.

    Read more of our stories about Detroit.

    Deyanira Nevárez Martínez is a trustee of the Lansing School District Board of Education and is currently a candidate for the Lansing City Council Ward 2.

    ref. Detroit’s next mayor can do these 3 things to support neighborhoods beyond downtown – https://theconversation.com/detroits-next-mayor-can-do-these-3-things-to-support-neighborhoods-beyond-downtown-254755

    MIL OSI – Global Reports

  • MIL-OSI Global: How your genes interact with your environment changes your disease risk − new research counts the ways

    Source: The Conversation – USA – By Arun Durvasula, Assistant Professor of Population and Public Health Sciences, University of Southern California

    Nature and nurture both determine how likely you are to develop a particular disease. Hiroshi Watanabe/DigitalVision via Getty Images

    Sitting in my doctor’s examination room, I was surprised when she told me, “Genetics don’t really matter for chronic disease.” Rather, she continued, “A person’s lifestyle, what they eat, and how much they exercise, determine whether they get heart disease.”

    As a researcher who studies the genetics of disease, I don’t fully disagree – lifestyle factors play a large role in determining who gets a disease and who doesn’t. But they are far from the entire story. Since scientists mapped out the human genome in 2003, researchers have learned that genetics also play a large role in a person’s disease risk.

    Studies that focus on estimating disease heritability – that is, how much genetic differences explain differences in disease risk – usually attribute a substantial fraction of disease variation to genetics. Mutations across the entire genome seem to play a role in diseases such as Type 2 diabetes, which is about 17% heritable, and schizophrenia, which is about 80% heritable. In contrast to diseases such as Tay-Sachs or cystic fibrosis, where mutations in a single gene cause a disease, chronic diseases tend to be polygenic, meaning they’re influenced by multiple mutations at many genes across the whole genome.
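    "Polygenic" means the risk signal is a sum of many small contributions. One common way researchers summarize this is a polygenic score: a weighted sum of risk-allele counts across variants. The toy sketch below illustrates the arithmetic; the genotypes and effect sizes are made up.

```python
# Copies of the risk allele (0, 1 or 2) at five hypothetical variants
genotypes = [0, 1, 2, 1, 0]
# Per-allele effect sizes, as would be estimated from association studies
effects = [0.02, 0.05, 0.01, 0.03, 0.04]

# Each variant contributes little; the score aggregates them genome-wide
polygenic_score = sum(g * b for g, b in zip(genotypes, effects))
```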

    Every complex disease has both genetic and environmental risk factors. Most researchers study these factors separately because of technical challenges and a lack of large, uniform datasets. Although some have devised techniques to overcome these challenges, they haven’t yet been applied to a comprehensive set of diseases and environmental exposures.

    In our recently published research, my colleague Alkes Price and I developed tools to leverage newly available datasets to quantify the joint effects that genetic and environmental risk factors have on the biology underlying disease.

    Aspirin, genetics and colon cancer

    To illustrate the effect gene-environment interactions have on disease, let’s consider the example of aspirin use and colon cancer.

    In 2001, researchers at the Fred Hutchinson Cancer Research Center were studying how regularly taking aspirin decreased the risk of colon cancer. They wondered whether genetic mutations that slowed down how quickly the body broke down aspirin – meaning aspirin levels in the body would stay high longer – might increase the drug’s protective effect against colon cancer. They were right: Only patients with slow aspirin metabolism had a decreased risk of colon cancer, indicating that the effectiveness of a drug can depend on a person’s genetics.

    This raises the question of how genetics and different combinations of environmental exposures, such as the medications a patient is taking, can affect a person’s disease risk and how effective a treatment will be for them. How many cases of genetic variations directly influencing a drug’s effectiveness are there?

    Rather than ‘nature versus nurture,’ a more accurate way of describing gene-environment interactions is ‘nature through nurture.’

    The gene-environment interaction of colon cancer and aspirin is unusual. It involves a mutation at a single location in the genome that has a big effect on colon cancer risk. The past 25 years of human genetics have shown researchers that these sorts of large-effect mutations are rare.

    For example, an analysis found that the median effect of a genetic variant on height is only 0.14 millimeters. Instead, there are usually hundreds of variations that each have small but cumulative effects on a person’s disease risk, making them hard to find.

    How could researchers detect these small gene-environment interactions across hundreds of spots in the genome?

    Polygenic gene-environment interactions

    We started by looking for cases where genetic variants across the genome showed different effects on a person’s biology in different environments. Rather than trying to detect the small effects of each genetic variant one at a time, we aggregated data across the entire genome to turn these small individual effects into a large, genome-wide effect.

    Using data from the UK Biobank – a large database containing genetic and health data from about 500,000 people – we estimated the influence of millions of genetic variants on 33 complex traits and diseases, such as height and asthma. We grouped people based on environmental exposures such as air pollution, cigarette smoking and dietary patterns. Finally, we developed statistical tests to study how the effects of genetics on disease risk and biomarker levels varied with these exposures.
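    The core comparison behind this kind of analysis can be illustrated with a toy simulation. The sketch below is not the authors' method – it simulates a single hypothetical variant whose effect on a trait is twice as strong in one exposure group (labeled "smokers" purely for illustration) and recovers the per-group effects with ordinary least-squares regression, the simplest version of the comparison described above.

    ```python
    import random

    random.seed(0)  # fixed seed so the simulation is reproducible

    def slope(xs, ys):
        # Ordinary least-squares slope of y on x.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        return cov / var

    def simulate(effect, n=5000):
        # Genotype coded as 0, 1 or 2 copies of a variant;
        # trait = effect * genotype + random noise.
        genos, traits = [], []
        for _ in range(n):
            g = random.choice([0, 1, 2])
            genos.append(g)
            traits.append(effect * g + random.gauss(0, 1))
        return genos, traits

    # Hypothetical gene-environment interaction: the variant's effect
    # is twice as strong in the exposed group.
    g_ns, t_ns = simulate(effect=0.2)  # "nonsmokers"
    g_sm, t_sm = simulate(effect=0.4)  # "smokers"

    print(round(slope(g_ns, t_ns), 2))  # roughly 0.2
    print(round(slope(g_sm, t_sm), 2))  # roughly 0.4
    ```

    A real analysis aggregates such small per-variant differences across millions of variants, since no single estimate is precise enough on its own.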

    We found three types of gene-environment interactions.

    First, we found 19 pairs of complex traits and environmental exposures that are influenced by genetic variants across the genome. For example, the effect of genetics on white blood cell levels in the body differed between smokers and nonsmokers. When we compared the effects of genetic mutations between the two groups, the strength of gene-environment interaction suggested that smoking changes the way genetics influence white blood cell counts.

    Second, we looked for cases where the heritability of a trait varies depending on the environment. In other words, rather than some genetic variants having different effects in different environments, all of them are made stronger in some environments. For example, we found that the heritability of body mass index – the ratio of weight to height – increased by 5% for the most active people. This means genetics plays a larger role in BMI the more active you are. We found 28 such trait-environment pairs, including HDL cholesterol levels and alcohol consumption, as well as neuroticism and self-reported sleeplessness.

    Third, we looked for a type of gene-environment interaction called proportional or joint amplification. Here, genetic effects grow with increased environmental exposures, and vice versa. This results in a relatively equal balance of genetic and environmental effects on a trait. For example, as self-reported time spent watching television increased, both genetic and environmental variance increased for a person’s waist-to-hip ratio. This likely reflects the influence of other behaviors related to time spent watching television, such as decreased physical exercise. We found 15 such trait-environment pairs, including lung capacity and smoking, and glucose levels and alcohol consumption.

    Environmental factors, such as cigarette smoke and the medications you take, can interact with your genes in unexpected ways.
    jaouad.K/iStock via Getty Images Plus

    We also looked for cases where biological sex, instead of environmental exposures, influenced interactions with genes. Previous work had shown evidence of these gene-by-sex interactions, and we found additional examples of the effects of biological sex on all three types of gene-environment interactions. For example, we found that neuroticism had genetic effects that varied across sex.

    Finally, we also found that multiple types of gene-environment interactions can affect the same trait. For example, the effects of genetics on systolic blood pressure varied by sex, indicating that some genetic variants have different effects in men and women.

    New gene-environment models

    How do we make sense of these distinct types of gene-environment interactions? We argue that they can help researchers better understand the underlying biological mechanisms that lead from genetic and environmental risks to disease, and how genetic variation leads to differences in disease risk between people.

    Genes related to the same function work together in a unit called a pathway. For example, we can say that genes involved in making heme – the component of red blood cells that carries oxygen – are collectively part of the heme synthesis pathway. The resulting amounts of heme circulating in the body influence other biological processes, including ones that could lead to the development of anemia and cancer. Our model suggests that environmental exposures modify different parts of these pathways, which may explain why we saw different types of gene-environment interactions.

    In the future, these findings could lead to treatments that are more personalized based on a person’s genome. For example, clinicians might one day be able to tell whether someone is more likely to decrease their risk of heart disease by taking weight loss drugs or by exercising.

    Our results show how studying gene-environment interactions can tell researchers not only which genetic and environmental factors increase your risk of disease, but also where in the body things go wrong.

    Arun Durvasula has received funding from the National Institutes of Health and the National Institute of Science.

    ref. How your genes interact with your environment changes your disease risk − new research counts the ways – https://theconversation.com/how-your-genes-interact-with-your-environment-changes-your-disease-risk-new-research-counts-the-ways-252139

    MIL OSI – Global Reports

  • MIL-OSI Global: Challenges to high-performance computing threaten US innovation

    Source: The Conversation – USA – By Jack Dongarra, Emeritus Professor of Computer Science, University of Tennessee

    Oak Ridge National Laboratory’s Frontier supercomputer is one of the world’s fastest. Oak Ridge Leadership Computing Facility, CC BY

    High-performance computing, or HPC for short, might sound like something only scientists use in secret labs, but it’s actually one of the most important technologies in the world today. From predicting the weather to finding new medicines and even training artificial intelligence, high-performance computing systems help solve problems that are too hard or too big for regular computers.

    This technology has helped make huge discoveries in science and engineering over the past 40 years. But now, high-performance computing is at a turning point, and the choices the government, researchers and the technology industry make today could affect the future of innovation, national security and global leadership.

    High-performance computing systems are basically superpowerful computers made up of thousands or even millions of processors working together at the same time. They also use advanced memory and storage systems to move and save huge amounts of data quickly.

    With all this power, high-performance computing systems can run extremely detailed simulations and calculations. For example, they can simulate how a new drug interacts with the human body, or how a hurricane might move across the ocean. They’re also used in fields such as automotive design, energy production and space exploration.

    Lately, high-performance computing has become even more important because of artificial intelligence. AI models, especially the ones used for things such as voice recognition and self-driving cars, require enormous amounts of computing power to train. High-performance computing systems are well suited for this job. As a result, AI and high-performance computing are now working closely together, pushing each other forward.

    Lawrence Livermore National Laboratory’s supercomputer El Capitan is currently the world’s fastest.

    I’m a computer scientist with a long career working in high-performance computing. I’ve observed that high-performance computing systems are under more pressure than ever, with higher demands on the systems for speed, data and energy. At the same time, I see that high-performance computing faces some serious technical problems.

    Technical challenges

    One big challenge for high-performance computing is the gap between how fast processors are and how well memory systems can keep up with the processors’ output. Imagine having a superfast car but being stuck in traffic – it doesn’t help to have speed if the road can’t handle it. In the same way, high-performance computing processors often have to wait around because memory systems can’t send data quickly enough. This makes the whole system less efficient.

    Another problem is energy use. Today’s supercomputers use a huge amount of electricity, sometimes as much as a small town. That’s expensive and not very good for the environment. In the past, as computer parts got smaller, they also used less power. But that trend, called Dennard scaling, stopped in the mid-2000s. Now, making computers more powerful usually means they use more energy too. To fix this, researchers are looking for new ways to design both the hardware and the software of high-performance computing systems.

    There’s also a problem with the kinds of chips being made. The chip industry is mainly focused on AI, which works fine with lower-precision math such as 16-bit or 8-bit numbers. But many scientific applications still need 64-bit precision to be accurate. The more bits a number format uses, the more significant digits it can represent, and hence the greater its precision. If chip companies stop making the parts that scientists need, then it could become harder to do important research.
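    The gap between precisions is easy to demonstrate. The sketch below, a minimal illustration in Python, mimics 32-bit arithmetic by round-tripping each value through 4-byte IEEE 754 storage: a quantity of about 1e-8 survives a 64-bit computation but is rounded away entirely at 32 bits.

    ```python
    import struct

    def to_f32(x: float) -> float:
        # Round a Python (64-bit) float to the nearest IEEE 754 32-bit float
        # by packing it into 4 bytes and unpacking it again.
        return struct.unpack("f", struct.pack("f", x))[0]

    delta = 1e-8  # smaller than the ~1.2e-7 spacing between 32-bit floats near 1.0

    # 64-bit arithmetic: the small delta survives the computation.
    f64 = (1.0 + delta) - 1.0

    # Simulated 32-bit arithmetic: 1.0 + delta rounds back to exactly 1.0,
    # so the delta is lost entirely.
    f32 = to_f32(to_f32(1.0 + to_f32(delta)) - 1.0)

    print(f64)  # approximately 1e-8
    print(f32)  # 0.0
    ```

    In a long scientific simulation, millions of such lost increments compound, which is why 64-bit hardware still matters for research codes.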

    This report discusses how trends in semiconductor manufacturing and commercial priorities may diverge from the needs of the scientific computing community, and how a lack of tailored hardware could hinder progress in research.

    One solution might be to build custom chips for high-performance computing, but that’s expensive and complicated. Still, researchers are exploring new designs, including chiplets – small chips that can be combined like Lego bricks – to make high-precision processors more affordable.

    A global race

    Globally, many countries are investing heavily in high-performance computing. Europe has the EuroHPC program, which is building supercomputers in places such as Finland and Italy. Their goal is to reduce dependence on foreign technology and take the lead in areas such as climate modeling and personalized medicine. Japan built the Fugaku supercomputer, which supports both academic research and industrial work. China has also made major advances, using homegrown technology to build some of the world’s fastest computers. All of these countries’ governments understand that high-performance computing is key to their national security, economic strength and scientific leadership.

    The U.S.-China supercomputer rivalry explained.

    The United States, which has been a leader in high-performance computing for decades, recently completed the Department of Energy’s Exascale Computing Project. This project created computers that can perform a billion billion operations per second. That’s an incredible achievement. But even with that success, the U.S. still doesn’t have a clear, long-term plan for what comes next. Other countries are moving quickly, and without a national strategy, the U.S. risks falling behind.

    I believe that a U.S. national strategy should include funding new machines and training for people to use them. It would also include partnerships with universities, national labs and private companies. Most importantly, the plan would focus not just on hardware but also on the software and algorithms that make high-performance computing useful.

    Hopeful signs

    One exciting area for the future is quantum computing. This is a completely new way of doing computation based on the laws of physics at the atomic level. Quantum computers could someday solve problems that are impossible for regular computers. But they are still in the early stages and are likely to complement rather than replace traditional high-performance computing systems. That’s why it’s important to keep investing in both kinds of computing.

    The good news is that some steps have already been taken. The CHIPS and Science Act, passed in 2022, provides funding to expand chip manufacturing in the U.S. It also created an office to help turn scientific research into real-world products. The task force Vision for American Science and Technology, launched on Feb. 25, 2025, and led by American Association for the Advancement of Science CEO Sudip Parikh, aims to marshal nonprofits, academia and industry to help guide the government’s decisions. Private companies are also spending billions of dollars on data centers and AI infrastructure.

    All of these are positive signs, but they don’t fully solve the problem of how to support high-performance computing in the long run. In addition to short-term funding and infrastructure investments, this means:

    • Long-term federal investment in high-performance computing R&D, including advanced hardware, software and energy-efficient architectures.
    • Procurement and deployment of leadership-class computing systems at national labs and universities.
    • Workforce development, including training in parallel programming, numerical methods and AI-HPC integration.
    • Hardware road map alignment, ensuring commercial chip development remains compatible with the needs of scientific and engineering applications.
    • Sustainable funding models that prevent boom-and-bust cycles tied to one-off milestones or geopolitical urgency.
    • Public-private collaboration to bridge gaps between academic research, industry innovation and national security needs.

    High-performance computing is more than just fast computers. It’s the foundation of scientific discovery, economic growth and national security. With other countries pushing forward, the U.S. is under pressure to come up with a clear, coordinated plan. That means investing in new hardware, developing smarter software, training a skilled workforce and building partnerships between government, industry and academia. If the U.S. does that, the country can make sure high-performance computing continues to power innovation for decades to come.

    Jack Dongarra receives funding from the NSF and the DOE.

    ref. Challenges to high-performance computing threaten US innovation – https://theconversation.com/challenges-to-high-performance-computing-threaten-us-innovation-255188

    MIL OSI – Global Reports

  • MIL-OSI Global: Taking intermittent quizzes reduces achievement gaps and enhances online learning, even in highly distracting environments

    Source: The Conversation – USA – By Jason C.K. Chan, Professor of Psychology, Iowa State University

    More Americans are learning remotely. Drazen/E+ via Getty Images

    Inserting brief quiz questions into an online lecture can boost learning and may reduce racial achievement gaps, even when students are tuning in remotely in a distracting environment.

    That’s a main finding of our recent research published in Communications Psychology. With co-authors Dahwi Ahn, Hymnjyot Gill and Karl Szpunar, we present evidence that adding mini-quizzes into an online lecture in science, technology, engineering or mathematics – collectively known as STEM – can boost learning, especially for Black students.

    In our study, we included over 700 students from two large public universities and five two-year community colleges across the U.S. and Canada. All the students watched a 20-minute video lecture on a STEM topic. Each lecture was divided into four 5-minute segments, and following each segment, the students either answered four brief quiz questions or viewed four slides reviewing the content they’d just seen.

    This procedure was designed to mimic two kinds of instruction: one in which students must answer in-lecture questions, and one in which the instructor regularly reviews recently covered content in class.

    All students were tested on the lecture content both at the end of the lecture and a day later.

    When Black students in our study watched a lecture without intermittent quizzes, they underperformed Asian, white and Latino students by about 17%. This achievement gap was reduced to a statistically nonsignificant 3% when students answered intermittent quiz questions. We believe this is because the intermittent quizzes help students stay engaged with the lecture.

    To simulate the real-world environments that students face during online classes, we manipulated distractions by having some participants watch just the lecture; the rest watched the lecture with either distracting memes on the side or with TikTok videos playing next to it.

    Surprisingly, the TikTok videos enhanced learning for students who received review slides. They performed about 8% better on the end-of-day tests than those who were not shown any memes or videos, and about as well as the students who answered intermittent quiz questions. Our data further showed that this unexpected finding occurred because the TikTok videos encouraged participants to keep watching the lecture.

    For educators interested in using these tactics, it is important to know that the intermittent quizzing intervention only works if students must answer the questions. This is different from asking questions in a class and waiting for a volunteer to answer. As many teachers know, most students never answer questions in class. If students’ minds are wandering, the requirement of answering questions at regular intervals brings students’ attention back to the lecture.

    This intervention is also different from just giving students breaks during which they engage in other activities, such as doodling, answering brain teaser questions or playing a video game.

    Why it matters

    Online education has grown dramatically since the pandemic. Between 2004 and 2016, the percentage of college students enrolling in fully online degrees rose from 5% to 10%. But by 2022, that number nearly tripled to 27%.

    Relative to in-person classes, online classes are often associated with lower student engagement and higher failure and withdrawal rates.

    Research also finds that the racial achievement gaps documented in regular classroom learning are magnified in remote settings, likely due to unequal access to technology.

    Our study therefore offers a scalable, cost-effective way for schools to increase the effectiveness of online education for all students.

    What’s next?

    We are now exploring how to further refine this intervention through experimental work among both university and community college students.

    As opposed to observational studies, in which researchers track student behaviors and are subject to confounding and extraneous influences, our randomized-controlled study allows us to ascertain the effectiveness of the in-class intervention.

    Our ongoing research examines the optimal timing and frequency of in-lecture quizzes. We want to ensure that very frequent quizzes will not hinder student engagement or learning.

    The results of this study may help provide guidance to educators for optimal implementation of in-lecture quizzes.

    The Research Brief is a short take on interesting academic work.

    Jason C.K. Chan receives funding from the USA National Science Foundation.

    Zohara Assadipour does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Taking intermittent quizzes reduces achievement gaps and enhances online learning, even in highly distracting environments – https://theconversation.com/taking-intermittent-quizzes-reduces-achievement-gaps-and-enhances-online-learning-even-in-highly-distracting-environments-254046

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump is making it easier to fire federal workers, but they have some legal protections – 3 essential reads

    Source: The Conversation – USA – By Amy Lieberman, Politics + Society Editor, The Conversation

    An estimated 2% of federal civil servants could soon find their jobs are no longer secure under the Trump administration. iStock/Getty Images Plus

    The Trump administration is moving ahead with policy changes that would make it easier to fire some federal workers.

    The Office of Personnel Management, or OPM, filed proposed regulations in the Federal Register on April 23, 2025, that would reclassify about 50,000 career civil servants as “at-will” employees.

    Trump’s first administration attempted similar changes, known by some as Schedule F, but those plans were not implemented.

    An estimated 2% of the federal government’s nearly 3 million workers would then experience a shift in how the government classifies their jobs, with their classification renamed “Schedule Policy/Career.”

    It is not entirely clear which workers will be reclassified, since the process is largely at Trump’s discretion.

    “This will allow agencies to quickly remove employees from critical positions who engage in misconduct, perform poorly, or undermine the democratic process by intentionally subverting Presidential directives,” the Office of Personnel Management proposal reads.

    Trump supports these changes and says they can help remove corrupt or unqualified workers. Critics maintain that the changes will allow the administration to fire federal employees the administration sees as not supporting its agenda.

    Trump is expected to sign another executive order in the next few weeks that would formally reclassify certain federal job positions as Schedule Policy/Career.

    Here are three stories from The Conversation’s archive about the rights of federal civil servants.

    Former U.S. Agency for International Development employees terminated by the Trump administration collect their belongings at USAID headquarters in February 2025.
    Chip Somodevilla/Getty Images

    1. When a president fired half of the civil service

    Before Trump was elected to a second term in November 2024, he promised he would fire as many as 50,000 civil servants and replace them with people loyal to him.

    Nearly 200 years before that, President Andrew Jackson won the 1828 election, took office and promptly fired about half of the government’s civil service. He replaced these employees with political loyalists. This shift became known as the spoils system.

    “The result was not only an utterly incompetent administration, but widespread corruption,” write Sidney Shapiro, a professor of law at Wake Forest University, and Joseph P. Tomain, a professor of law at the University of Cincinnati.

    Samuel Swartwout, for example, was a former Army friend of Jackson’s whom the president selected to serve as collector of customs in New York. The job was well paid and prestigious, and “involved collecting taxes and fees on imported goods that arrived in the nation’s busiest port.”

    “But a congressional investigation showed that Swartwout had stolen a little more than US$1.2 million during his tenure, or about $40 million in today’s dollars,” Shapiro and Tomain write.

    Jackson also found that he could not legally influence hiring at all federal agencies, such as the U.S. Post Office, or easily place his own high-level appointees there.

    Today, some federal workers, including U.S. Border Patrol agents, would be exempt from Trump’s reclassification plans.




    Read more:
    Donald Trump wants to reinstate a spoils system in federal government by hiring political loyalists regardless of competence


    A political cartoon by Thomas Nast about civil service reform shows five people bowing down at a statue of Andrew Jackson.
    Fotosearch/Getty Images

    2. Federal workers have protections against partisan attacks

    Federal workers have had federal legal protections for their hiring and firing in place since the 1880s. This has helped federal employees thwart moves by presidents like Jackson aiming to “control a lot of workers who would serve the president,” and not the American people, according to James L. Perry, a scholar of public affairs at Indiana University, Bloomington.

    The 1883 Pendleton Act ensures that “government workers are hired based on their skills and abilities, not their political views,” Perry says. Congress updated this law in 1978 with the Civil Service Reform Act, which provides additional “protections for workers against being fired for political reasons.”

    “Those rules cover about 99% of staff in the federal civil service. Currently, there are just about 4,000 political appointees,” Perry told Jeff Inglis, an editor at The Conversation U.S., in February 2025.

    Perry points out that the Trump administration’s proposed restructuring would also likely be unpopular among Americans. As many as 87% of Americans have said they want a merit-based, politically neutral civil service, according to Perry.




    Read more:
    Trump’s moves to strip employment protections from federal workers threaten to make government function worse – not better



    3. A precarious moral and ethical tightrope

    Leading into Trump’s second term, federal government workers were advised by colleagues to “stay calm and keep their heads down,” and draw minimal attention to their work. This included avoiding direct use of terms like climate change and human rights, which they correctly anticipated the administration would target, according to Jaime L. Kucinskas, a sociologist at Hamilton College.

    There were some unknowns about how Trump’s second administration would act. But many civil servants also likely understood that “this pressure is real” under the new administration and could affect their day-to-day work, Kucinskas writes.

    Kucinskas interviewed 66 career civil servants from 2017 through 2020. A number of these workers told Kucinskas that working under the first Trump administration caused their mental health and morale to decline. The experience also worsened their productivity and innovation at work.

    “Among a sizable proportion of the people I spoke with, the pressures at work became too much; about a quarter of those I spoke with quit during the first Trump administration,” Kucinskas wrote in January 2025.

    Some civil servants chose to not speak openly about their work experiences with the first Trump administration, including mid-level civil service workers who watched as political appointees “fought over policy agendas levels above them,” according to Kucinskas. Other employees tried to simply keep their work moving, regardless of the politics at play.

    “Yet, even among those who felt most alone, I found they had many experiences in common with others who also felt isolated in trying to walk a precarious moral and ethical tightrope between their desire to faithfully serve the elected president – under chaotic leadership and insufficient and sometimes questionably legal guidance,” Kucinskas wrote, “and do quality work upholding the law and benefiting the nation and the American public.”




    Read more:
    Civil servants brace for a second Trump presidency



    This story is a roundup of articles from The Conversation’s archives.

    ref. Trump is making it easier to fire federal workers, but they have some legal protections – 3 essential reads – https://theconversation.com/trump-is-making-it-easier-to-fire-federal-workers-but-they-have-some-legal-protections-3-essential-reads-256313

    MIL OSI – Global Reports

  • MIL-OSI Global: South African companies aren’t innovating enough: why support during tough economic times matters

    Source: The Conversation – Africa – By Amy Kahn, Research Specialist at the Centre for Science, Technology and Innovation Indicators, Human Sciences Research Council

    South Africa’s innovation fund, announced by President Cyril Ramaphosa in the 2025 state of the nation address, was a response to the country’s urgent need for inclusive and sustainable economic growth.

    Evidence from South Africa shows that public financial support for innovation influences the investment that businesses make in innovation.

    The fund will focus on providing venture capital to tech start-ups from higher education institutions. In practice, its activities will complement several programmes that offer different forms of investment for innovation. These include the long-standing research and development tax incentives; the Technology Acquisition and Development Fund; and the SA SME Fund.

    For these programmes to be effective, it’s important to understand the factors that either inhibit or enable innovation activity and innovation outcomes in businesses.

    The South African Business Innovation Survey provides unique data on innovation activity and performance in the industry and services sectors. It’s performed over a three-year cycle by the Human Sciences Research Council’s Centre for Science, Technology and Innovation Indicators for the Department of Science, Technology and Innovation.

    Analysis of data from 2019-2021 provides important evidence for designing effective innovation policy support.

    A key finding of the survey was that 62% of South African businesses carried out innovation activities between 2019 and 2021. This was noticeably lower than in the previous (2014-2016) survey round, when the rate was 70%. The reason might be the impact of the COVID-19 pandemic. Many businesses said that they had to make changes to their existing innovation activities between 2019 and 2021.

    It is expected that the innovation-active rate may rise again in the next round. (Data for the 2022-2024 reference period will be collected in 2025.)

    These results show that support for businesses is even more pressing during times of economic crisis. Such support allows them to adapt and to mitigate the negative impacts on their innovation projects.

    South Africa’s business innovation picture

    Less than two-thirds of South African businesses were innovation-active during 2019-2021. In addition, a significant proportion had innovation activities that did not result in product or process innovations.

    An innovation-active business is one that undertakes activities intended to result in an innovation. Examples include research and experimental development, training or acquiring new equipment or machinery.

    An innovation can be a new or improved product (including goods or services), introduced to the market. Or it can be a new or improved business process, implemented by the business.

    Businesses that are innovation-active make a greater contribution to the economy and society compared with businesses that don’t innovate. The most recent Business Innovation Survey found that the computer sector had the highest proportion of businesses with innovation activities. It also found that innovation-active businesses had more skilled labour and greater access to external knowledge than other businesses.

    Building human capabilities was an important component of innovation activity. Nearly half (47%) of innovation-active businesses reported training as an activity.

    Businesses that did not carry out formal innovation activities (such as R&D or patenting), and did not collaborate with other institutions, were most likely to have abandoned or not completed their innovation activities.

    Innovations tended to be incremental rather than radical. More businesses with product innovations reported improving existing goods and services rather than making new goods and services available to their customers. Only 10% of product innovators had “new to the world” innovations. Just over 50% had innovations that were new to their business only.

    Innovation-active businesses were more likely to sell their goods and services in international markets. Businesses with novel product innovations attractive to international markets tended to come from the technical sectors and to acquire more intellectual property rights.

    Over a third (36%) of innovative businesses considered the high costs of innovating to be highly important. Competition and the dominance of established businesses were also commonly cited barriers. Just over 40% of businesses that operated in domestic markets only, and innovated by modifying existing products from elsewhere, had more than 50 competitors. Businesses that introduced new-to-market (more novel) products faced less competition.

    Innovation has two types of social effects. New goods or services can affect the lives of consumers and end users; and the innovation that happens within a business can have positive impacts on employees.

    The survey revealed both effects. The most important outcomes of innovations were improved working conditions, improved quality of goods and services, and improved quality of life and well-being.

    Growing South Africa’s innovation economy

    Encouraging innovation requires targeted incentives for business. But can the precision of the support be improved?

    We make a number of recommendations:

    • Support mechanisms, including funding, should be tailored for different targets. This can be done by grouping businesses according to the types of activities they undertake to innovate.

    • Businesses should also be grouped according to their R&D and collaboration activities. That makes it possible to design more targeted support mechanisms.

    For example, businesses that perform R&D and collaborate with others require interventions that support those specific activities.

    • Improve South Africa’s R&D expenditure as a proportion of its GDP. At the moment it is too low. Countries with a healthy ratio of gross domestic expenditure on R&D to GDP have delivered robust economic growth. Government can promote business R&D through policy tools like tax incentives.

    • Policy instruments for businesses that do not perform R&D or collaborate should encourage knowledge-intensive innovation and building interactive capabilities.

    • Group businesses based on their innovation outcomes to help design more tailored support. We suggest several examples of policy interventions based on the novelty of innovations, market reach, and the ability of businesses to develop innovations in-house.

    Finally, policymakers should recognise that most businesses aren’t able to produce radical innovations. Support should rather help them take smaller innovative steps.

    Gerard Ralphs and Katharine McKenzie contributed to the research for this article.

    The Human Sciences Research Council (HSRC) receives funding from the Department of Science, Technology and Innovation (DSTI) to conduct the Business Innovation Survey (BIS). Amy Kahn is the project manager of the BIS.

    ref. South African companies aren’t innovating enough: why support during tough economic times matters – https://theconversation.com/south-african-companies-arent-innovating-enough-why-support-during-tough-economic-times-matters-253881

    MIL OSI – Global Reports

  • MIL-OSI Global: As US ramps up fossil fuels, communities will have to adapt to the consequences − yet climate adaptation funding is on the chopping block

    Source: The Conversation – USA – By Bethany Bradley, Professor of Biogeography and Spatial Ecology, UMass Amherst

    Salt marshes protect shorelines, but they’re already struggling to survive sea-level rise. John Greim/LightRocket via Getty Images

    It’s no secret that warming temperatures, wildfires and flash floods are increasingly affecting lives across the United States. With the U.S. government now planning to ramp up fossil fuel use, the risks of these events are likely to become even more pronounced.

    That leaves a big question: Is the nation prepared to adapt to the consequences?

    For many years, federally funded scientists have been developing solutions to help reduce the harm climate change is causing in people’s lives and livelihoods. Yet, as with many other science programs, the White House is proposing to eliminate funding for climate adaptation science in the next federal budget, and reports suggest that the firing of federal climate adaptation scientists may be imminent.

    As researchers and directors of regional Climate Adaptation Science Centers, funded by the U.S. Geological Survey since 2011, we have seen firsthand the work these programs do to protect the nation’s natural resources and their successes in helping states and tribes build resilience to climate risks.

    Here are a few examples of the ways federally funded climate adaptation science conducted by university and federal researchers helps the nation weather the effects of climate change.

    Protecting communities against wildfire risk

    Wildfires have increasingly threatened communities and ecosystems across the U.S., exacerbated by worsening heat waves and drought.

    In the Southwest, researchers with the Climate Adaptation Science Centers are developing forecasting models to identify locations at greatest risk of wildfire at different times of year.

    Knowing where and when fire risks are highest allows communities to take steps to protect themselves, whether by carrying out controlled burns to remove dry vegetation, creating fire breaks to protect homes, managing invasive species that can leave forests more prone to devastating fires, or other measures.

    The solutions are created with forest and wildland managers to ensure projects are viable, effective and tailored to each area. The research is then integrated into best practices for managing wildfires. The researchers also help city planners find the most effective methods to reduce fire risks in wildlands near homes.

    Wildland firefighters and communities have limited resources. They need to know where the greatest risks exist to take preventive measures.
    Ethan Swope/Getty Images

    In Hawaii and the other Pacific islands, adaptation researchers have similarly worked to identify how drought, invasive species and land-use changes contribute to fire risk there. They use these results to create maps of high-risk fire zones to help communities take steps to reduce dry and dead undergrowth that could fuel fires and also plan for recovery after fires.

    Protecting shorelines and fisheries

    In the Northeast, salt marshes line large parts of the coast, providing natural buffers against storms by damping powerful ocean waves that would otherwise erode the shoreline. Their shallow, grassy waters also serve as important breeding grounds for valuable fish.

    However, these marshes are at risk of drowning as sea level rises faster than the sediment can build up.

    As greenhouse gases from burning fossil fuels and from other human activities accumulate in the atmosphere, they trap extra heat near Earth’s surface and in the oceans, raising temperatures. The rising temperatures melt glaciers and also cause thermal expansion of the oceans. Together, those processes are raising global sea level by about 1.3 inches per decade.

    Adaptation researchers with the Climate Adaptation Science Centers have been developing local flood projections for the regions’ unique oceanographic and geophysical conditions to help protect them. Those projections are essential to help natural resource managers and municipalities plan effectively for the future.

    Researchers are also collaborating with local and regional organizations on salt marsh restoration, including assessing how sediment builds up each marsh and creating procedures for restoring and monitoring the marshes.

    Saving salmon in Alaska and the Northwest

    In the Northwest and Alaska, salmon are struggling as temperatures rise in the streams they return to for spawning each year. Warm water can make them sluggish, putting them at greater risk from predators. When temperatures get too high, they can’t survive. Even in large rivers such as the Columbia, salmon are becoming heat stressed more often.

    Adaptation researchers in both regions have been evaluating the effectiveness of fish rescues – temporarily moving salmon into captivity as seasonal streams overheat or dry up due to drought.

    In Alaska, adaptation scientists have built broad partnerships with tribes, nonprofit organizations and government agencies to improve temperature measurements of remote streams, creating an early warning system for fisheries so managers can take steps to help salmon survive.

    Managing invasive species

    Rising temperatures can also expand the range of invasive species, which cost the U.S. economy billions of dollars each year in crop and forest losses and threaten native plants and animals.

    Researchers in the Northeast and Southeast Climate Adaptation Science Centers have been working to identify and prioritize the risks from invasive species that are expanding their ranges. That helps state managers eradicate these emerging threats before they become a problem. These regional invasive species networks have become the go-to source of climate-related scientific information for thousands of invasive species managers.

    The rise in the number of invasive species projected by 2050 is substantial in the Northeast and upper Midwest. Federally funded scientists develop these risk maps and work with local communities to head off invasive species damage.
    Regional Invasive Species and Climate Change Network

    The Northeast is a hot spot for invasive species, particularly for plants that can outcompete native wetland and grassland species and host pathogens that can harm native species.

    Without proactive assessments, invasive species management becomes more difficult. Once the damage has begun, managing invasive species becomes more expensive and less effective.

    Losing the nation’s ability to adapt wisely

    A key part of these projects is the strong working relationships built between scientists and the natural resource managers in state, community, tribal and government agencies who can put this knowledge into practice.

    With climate extremes likely to increase in the coming years, losing adaptation science will leave the United States even more vulnerable to future climate hazards.

    Bethany Bradley receives funding from the US Geological Survey as the University Director of the Northeast Climate Adaptation Science Center.

    Jia Hu receives funding from the US Geological Survey as the University Director of the Southwest Climate Adaptation Science Center.

    Meade Krosby receives funding from the US Geological Survey as the University Director of the Northwest Climate Adaptation Science Center.

    ref. As US ramps up fossil fuels, communities will have to adapt to the consequences − yet climate adaptation funding is on the chopping block – https://theconversation.com/as-us-ramps-up-fossil-fuels-communities-will-have-to-adapt-to-the-consequences-yet-climate-adaptation-funding-is-on-the-chopping-block-256307

    MIL OSI – Global Reports

  • MIL-OSI Global: Mark Carney’s cabinet: A course correction on gender, but there’s more work ahead

    Source: The Conversation – Canada – By Jeanette Ashe, Visiting Senior Research Fellow, King’s College London

    Canadian Prime Minister Mark Carney has unveiled his federal cabinet in his first major opportunity to define his newly elected government’s direction.

    For academics and activists concerned with gender equity, the cabinet announcement was a crucial litmus test for Carney’s approach to inclusive governance. Overall, Carney demonstrated a significant course correction with cabinet appointments that reflect a clear commitment to gender parity going forward.

    Carney entered office amid mounting scrutiny. His first cabinet, swiftly formed following his swearing-in as prime minister to replace Justin Trudeau, broke with his predecessor’s near decade-long tradition of gender-balanced cabinets.

    Controversially, Carney also eliminated the Minister for Women and Gender Equality (WAGE) upon taking office in March. This decision prompted sharp criticism from feminist organizations, including the Canadian Research Institute for the Advancement of Women, Women’s Shelters Canada, YWCA Canada and Action Canada for Sexual Health and Rights.

    Demanded a reversal

    They wrote and signed an open letter to Carney in March at the annual gathering of the United Nations Commission on the Status of Women.

    These groups viewed the removal of WAGE not only as a symbolic loss but as one with tangible, negative policy implications for millions of women and gender-diverse individuals across Canada. They argued: “Gender equality is not an afterthought; it is the backbone of a strong economy and resilient society.”

    Investing in feminist policies, including health care, childcare and pharmacare is, in other words, good for business, they said.

    In response to this organized feminist pushback, Carney has revised his approach. His cabinet comprises the prime minister and 28 full ministers: 14 women and 15 men in total. In addition, Carney appointed 10 junior ministers as secretaries of state: four women and six men. WAGE has also now been restored as a full ministry.

    Men hold the most substantive posts

    Reinstating gender parity in cabinet marks an improvement, but it comes with caveats. Although women now make up almost half of both cabinet tiers, numbers alone are not sufficient: substantive representation, in which women hold influential decision-making positions, is lacking.

    A closer look reveals Carney’s appointments may be seen as a form of gender-washing: symbolically inclusive, but not substantively so.

    Read more: Gender washing: seven kinds of marketing hypocrisy about empowering women

    Notably, men hold five of the six most powerful positions in his core cabinet: finance, justice and attorney general, government House leader, president of the King’s Privy Council and president of the Treasury Board. Only one of these key roles — foreign affairs — was awarded to a woman, Anita Anand.

    This reflects persistent gender trends identified by scholars like Roosmarijn de Geus and Peter Loewen, who found in 2021 that women are under-represented in Canada in the more influential or “masculine” portfolios such as finance and defence, and over-represented in those perceived as caring or social in nature.

    While women are at Canada’s cabinet table, most do not have seats with the greatest views. Equity in numbers does not yet translate to equity in influence.

    Formalizing gender parity

    Overall, Canada’s broader trends in political representation remain troubling. The 2025 election saw a decrease in both the proportion of women candidates and elected MPs.

    Canada has now slipped to 70th in the Inter-Parliamentary Union’s global ranking for women in national parliaments. With only 30.9 per cent of parliamentary seats held by women, Canada falls well below peer countries such as the United Kingdom (40.5 per cent) and New Zealand (45.5 per cent).

    Relying on the electoral fortunes of a single party to push for and uphold gender equity in Canada’s Parliament is unsustainable.

    Carney has now shown responsiveness to feminist public critique — a pragmatic move given the high number of women who supported the Liberal Party. If he wants to demonstrate ongoing commitment, his next step could be institutionalizing gender parity in ways that outlast any single leader or party. Such a change would ensure equity in politics is justice-based, not leader-based.

    More specifically, Parliament could amend the Parliament of Canada Act to require gender-balanced cabinets. Legislated gender quotas for political parties would also help ensure a minimum baseline of equitable representation in the House of Commons.

    Read more: Women in politics: To run or not to run?

    More than 100 countries have adopted such quotas. Canada could join them given most Canadians support their use.

    The Speaker of the House of Commons could also be tasked with producing annual gender-sensitive assessments of Parliament, policy outputs and government structures.

    Overall, Carney’s new cabinet is a win for feminist advocacy, but it cannot be the final word. Canada needs legal mechanisms, cultural shifts and institutionalized reforms to ensure its democratic institutions are truly representative.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Mark Carney’s cabinet: A course correction on gender, but there’s more work ahead – https://theconversation.com/mark-carneys-cabinet-a-course-correction-on-gender-but-theres-more-work-ahead-256541

    MIL OSI – Global Reports

  • MIL-OSI Global: ‘The pope is Peruvian!’ How 2 decades in South America shaped the vision of Pope Leo XIV

    Source: The Conversation – USA – By Matthew Casey-Pariseault, Associate Clinical Professor of History, Arizona State University

    Faithful hold a photo of Robert Prevost, who was elected Pope Leo XIV, in front of the Cathedral of Chiclayo, Peru, where he served as bishop for several years. AP Photo/Manuel Medina

    In his first appearance as Pope Leo XIV on the balcony of St. Peter’s Basilica, the man born Robert Francis Prevost spoke for 10 minutes in Italian. Then he transitioned to Spanish and, with a big grin, gave a greeting to his “beloved diocese of Chiclayo in Peru.”

    Many Peruvians were overjoyed with the election of Leo, whom they are proud to claim as a fellow citizen. “The Pope is Peruvian!” reported the live coverage on Latina Noticias, one of the main national networks. Other news outlets around Lima, where I live, shared similar headlines. Within minutes, all of Peru knew that the new pope, who was born and raised in Chicago, had served in Peru for over two decades and was naturalized as a citizen in 2015.

    During his time in the South American nation, he lived alongside his parishioners through a bloody civil war, a decade-long dictatorship and an unstable post-dictatorship period that has so far led to three former presidents being handed prison sentences. Amid these challenges, Prevost became part of Peruvian society – and, eventually, a leader within it.

    Prevost’s leadership roles in Chicago and Rome were essential in his formation. But as a scholar of religion in Latin America, I believe that it is his time in Peru that has best prepared him to take on the challenges of directing the global Catholic Church. In Peru, where Catholicism permeates public life, Prevost encountered deep social and political challenges in ways that bishops in many other countries may never face so directly.

    Missionary during war and dictatorship

    Prevost first arrived in Peru in 1985. A member of the Order of St. Augustine, the young man had been sent to its mission in Chulucanas, in the northern province of Piura. Chulucanas is about 30 miles east of the regional capital, where the desert coast begins to rise up into the Andes.

    After a year, Prevost left to finish his doctoral degree and serve briefly in Illinois. But he soon returned to Peru, serving as a missionary in the northern city of Trujillo. He stayed there through the remainder of the 1980s and 1990s, amid civil war between the government and various militant groups – primarily the Maoist guerrillas of Sendero Luminoso, or “Shining Path,” who aimed to install a communist state.

    The violence hit other regions more severely, but Trujillo and the surrounding area were home to car bombs, sabotaged electrical grids and brutal military dragnet operations. Prevost accompanied Peruvians through some of the darkest days of the country’s history.

    During these years, Prevost trained future clergy and served as a parish priest. One fellow Augustinian recalled that Prevost played a key role in recruiting and training Peruvian candidates to the priesthood. Prevost also founded the Trujillo parish of Nuestra Señora de Montserrat, where his parishioners knew him as “Padre Roberto.”

    As the country transitioned away from the civil war period, which ultimately left nearly 70,000 dead, Prevost remained in Peru. During the 1990s, President Alberto Fujimori’s government built a polarizing legacy by undermining democracy and citizenship rights while capturing the two most powerful guerrilla leaders.

    Peruvian families carry remains of recently identified relatives who were killed years ago, during the insurgency, to the cemetery for burial in 2022.
    AP Photo/Martin Mejia

    As I show in my research, religion and politics are deeply intertwined in Peru. By the 1990s, the Peruvian Catholic Church was divided between members who spoke out in defense of human rights and those who defended the often brutal tactics of the government. Juan Luis Cipriani Thorne, who was then the archbishop of Ayacucho – the Andean stronghold of Sendero Luminoso – became a spokesperson for the pro-state faction, framing defenders of human rights as apologists for terrorism.

    Prevost was among those who maintained a critical view of any party, including the government, that committed human rights abuses. Diego Garcia-Sayan, the country’s former minister of justice and foreign affairs, recently wrote an op-ed praising Prevost’s willingness to speak out against attempts to legalize the death penalty and to defend embattled human rights organizations.

    From Chiclayo to the Vatican

    After returning to the United States in 1999, Prevost rose through the leadership ranks of the Augustinian order. He was sent back to Peru in 2014, when Pope Francis named him the apostolic administrator, and later bishop, of the northern diocese of Chiclayo.

    As bishop, Prevost emerged as a voice for democracy and justice. In a 2017 public statement to national media, he urged former President Fujimori to “personally ask forgiveness for the great injustices that were committed and for which he was prosecuted.”

    During his tenure as bishop, Prevost helped guide his community through the COVID-19 pandemic. He also played a key role ministering to Chiclayo’s growing population of Venezuelan migrants.

    Venezuelan Betania Rodriguez on May 10, 2025, shows a photo taken with Pope Leo XIV at a migrant shelter in Chiclayo, Peru.
    AP Photo/Guadalupe Pardo

    Meanwhile, he was gaining the confidence of his peers, as well as Pope Francis. Prevost was given a leadership role in the Peruvian Conference of Bishops and played a central role during Francis’ 2018 visit to Peru. In 2023, Francis named Prevost prefect of the Dicastery of Bishops, the oversight body for naming new bishops across the world.

    Prevost took the position in Rome but was sad to leave Peru again. “This time, again, it will be hard for me to leave here,” Prevost told Peruvian media.

    In recent years, Prevost has taken on causes central to Francis’ papacy. He was a key actor in the Vatican investigations of a Peruvian organization, Sodalicio de Vida Cristiana, which was found to have committed dozens of sexual and psychological abuses dating back to the 1970s. Francis dissolved the organization in 2025. Prevost has also developed an increased focus on Indigenous and environmental rights, in line with Francis’ 2015 encyclical Laudato Si’ and his 2019 synod of bishops for the Amazon.

    Local celebrations

    Photographs and memes celebrating the Peruvian pope have flown around social media and WhatsApp groups in Peru. The photos of Prevost eating traditional dishes from the north coast are especially popular. AI-generated memes of the pope wearing the Peruvian national soccer jersey or eating ceviche with an Inca Kola soda are making the rounds.

    In Chiclayo and Trujillo, in addition to official church celebrations, thousands have taken to the streets to express their joy with placards and chants.

    Leo XIV has clearly brought the memory of his years in Peru with him to the Vatican. He has chosen Edgard Rimaycuna, a Peruvian priest whom the pope knew from his time in Chiclayo, as his personal secretary.

    I believe the challenges that Leo guided his parishioners through in two decades in Peru should offer valuable lessons for the new pope to build on the legacy of Francis, the first Latin American pope.

    Matthew Casey-Pariseault does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. ‘The pope is Peruvian!’ How 2 decades in South America shaped the vision of Pope Leo XIV – https://theconversation.com/the-pope-is-peruvian-how-2-decades-in-south-america-shaped-the-vision-of-pope-leo-xiv-256415

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump moves to gut low-income energy assistance as summer heat descends and electricity prices rise

    Source: The Conversation – USA – By Conor Harrison, Associate Professor of Economic Geography, University of South Carolina

    Cities like Houston get high humidity in addition to the heat, making summer almost unbearable without cooling. Brandon Bell/Getty Images

    The U.S. is headed into what forecasters expect to be one of the hottest summers on record, and millions of people across the country will struggle to pay their power bills as temperatures and energy costs rise.

    A 2023 national survey found that nearly 1 in 4 Americans were unable to pay their full energy bill for at least one month, and nearly 1 in 4 reported that they kept their homes at unsafe temperatures to save money. By 2025, updated polling indicated nearly 3 in 4 Americans are worried about rising energy costs.

    Conservative estimates suggest that utilities shut off power to over 3 million U.S. households each year because the residents cannot pay their bills.

    This problem of high energy prices isn’t lost on the Trump administration.

    On the first day of his second term in 2025, President Donald Trump declared a national energy emergency by executive order, saying that “high energy prices … devastate Americans, particularly those living on low- and fixed incomes.”

    Secretary of Energy Christopher Wright raised concerns about utility disconnections and outlined a mission to “shrink that number, with the target of zero.”

    Yet, the administration’s 2026 budget proposal zeros out funding for the Low Income Home Energy Assistance Program, or LIHEAP, the federal program that administers funding to help low-income households pay their utility bills. And on April 1, 2025, the administration laid off the entire staff of the LIHEAP office.

    During the hottest periods, even nighttime temperatures might not drop below 90 in Phoenix. Without air conditioning, homes can become dangerously hot.
    Patrick T. Fallon/AFP via Getty Images

    Many people already struggle to cobble together enough help from various sources to pay their power bills. As researchers who study energy insecurity, we believe gutting the federal office responsible for administering energy bill assistance will make it even harder for Americans to make ends meet.

    The high stakes of energy affordability

    We work with communities in South Carolina and Tennessee where many residents struggle to heat and cool their homes.

    We see how high energy prices force people to make dangerous trade-offs. Low-income households often find themselves choosing whether to buy necessities, pay for child care or pay their utility bills.

    One elderly person we spoke with for our research, Sarah, explained that she routinely forgoes buying medications in order to pay her utility bill. Another research participant who connects low-income families to energy bill assistance in Tennessee said: “I’ve gone into these homes, and it’s so hot. Your eyes roll in the back of your head. It’s like you can’t breathe. How do you sit in here? It’s just unreal.”

    Unfortunately, these stories are increasingly common, especially in low-income communities and communities of color.

    Electricity prices are predicted to rise with worsening climate change: More frequent heat waves and extreme weather events drive up demand and put pressure on the grid. Furthermore, rising energy demand from data centers – supercharged by the increasing energy use by artificial intelligence – is accelerating price increases.

    Shrinking resources for assistance

    LIHEAP, created in 1981, provides funding to states as block grants to help low-income families pay their utility bills. In fiscal year 2023, the program distributed US$6.1 billion in energy assistance, helping some 5.9 million households avoid losing power connections.

    The program’s small staff played critical roles in disbursing this money, providing implementation guidelines, monitoring state-level fund management and tracking and evaluating program effectiveness.

    A long line of utility customers wait to apply for help from the Low-Income Energy Assistance Program in Trenton, N.J., in 2011. In 2023, around 6 million households benefited from LIHEAP.
    AP Photo/Mel Evans

    LIHEAP has historically prioritized heating assistance in cold-weather states over cooling assistance in warmer states. However, recent research shows a need to revisit the allocation formula to address the increasing need for air conditioning. The layoffs removed staff who could direct this work.

    It is unlikely that other sources of funding can fill in the gaps if states do not receive LIHEAP funds from the federal government. The program’s funding has never been high enough to meet the need. In 2020, LIHEAP provided assistance to just 16% of eligible households.

    Our research has found that, in practice, many households rely on a range of local nonprofits, faith-based organizations and informal networks of family and friends to help them pay their bills and keep the power on.

    For example, a research participant named Deborah reported that when faced with a utility shut-off, she “drove from church to church to church” in search of assistance. United Way in South Carolina received over 16,000 calls from people seeking help to pay their utility bills in 2023.

    These charitable services are an important lifeline for many, especially in the communities we study in the South. However, research has shown that faith-based programs do not have the reach of public programs.

    Without LIHEAP, the limited funds provided by nonprofits and the personal connections that people patch together will be stretched even thinner, especially as other charitable services, such as food banks, also face funding cuts.

    What’s ahead

    The $4.1 billion that Congress allocated to LIHEAP for the 2025 fiscal year, which ends Sept. 30, has already been disbursed. Going forward, however, cuts to LIHEAP staff affect its ability to respond to growing need. Congress now has to decide if it will kill the program’s future funding as well.

    Maricopa County in Arizona, home to Phoenix, illustrates what’s at stake. Annual heat-related deaths there have risen roughly tenfold in the past decade, from 61 to 602. Hundreds of these deaths occurred indoors.

    Cooling becomes essential during Arizona’s extreme summers. Maricopa County, home to Phoenix, reported more than 600 heat-related deaths in 2024.
    AP Photo/Ross D. Franklin

    We believe gutting LIHEAP puts the goal of energy affordability for all Americans – and Americans’ lives – in jeopardy. Until more affordable energy sources, such as solar and wind power, can be scaled up, an expansion of federal assistance programs is needed, not a contraction.

    Increasing the reach and funding of LIHEAP is one option. Making home weatherization programs more effective is another.

    Governments could also require utilities to forgive past-due bills and end utility shut-offs during the hottest and coldest months. About two dozen states currently have rules to prevent shut-offs during the worst summer heat.

    For now, the cuts mean more pressure on nonprofits, faith-based organizations and informal networks. Looking ahead to another exceptionally hot summer, we can only hope that cuts to LIHEAP staff don’t foreshadow a growing yet preventable death toll.

    Etienne Toussaint, a law professor at the University of South Carolina, and Ann Eisenberg, a law professor at West Virginia University, contributed to this article.

    Conor Harrison receives funding from the National Science Foundation and the Alfred P. Sloan Foundation.

    Elena Louder receives funding from the Alfred P. Sloan Foundation.

    Nikki Luke receives funding from the Alfred P. Sloan Foundation. She previously worked at the U.S. Department of Energy.

    Shelley Welton receives funding from the Alfred P. Sloan Foundation.

    ref. Trump moves to gut low-income energy assistance as summer heat descends and electricity prices rise – https://theconversation.com/trump-moves-to-gut-low-income-energy-assistance-as-summer-heat-descends-and-electricity-prices-rise-256194
