Category: Academic Analysis

  • MIL-OSI Submissions: Could electric brain stimulation lead to better maths skills?

    Source: The Conversation – UK – By Roi Cohen Kadosh, Professor of Cognitive Neuroscience, University of Surrey

    Triff/Shutterstock

    A painless, non-invasive brain stimulation technique can significantly improve how young adults learn maths, my colleagues and I found in a recent study. In a paper in PLOS Biology, we describe how this might be most helpful for those who are likely to struggle with mathematical learning because of how their brain areas involved in this skill communicate with each other.

    Maths is essential for many jobs, especially in science, technology, engineering and finance. However, a 2016 OECD report suggested that a large proportion of adults in developed countries (24% to 29%) have maths skills no better than those of a typical seven-year-old. This lack of numeracy can contribute to lower income, poor health, reduced political participation and even diminished trust in others.

    Education often widens rather than closes the gap between high and low achievers, a phenomenon known as the Matthew effect. Those who start with an advantage, such as being able to read more words when starting school, tend to pull further ahead. Stronger educational achievement has also been associated with socioeconomic status, higher motivation and greater engagement with material learned during a class.

    Biological factors, such as genes, brain connectivity and chemical signalling, have been shown in some studies to play a stronger role in learning outcomes than environmental ones. This has been well documented in several areas, including maths, where differences in biology may partly explain educational achievement.


    To explore this question, we recruited 72 young adults (18–30 years old) and taught them new maths calculation techniques over five days. Some received a placebo treatment. Others received transcranial random noise stimulation (tRNS), which delivers gentle electrical currents to the brain. It is painless and often imperceptible, unless you focus hard to try and sense it.

    It is possible tRNS may cause long-term side effects, but in previous studies my team assessed participants for cognitive side effects and found no evidence of any.

    Could tRNS help people improve their maths skills?
    Prostock-studio/Shutterstock

    Participants who received tRNS were randomly assigned to receive it in one of two different brain areas. Some received it over the dorsolateral prefrontal cortex, a region critical for memory, attention and the acquisition of new cognitive skills. Others had tRNS over the posterior parietal cortex, which processes maths information, mainly once learning has been accomplished.

    Before and after the training, we also scanned their brains and measured levels of key neurochemicals such as gamma-aminobutyric acid (GABA), which we showed previously, in a 2021 study, to play a role in brain plasticity and learning, including maths.

    Some participants started with weaker connections between the prefrontal and parietal brain regions, a biological profile that is associated with poorer learning. The study results showed these participants made significant gains in learning when they received tRNS over the prefrontal cortex.

    Stimulation helped them catch up with peers who had stronger natural connectivity. This finding shows the critical role of the prefrontal cortex in learning and could help reduce educational inequalities that are grounded in neurobiology.

    How does this work? One explanation lies in a principle called stochastic resonance. This is when a weak signal becomes clearer when a small amount of random noise is added.

    In the brain, tRNS may enhance learning by gently boosting the activity of underperforming neurons, helping them get closer to the point at which they become active and send signals. This is a point known as the “firing threshold”, especially in people whose brain activity is suboptimal for a task like maths learning.
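    The stochastic resonance idea can be sketched in a toy simulation (illustrative only, and not the study's model): a signal that sits below the firing threshold is never detected on its own, but adding a little random noise lets it cross the threshold on some trials.

```python
import random

def detection_rate(signal, threshold=1.0, noise_sd=0.0, trials=2000):
    """Fraction of trials in which a weak, sub-threshold signal crosses
    the firing threshold once random noise is added to it."""
    hits = sum(1 for _ in range(trials)
               if signal + random.gauss(0.0, noise_sd) >= threshold)
    return hits / trials

weak_signal = 0.8  # below the threshold of 1.0, so never detected alone
print(detection_rate(weak_signal, noise_sd=0.0))  # 0.0: no noise, no firing
print(detection_rate(weak_signal, noise_sd=0.3))  # roughly 0.25: noise pushes it over sometimes
```

    Too much noise, of course, would swamp the signal entirely; stochastic resonance only helps within a moderate noise range.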

    It is important to note what this technique does not do. It does not make the best
    learners even better. That is what makes this approach promising for bridging gaps,
    not widening them. This form of brain stimulation helps level the playing field.

    Our study focused on healthy, high-performing university students. But in similar studies on children with maths learning disabilities (2017) and with attention-deficit/hyperactivity disorder (2023) my colleagues and I found tRNS seemed to improve their learning and performance in cognitive training.

    I argue our findings could open a new direction in education. The biology of the learner matters, and with advances in knowledge and technology, we can develop tools that act on the brain directly, not just work around it. This could give more people the chance to get the best benefit from education.

    In time, perhaps personalised, brain-based interventions like tRNS could support learners who are being left behind not because of poor teaching or personal circumstances, but because of natural differences in how their brains work.

    Of course, very often education systems aren’t operating to their full potential because of inadequate resources, social disadvantage or systemic barriers. And so any brain-based tools must go hand-in-hand with efforts to tackle these obstacles.

    Roi Cohen Kadosh serves on the scientific advisory boards of Neuroelectrics Inc., and Innosphere Ltd. He is the founder and shareholder of Cognite Neurotechnology Ltd. He received funding from the Wellcome Trust, UKRI, the British Academy, IARPA, DASA, Joy Ventures, the James S McDonnell Foundation, and the European Union. He is affiliated with the University of Surrey.

    ref. Could electric brain stimulation lead to better maths skills? – https://theconversation.com/could-electric-brain-stimulation-lead-to-better-maths-skills-260134

    MIL OSI

  • MIL-Evening Report: What did ancient Rome smell like? Honestly, often pretty rank

    Source: The Conversation (Au and NZ) – By Thomas J. Derrick, Gale Research Fellow in Ancient Glass and Material Culture, Macquarie University

    minoandriani/Getty Images

    The roar of the arena crowd, the bustle of the Roman forum, the grand temples, the Roman army in red with glistening shields and armour – when people imagine ancient Rome, they often think of its sights and sounds. We know less, however, about the scents of ancient Rome.

    We cannot, of course, go back and sniff to find out. But the literary texts, physical remains of structures, objects, and environmental evidence (such as plants and animals) can offer clues.

    So what might ancient Rome have smelled like?

    Honestly, often pretty rank

    In describing the smells of plants, author and naturalist Pliny the Elder uses words such as iucundus (agreeable), acutus (pungent), vis (strong), or dilutus (weak).

    None of that language is particularly evocative in its power to transport us back in time, unfortunately.

    But we can probably safely assume that, in many areas, Rome was likely pretty dirty and rank-smelling. Property owners did not commonly connect their toilets to the sewers in large Roman towns and cities – perhaps fearing rodent incursions or odours.

    Roman sewers were more like storm drains, and served to take standing water away from public areas.

    Professionals collected faeces for fertiliser and urine for cloth processing from domestic and public latrines and cesspits. Chamber pots were also used, which could later be dumped in cesspits.

    This waste disposal process was just for those who could afford to live in houses; many lived in small, non-domestic spaces, barely furnished apartments, or on the streets.

    A common whiff in the Roman city would have come from the animals and the waste they created. Roman bakeries frequently used large lava stone mills (or “querns”) turned by mules or donkeys. Then there was the smell of pack animals and livestock being brought into town for slaughter or sale.

    Animals were part of life in the Roman empire.
    Marco_Piunti/Getty Images

    The large “stepping-stones” still seen in the streets of Pompeii were likely so people could cross streets and avoid the assorted feculence that covered the paving stones.

    Disposal of corpses (animal and human) was not formulaic. Depending on the class of the person who had died, people might well have been left out in the open without cremation or burial.

    Bodies, potentially decaying, were a more common sight in ancient Rome than now.

    Suetonius, writing in the first century CE, famously wrote of a dog carrying a severed human hand to the dining table of the Emperor Vespasian.

    Deodorants and toothpastes

    In a world devoid of modern scented products – and without daily bathing for most of the population – ancient Roman settlements would have smelt of body odour.

    Classical literature has some recipes for toothpaste and even deodorants.

    However, many of the deodorants were to be used orally (chewed or swallowed) to stop one’s armpits smelling.

    One was made by boiling golden thistle root in fine wine to induce urination (which was thought to flush out odour).

    The Roman baths would likely not have been as hygienic as they may appear to tourists visiting today. A small tub in a public bath could hold between eight and 12 bathers.

    The Romans had soap, but it wasn’t commonly used for personal hygiene. Olive oil (including scented oil) was preferred. It was scraped off the skin with a strigil (a bronze curved tool).

    This oil and skin combination was then discarded (maybe even slung at a wall). Baths had drains – but as oil and water don’t mix, it was likely pretty grimy.

    Scented perfumes

    The Romans did have perfumes and incense.

    The invention of glassblowing in the late first century BCE (likely in Roman-controlled Jerusalem) made glass readily available, and glass perfume bottles are a common archaeological find.

    Animal and plant fats were infused with scents – such as rose, cinnamon, iris, frankincense and saffron – and were mixed with medicinal ingredients and pigments.

    The roses of Paestum in Campania (southern Italy) were particularly prized, and a perfume shop has even been excavated in the city’s Roman forum.

    The trading power of the vast Roman empire meant spices could be sourced from India and the surrounding regions.

    There were warehouses for storing spices such as pepper, cinnamon and myrrh in the centre of Rome.

    In a recent Oxford Journal of Archaeology article, researcher Cecilie Brøns writes that even ancient statues could be perfumed with scented oils.

    Sources frequently do not describe the smell of perfumes used to anoint the statues, but a predominantly rose-based perfume is specifically mentioned for this purpose in inscriptions from the Greek city of Delos (at which archaeologists have also identified perfume workshops). Beeswax was likely added to perfumes as a stabiliser.

    Enhancing the scent of statues (particularly those of gods and goddesses) with perfumes and garlands was important in their veneration and worship.

    An olfactory onslaught

    The ancient city would have smelt like human waste, wood smoke, rotting and decay, cremating flesh, cooking food, perfumes and incense, and many other things.

    It sounds awful to a modern person, but it seems the Romans did not complain about the smell of the ancient city that much.

    Perhaps, as historian Neville Morley has suggested, to them these were the smells of home or even of the height of civilisation.

    Thomas J. Derrick does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What did ancient Rome smell like? Honestly, often pretty rank – https://theconversation.com/what-did-ancient-rome-smell-like-honestly-often-pretty-rank-257111

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: New laws to make it harder for large Australian and foreign companies to avoid paying tax

    Source: The Conversation (Au and NZ) – By Kerrie Sadiq, Professor of Taxation, QUT Business School, and ARC Future Fellow, Queensland University of Technology

    The Conversation, CC BY

    The beginning of the financial year means for the first time in Australia the public will see previously unreleased tax reports produced by multinational taxpayers.

    These documents, known as country-by-country reports, or CbCR for short, contain information about the tax practices of large Australian businesses and foreign businesses operating in Australia. This information, previously only available to the taxpayer and the Australian Tax Office, will be made public.

    Country-by-country reports, announced in the October 2022-2023 budget, were introduced with other measures designed to improve corporate tax behaviour. The reports will be released from this week as part of corporate reporting practices. Multinationals have 12 months to comply.

    A fairer tax system

    Country-by-country reporting forms part of the government’s multinational tax integrity election commitment package. The aim is to ensure a fairer and more sustainable tax system. Large firms will be required to publish a statement on their global activities plus tax information for each jurisdiction in which they operate.

    Until now, large multinationals only had to prepare annual consolidated financial statements under international financial reporting standards. The traditional reports aggregate results and provide limited geographic reporting information.

    Traditional high-level reporting allows multinationals to conceal their country-level activities. This hides questionable tax practices.

    Country-by-country reporting allows us to better see where a multinational operates. More importantly, the amount of activity in each jurisdiction is reported. The information provides clues as to whether artificial profit shifting has occurred.

    Anyone interested can uncover details about how multinationals structure their global operations. Information may reveal a misalignment between the company’s real economic presence in a country, the profits they book and taxes they pay in that country.

    Bringing Australia into line with the EU

    Country-by-country reporting is not new. It is the requirement that the information be made public that has changed.

    Australian firms have been required to provide such reports to the Australian Tax Office since 2016. However, the information has been confidential.

    The new public disclosure law brings Australia into line with the European Union, which introduced a similar requirement for large firms last year.

    How country-by-country reporting works

    A taxpayer with annual global income above A$1 billion and at least A$10 million of Australian-sourced turnover will need to produce a report. The obligation to disclose rests with the parent entity, no matter where it is located.
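    As a rough sketch (function and variable names here are mine, not taken from the legislation), the two thresholds described above amount to a simple conjunctive test:

```python
def must_report(global_income_aud: float, australian_turnover_aud: float) -> bool:
    """Illustrative check of the reporting thresholds described above.

    Names and structure are hypothetical, not drawn from the legislation.
    """
    # Annual global income must be above A$1 billion...
    over_global = global_income_aud > 1_000_000_000
    # ...and at least A$10 million of turnover must be Australian-sourced.
    enough_local = australian_turnover_aud >= 10_000_000
    return over_global and enough_local

print(must_report(2_500_000_000, 50_000_000))  # True: both thresholds met
print(must_report(2_500_000_000, 5_000_000))   # False: too little Australian-sourced turnover
```

    A large foreign parent with only trivial Australian turnover would fall outside the regime, which is why both conditions must hold.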

    Australia’s largest companies, including mining giants Rio Tinto and BHP, biotech firm CSL, and investment bank Macquarie Group, will be among those expected to report, as will foreign tech behemoths such as Apple, Amazon, Microsoft and Meta.

    These tech giants are the same US firms likely to be excluded from the global minimum tax rules under a G7 agreement reached last week. Under the agreement, US multinationals were exempted from paying more corporate tax overseas. Other G7 members gave in to protect their own companies from the US’s threat of retaliation.

    Under the law change in Australia, a parent entity will provide its name, the names of all members of the group, a description of their approach to tax, and information about operations in certain countries. Included on the list are countries that attract multinationals due to reduced tax obligations, such as Singapore, Switzerland, and the Bahamas.

    Everyone will be able to see where a multinational is operating. They will also see the types of business activities conducted, number of employees, assets, revenue, and taxes paid. Large profits in a country but little business activity and very few employees may raise questions, especially if a country has a low tax rate.

    Benefits of better transparency

    Access to the extra information will help investors assess the tax and reputational risk of a firm. A multinational that shifts profits to low tax countries may be audited and pay extra tax and penalties.

    Increased transparency allows greater scrutiny. In turn, it is hoped multinationals will reduce aggressive tax planning due to potential risk to their reputation.

    If multinationals shift less taxable profit out of Australia to low-tax or no-tax jurisdictions, Australia will receive a greater share of much-needed corporate tax revenue.

    Reducing profit shifting

    Recent academic research on public country-by-country reporting reveals it provides additional information to better identify tax haven activity. However, it does not result in a significant drop in corporate tax avoidance.

    Increased tax transparency helps investors and tax authorities better understand a multinational’s economic and geographic tax footprint. It is especially important now that US giants look set to be excluded from the 15% global minimum tax rules. Transparency by itself, however, does not lead to multinationals paying more corporate tax.

    By its very nature, tax avoidance is legal but pushes the boundaries by going against the spirit of the law. Indeed, many large multinationals argue tax is a legal obligation and is not voluntary. They maintain they pay the tax required of them according to the law.

    Undoubtedly, Australia’s new public country-by-country regime is a positive step for tax transparency. As a country initiative, it has been applauded as groundbreaking and world leading. However, it is not a panacea to corporate tax avoidance.

    To limit corporate tax avoidance and have multinationals pay more corporate taxes, we must get to the heart of the problem. We must change the law that dictates the way multinationals are taxed.

    Kerrie Sadiq currently receives funding from the Australian Research Council. She has previously received research grants from CPA Australia and CAANZ.

    Rodney Brown has previously received research grants from CPA Australia and CAANZ.

    ref. News laws to make it harder for large Australian and foreign companies to avoid paying tax – https://theconversation.com/news-laws-to-make-it-harder-for-large-australian-and-foreign-companies-to-avoid-paying-tax-260004

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: Farming within Earth’s limits is still possible – but it will take a Herculean effort

    Source: The Conversation (Au and NZ) – By Michalis Hadjikakou, Senior Lecturer in Environmental Sustainability, School of Life and Environmental Sciences, Faculty of Science, Engineering & Built Environment, Deakin University

    Patrick Pleul/Getty

    The way we currently produce and consume food takes a big toll on the environment.

    Worldwide, farming is responsible for more than 20% of greenhouse gas emissions and uses more than 70% of all fresh water taken from rivers, lakes and groundwater. It’s the leading driver of deforestation and nutrient pollution, largely from fertiliser run-off. All of these pose a serious threat to ecosystems.

    If this sounds serious, it’s because it is. If emissions and land clearing trends continue, the world’s food system alone could make it impossible to meet climate targets. If we continue eating and producing food in the same way we are now, we will almost certainly exceed crucial environmental limits by 2050.

    What can be done? In our new research, we looked for ways to keep the food system within environmental limits by 2050. We found only one approach worked: combine high-impact changes such as shifting to flexitarian (low-meat) diets, improving farming practices and reducing food waste.

    Why will farming take us past environmental limits?

    Environmental limits are also known as planetary boundaries. These nine boundaries are Earth’s natural safety limits. They range from freshwater resources to the biosphere to the climate. Human activities have pushed past six out of nine safe boundaries through clearing too much land, overusing water for irrigation, overapplying fertilisers or emitting more than our shrinking carbon budget permits.

    If we cross these thresholds, we risk dangerous and irreversible changes to the conditions supporting a stable planet.

    Transforming the way we farm and eat is essential if we are to keep humanity in a safe operating space within environmental limits.

    The 2021 documentary Breaking Boundaries focused on the very real dangers of breaching planetary limits.

    What does this transformation look like?

    The challenge of making food production sustainable is long-running. Previous research has compared the effectiveness of different changes authorities and consumers could make. But most studies used different models, making it hard to compare changes.

    To overcome this problem, we synthesised information from previous studies and built a database of thousands of future food system scenarios and possible changes. Then we performed a meta-analysis to combine data from multiple studies and draw more robust conclusions.

    This approach lets policymakers and researchers compare apples with apples, and see which combinations of changes would let us stay within crucial safety limits by 2050.

    We focused on four vital indicators: how much land and water is used for farming; the amount of greenhouse gases emitted; and the flows of two key nutrients, nitrogen and phosphorus.

    What works best?

    What stood out was the sheer variation in effectiveness. Some changes would work very well across several areas, while others would take a lot of effort for not enough result.

    Two changes punch well above their weight on land, water and emissions.

    The first is shifting to a flexitarian diet with fewer foods sourced from animals. This is similar to traditional regional diets such as the Mediterranean and Okinawan diets, where meat and dairy are eaten in much smaller proportions compared to whole grains, fruits, vegetables, nuts and legumes.

    Returning to this diet could shrink how much land we use for farming by almost a quarter (24%), cut water demand by 14% and slash greenhouse gas emissions by 47%.

    Traditional diets such as the Mediterranean diet rely less on animal products and more on plants, nuts, oils and legumes.
    monticello/Shutterstock

    The second is breeding better livestock. Livestock today are much better at converting their feed into meat or milk than their predecessors. But this could be better still. More productive animals could enable an 18% reduction in land use, a 10% drop in water use and a 34% cut to emissions.

    Modern fertilisers have made it possible to produce many more crops and fodder. But if too much fertiliser is applied, it can wash off after rain and pollute waterways.

    Better timed and more precise application of fertiliser is by far the best way to cut nutrient pollution. Major improvements here could cut nitrogen pollution by 39% and phosphorus pollution by 42%. As a side benefit, it could save farmers money.



    Increasing crop yields, lowering agricultural emissions through better soil management and other practices, and taking up technologies such as methane-reducing supplements can significantly reduce our risk of exceeding environmental limits. So too can cutting food waste and using water more wisely in farming. Our extended results show the relative benefits of ten possible interventions.

    There is no silver bullet

    We found no single change was up to the task of making food production and consumption sustainable.

    We considered over a million possible combinations of changes. Of these combinations, only a tiny fraction – 0.02% – give us a fighting chance of staying within all environmental limits.
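    The kind of combination scan described above can be sketched with a toy model. All reduction figures below are invented for illustration, and the toy assumes effects simply add up, which the real meta-analysis does not: each intervention is applied at one of several ambition levels, and a combination "passes" only if every indicator stays within its limit.

```python
from itertools import product

# Fractional reduction per intervention level: (land, water, emissions).
# These numbers are made up for illustration, not the study's data.
LEVELS = {
    "diet_shift":    [(0.0, 0.0, 0.0), (0.12, 0.07, 0.24), (0.24, 0.14, 0.47)],
    "livestock":     [(0.0, 0.0, 0.0), (0.09, 0.05, 0.17), (0.18, 0.10, 0.34)],
    "waste_halving": [(0.0, 0.0, 0.0), (0.06, 0.04, 0.08), (0.12, 0.08, 0.16)],
}
# Reduction needed on each indicator to stay within its (hypothetical) limit.
REQUIRED = (0.30, 0.20, 0.60)

combos = list(product(*LEVELS.values()))  # every combination of ambition levels
passing = 0
for combo in combos:
    # Naively sum each indicator's reductions across the chosen levels.
    totals = [sum(effect[i] for effect in combo) for i in range(3)]
    if all(t >= need for t, need in zip(totals, REQUIRED)):
        passing += 1

print(f"{passing}/{len(combos)} combinations stay within all limits")
```

    Even in this tiny toy, only a minority of combinations clear every limit at once, which mirrors the study's finding that success requires stacking several ambitious changes together.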

    In almost all successful combinations, the world would need to make significant cuts to the share of calories that comes from animals, make big improvements to fertiliser use and nutrient management, and focus research and development on farming land and livestock with fewer resources and emissions.

    Most successful combinations also rely on halving food waste and reducing overconsumption.

    Is it still possible?

    Farming within the limits of Earth’s systems will be hard. But it is possible.

    Some work is already being done. Global organisations such as the United Nations are making a concerted effort to accelerate changes to food systems across many countries.

    Research like ours can make people feel powerless. But individual change is always worthwhile. Reducing your intake of animal products benefits your health and the planet.

    Properly addressing these very real issues will take concerted, collective work. If we don’t succeed, we risk triggering ecological collapse – and threatening the foundation for human civilisation.

    The knowledge and tools are at hand. What’s needed now is ambition – and a sense of what’s at stake.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Farming within Earth’s limits is still possible – but it will take a Herculean effort – https://theconversation.com/farming-within-earths-limits-is-still-possible-but-it-will-take-a-herculean-effort-259901

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: Gum disease, decay, missing teeth: why people with mental illness have poorer oral health

    Source: The Conversation (Au and NZ) – By Bonnie Clough, Senior Lecturer, School of Applied Psychology, Griffith University

    mihailomilovanovic/Getty Images

    People with poor mental health face many challenges. One that’s perhaps lesser known is that they’re more likely than the overall population to have poor oral health.

    Research has shown people with serious mental illness are four times more likely than the general population to have gum disease. They’re nearly three times more likely to have lost all their teeth due to problems such as gum disease and tooth decay.

    Serious mental illnesses include major depressive disorder, bipolar disorder and psychotic disorders such as schizophrenia. These conditions affect about 800,000 Australians.

    People living with schizophrenia have, on average, eight more teeth that are decayed, missing or filled than the general population.

    So why does this link exist? And what can we do to address the problem?

    Why is this a problem?

    Oral health problems are expensive to fix and can make it hard for people to eat, socialise, work or even just smile.

    What’s more, dental issues can land people in hospital. Our research shows dental conditions are the third most common reason for preventable hospital admissions among people with serious mental illness.

    Meanwhile, poor oral health is linked with long-term health conditions such as diabetes, heart disease, some cancers, and even cognitive problems. This is because the bacteria associated with gum disease can cause inflammation throughout the body, affecting other organ systems.

    Why are mental health and oral health linked?

    Poor mental and oral health share common risk factors. Social factors such as isolation, unemployment and housing insecurity can worsen both oral and mental health.

    For example, unemployment increases the risk of oral disease. This can be due to financial difficulties, reduced access to oral health care, or potential changes to diet and hygiene practices.

    At the same time, oral disease can increase barriers to finding employment, due to stigma, discrimination, dental pain and associated long-term health conditions.

    It’s clear the relationship between oral health and mental health goes both ways. Dental disease can reduce self-esteem and increase psychological distress. Meanwhile, symptoms of mental health conditions, such as low motivation, can make engaging in good oral health practices, including brushing, flossing, and visiting the dentist, more difficult.

    And like many people, those with serious mental illness can experience significant anxiety about going to the dentist. They may also have experienced trauma in the past, which can make visiting a dental clinic a frightening experience.

    Separately, poor oral health can be made worse by some medications for mental health conditions. Certain medications can interfere with saliva production, reducing the protective barrier that covers the teeth. Some may also increase sugar cravings, which heightens the risk of tooth decay.

    Some medications people take for mental health conditions can affect oral health.
    Gladskikh Tatiana/Shutterstock

    Our research

    In a recent study, we interviewed young people with mental illness. Our findings show the significant personal costs of dental disease among people with mental illness, and highlight the relationship between oral and mental health.

    Smiling is one of our best ways to communicate, but we found people with serious mental illness were sometimes embarrassed and ashamed to smile due to poor oral health.

    One participant told us:

    [poor oral health is] not only [about] the physical aspects of restricting how you eat, but it’s also about your mental health in terms of your self-esteem, your self-confidence, and basic wellbeing, which sort of drives me to become more isolated.

    Another said:

    for me, it was that serious fear of – God my teeth are looking really crap, and in the past they’ve [dental practitioners] asked, “Hey, you’ve missed this spot; what’s happening?”. How do I explain to them, hey, I’ve had some really shitty stuff happening and I have a very serious episode of depression?

    What can we do?

    Another of our recent studies focused on improving oral health awareness and behaviours among young adults experiencing mental health difficulties. We found a brief online oral health education program improved participants’ oral health knowledge and attitudes.

    Improving oral health can result in improved mental wellbeing, self-esteem and quality of life. But achieving this isn’t always easy.

    Limited Medicare coverage for dental care means oral diseases are frequently treated late, particularly among people with mental illness. By this time, more invasive treatments, such as removal of teeth, are often required.

    It’s crucial the health system takes a holistic approach to caring for people experiencing serious mental illness. That means we have mental health staff who ask questions about oral health, and dental practitioners who are trained to manage the unique oral health needs of people with serious mental illness.

    It also means increasing government funding for oral health services – promotion, prevention and improved interdisciplinary care. This includes better collaboration between oral health, mental health, and peer and informal support sectors.

    Amanda Wheeler is an investigator on a MetroSouth Health 2025 grant exploring use of Queensland Emergency Departments for people with mental ill-health seeking acute care for oral health problems.

    Steve Kisely has received a grant on oral health from Metro South Research Foundation and one from the Medical Research Future Fund.

    Bonnie Clough, Caroline Victoria Robertson, and Santosh Tadakamadla do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Gum disease, decay, missing teeth: why people with mental illness have poorer oral health – https://theconversation.com/gum-disease-decay-missing-teeth-why-people-with-mental-illness-have-poorer-oral-health-258403

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: The National Anti-Corruption Commission turns 2 – has it restored integrity to federal government?

    Source: The Conversation (Au and NZ) – By A J Brown, Professor of Public Policy & Law, Centre for Governance & Public Policy, Griffith University

    The National Anti-Corruption Commission (NACC) opened its doors two years ago this week amid much fanfare and high expectations.

    Since then the body has attracted considerable criticism, overshadowing a solid, if slow, start to a whole new anti-corruption system across federal government.

    Established with strong powers after a history of much weaker proposals, what has it achieved in its first two years?

    Early hurdles

    On its first day, the decision to livestream the opening ceremony showed the Commission was alive to public expectations.

    However, the Commission’s reputation faced major early challenges: fears its transparency had been “nobbled”, and its damaging initial decision not to investigate officials referred by the Robodebt Royal Commission.

    The first challenge flowed from the politics that birthed the Commission.

    In 2022, despite otherwise state-of-the-art powers, the Albanese government made a late decision to insert an “exceptional circumstances” test to its ability to hold public hearings in corruption investigations.

    The shift created a bad impression. Many voices, including cross-bench parliamentarians, were left with good reason to question the very institution they helped create.

    The problem will haunt the NACC until the unnecessary threshold is removed.

    Public recognition

    In reality, the NACC still has hefty public hearing powers, but they are yet to be used.

    When the need arises for royal commission-scale transparency, it will deliver an important side benefit the NACC still badly needs: public visibility.

    The challenge is confirmed by as-yet-unpublished research on public trust by Griffith University. Of respondents surveyed in March this year, only 12% said they knew at least a fair amount about the NACC, while a third had never heard of it at all, or didn’t know.

    This contrasts with the NSW Independent Commission Against Corruption, now 37 years old and the country’s heaviest user of public hearings. Over a quarter (26%) of NSW respondents said they knew at least a fair amount about the ICAC.

    Building visibility is a slow road, and low recognition does not mean the NACC is not doing its job. But with recognition a cornerstone of confidence, it’s a key tool the Commission clearly still needs to learn how to use.

    Workload

    In fact, the NACC’s heavy pipeline of work is finally starting to give it more to talk about.

    About 4,500 corruption complaints or referrals have been assessed since 1 July 2023, leading to more than 40 full investigations, including 31 currently underway.

    It will take time for this workload to pay off, in dealing with and preventing corruption, as well as reinforcing the public trust everyone needs. But even if slow, the first results confirm the importance of the investment.

    This week, the Commission published its fourth investigation report, revealing details of serious corrupt conduct by a Department of Home Affairs senior executive who abused her office by dishonestly advantaging her sister’s fiancé for a job.

    Small fry? Maybe to some. But the fact 15 of the current investigations relate to senior officials takes the fight against nepotism and cronyism right to where it needs to be.

    Before the NACC, there was little confidence in how this kind of soft corruption was being dealt with by federal agencies.

    Hard corruption

    In its first two years, the NACC has also monitored 40 internal investigations by agencies which previously would have gone unsupervised, if they happened at all.

    On harder corruption, some results tell an even stronger tale.

    Last year, the NACC finalised an investigation which saw a former Australian Taxation Office employee jailed for five years, for accepting A$150,000 in bribes to reduce the tax debts of a Sydney businessman – also since jailed.

    And in December, a former Western Sydney Airport manager pleaded guilty to soliciting a A$200,000 bribe in exchange for a A$5 million services contract at Badgerys Creek.

    Prior to the NACC, this was exactly the type of hard corruption many federal politicians and public servants claimed did not occur. No-one believed it, but now there’s a system for getting it under control.

    Politicians not immune

    The fact 13 of the NACC’s current investigations relate to former or current federal politicians or their staff is also reassuring. Of all the public officials in Australia, they have long been the most immune from integrity oversight.

    Known referrals include former Liberal minister Stuart Robert in relation to alleged improper financial dealings with Canberra lobbying firm Synergy 360.

    A separate review found $374 million in contracts linked to Robert and the firm were poor value for money or plagued with perceived conflicts of interest.

    Even if Robert’s denials are correct, the NACC has good scope to help ensure no such dealings are possible in the future.

    The NACC’s strategic priorities highlight “senior public official decision-making” as an area where “even the perception of corruption can significantly harm trust in government”. This is especially important given the lack of regulation covering contractor, consultant and departmental relationships.

    Robodebt setback

    Tackling such fundamental issues, and not just driving a hamster wheel of criminal investigations, is the big challenge. It is underscored by the worst hurdle confronted by the NACC: its initial refusal to investigate Robodebt.

    The NACC’s independent inspector, Gail Furness, found that decision was contaminated by a badly managed conflict of interest, which caused the Commission reputational damage.

    But the poor handling also provided the circuit breaker needed for an independent reconsideration.

    Since February, the NACC has been investigating whether six individuals referred by the Robodebt Royal Commission engaged in corrupt conduct.

    It is a chance for the Commission to show it’s more than a compliance-focused enforcement agency, and is ready to play a positive part in ensuring accountability and justice for victims when officials abuse their power.

    The larger mission

    Accepting this larger mission is a challenge for all anti-corruption commissions, but the NACC’s ability to do so is aided by some special powers.

    Its broad definition of “corrupt conduct” means it can tackle any kind of serious integrity failure, including breaches of trust or abuses of power, which don’t involve the types of private gain often associated with corruption in the past.

    A second key tool – also the likely solution to its visibility problem – is the Commission’s unique power to tackle larger issues through public inquiries.

    Also yet to be used, this power extends to any “corruption risks and vulnerabilities” or “measures to prevent corruption” the Commission sees fit. Unlike individual investigation hearings, it does not require “exceptional circumstances”.

    The last two years have seen the NACC well and truly blooded in its role as the cornerstone of the federal integrity transformation we needed to have.

    Now the question is more about the Commission’s choices of direction, including how it nurtures its relationship with the public, than whether it has capacity to get the job done.

    A J Brown AM is Chair of Transparency International Australia. He has received funding from the Australian Research Council and all Australian governments for research on public interest whistleblowing, integrity and anti-corruption reform through partners including Australia’s federal and state Ombudsmen and other regulatory agencies, parliaments, state anti-corruption agencies, and private sector industry bodies. He currently leads an ARC Discovery Project on mapping and harnessing public trust and distrust, in partnership with Sydney, La Trobe and Bond Universities. He is a former senior investigator for the Commonwealth Ombudsman, was a member of the Commonwealth Ministerial Expert Panel on Whistleblowing and is a member of the Queensland Public Sector Governance Council.

    ref. The National Anti-Corruption Commission turns 2 – has it restored integrity to federal government? – https://theconversation.com/the-national-anti-corruption-commission-turns-2-has-it-restored-integrity-to-federal-government-257889

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: ‘Shit in, shit out’: AI is coming for agriculture, but farmers aren’t convinced

    Source: The Conversation (Au and NZ) – By Tom Lee, Senior Lecturer, School of Design, University of Technology Sydney

    David Gray / AFP / Getty Images

    Australian farms are at the forefront of a wave of technological change coming to agriculture. Over the past decade, more than US$200 billion (A$305 billion) has been invested globally into the likes of pollination robots, smart soil sensors and artificial intelligence (AI) systems to help make decisions.

    What do the people working the land make of it all? We interviewed dozens of Australian farmers about AI and digital technology, and found they had a sophisticated understanding of their own needs and how technology might help – as well as a wariness of tech companies’ utopian promises.

    The future of farming

    The supposed revolution coming to agriculture goes by several names: “precision agriculture”, “smart farming”, and “agriculture 4.0” are some of the more common ones.

    These names all gesture towards a future in which the relationship between humans, computing and nature has been significantly reconfigured. Perhaps remote sensing technology will monitor ever more of a farm system, autonomous vehicles will patrol it, and AI will predict crop growth or cattle weight gain.

    But there’s another story to tell about the way technological change happens. It involves people and communities creating their own future, their own sense of important change from the past.

    AI, country style

    Our research team conducted more than 35 interviews with farmers, specifically livestock producers, from across Australia.

    The dominant themes of their responses were captured in two pithy quotes: “shit in, shit out” and “more automation, less features”.

    “Shit in, shit out” is an earthier version of the “garbage in, garbage out” adage in computer science. If the data going into a model is unreliable or overly abstract, then the outputs will be shaped by those errors.

    This captured a real concern for many farmers. They didn’t feel they could trust new technologies if they didn’t understand what knowledge and information they had been built with.

    A different kind of automation

    On the other hand, “more automation, less features” is what farmers want: technologies that may not have a lot of bells and whistles, but can reliably take a task off their hands.

    Australian farmers have a ready appetite for labour-saving technologies. When human bodies are scarce, as they often are in rural Australia, machines are created to fill the void.

    Windmills, wire fences, and even the iconic Australian sheepdog have been a crucial part of the technological narrative of settler colonial farming. These things are not “autonomous” in the same way as computer-powered vehicles and drones, but they offer similar advantages to farmers.

    What these classic farm technologies have in common is a simplicity that derives from a clarity of purpose. They are the opposite of the “everything apps” that fuel the dreams of many Silicon Valley entrepreneurs.

    “More automation, less features” is in this sense a farmer envisaging a digital product that fits with their image of a useful technology: transparent in its operations, and a reliable replacement for or an addition to human labour.

    The lesson of the Suzuki Sierra Stockman

    When we spoke with one farmer about the favoured technologies of her lifetime, she mentioned the Suzuki Sierra Stockman. These small, no-frills, four-wheel-drive vehicles became something of an icon on Australian sheep and cattle farms through the 1970s, ’80s and ’90s.

    By the 1990s, the Suzuki Sierra Stockman had an iconic status among Australian farmers.
    Turbo_J / Flickr

    Reflecting on her memories of first using the vehicle, the farmer said:

    Once I learnt that I could actually draft cattle out with the Suzuki, that changed everything. You could do exactly what you did on a horse with a vehicle.

    It seems unlikely that Suzuki’s engineers in Japan envisaged their little jeep chasing cattle in the paddocks of the Central West of NSW. The Suzuki was in a sense remade by farmers who found innovative uses for it.

    Future technology must be simple, adaptable and reliable

    The combustion engine was a key technological change on farms in the 20th century. Computers may play a similar role in the 21st.

    We are perhaps yet to see a digital product as iconic as wire fences, windmills, sheepdogs and the Suzuki Stockman. Computers are still largely technologies of the office, not the paddock.

    However, this is changing as computers get smaller and are wired into water tanks, soil monitors and in-paddock scales. More data input from these sensors means AI systems have more scope to help farmers make decisions.

    AI may well become a much-loved tool for farmers. But that journey to iconic status will depend as much on how farmers adapt the technology as on how the developers build it. And we can guess at what it will look like: simple, adaptable and reliable.

    This article is based on research conducted by the Foragecaster project, led by AgriWebb and supported by funding from Food Agility CRC Ltd, funded under the Commonwealth Government CRC Program. The CRC Program supports industry-led collaborations between industry, researchers and the community. This project was also supported by funding from Meat and Livestock Australia (MLA).

    ref. ‘Shit in, shit out’: AI is coming for agriculture, but farmers aren’t convinced – https://theconversation.com/shit-in-shit-out-ai-is-coming-for-agriculture-but-farmers-arent-convinced-259997

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: ‘Shit in, shit out’: AI is coming for agriculture, but farmers aren’t convinced

    Source: The Conversation (Au and NZ) – By Tom Lee, Senior Lecturer, School of Design, University of Technology Sydney

    David Gray / AFP / Getty Images

    Australian farms are at the forefront of a wave of technological change coming to agriculture. Over the past decade, more than US$200 billion (A$305 billion) has been invested globally into the likes of pollination robots, smart soil sensors and artificial intelligence (AI) systems to help make decisions.

    What do the people working the land make of it all? We interviewed dozens of Australian farmers about AI and digital technology, and found they had a sophisticated understanding of their own needs and how technology might help – as well as a wariness of tech companies’ utopian promises.

    The future of farming

    The supposed revolution coming to agriculture goes by several names: “precision agriculture”, “smart farming”, and “agriculture 4.0” are some of the more common ones.

    These names all gesture towards a future in which the relationship between humans, computing and nature have been significantly reconfigured. Perhaps remote sensing technology will monitor ever more of a farm system, autonomous vehicles will patrol it, and AI will predict crop growth or cattle weight gain.

    But there’s another story to tell about the way technological change happens. It involves people and communities creating their own future, their own sense of important change from the past.

    AI, country style

    Our research team conducted more than 35 interviews with farmers, specifically livestock producers, from across Australia.

    The dominant themes of their responses were captured in two pithy quotes: “shit in, shit out” and “more automation, less features”.

    “Shit in, shit out” is an earthier version of the “garbage in, garbage out” adage in computer science. If the data going into a model is unreliable or overly abstract, then the outputs will be shaped by those errors.

    This captured a real concern for many farmers. They didn’t feel they could trust new technologies if they didn’t understand what knowledge and information they had been built with.

    A different kind of automation

    On the other hand, “more automation, less features” is what farmers want: technologies that may not have a lot of bells and whistles, but can reliably take a task off their hands.

    Australian farmers have a ready appetite for labour-saving technologies. When human bodies are scarce, as they often are in rural Australia, machines are created to fill the void.

    Windmills, wire fences, and even the iconic Australian sheepdog have been a crucial part of the technological narrative of settler colonial farming. These things are not “autonomous” in the same way as computer-powered vehicles and drones, but they offer similar advantages to farmers.

    What these classic farm technologies have in common is a simplicity that derives from a clarity of purpose. They are the opposite of the “everything apps” that fuel the dreams of many Silicon Valley entrepreneurs.

    “More automation, less features” is in this sense a farmer envisaging a digital product that fits with their image of a useful technology: transparent in its operations, and a reliable replacement for or an addition to human labour.

    The lesson of the Suzuki Sierra Stockman

    Asked about the favoured technologies of her lifetime, one farmer mentioned the Suzuki Sierra Stockman. These small, no-frills, four-wheel-drive vehicles became something of an icon on Australian sheep and cattle farms through the 1970s, ’80s and ’90s.

    By the 1990s, the Suzuki Sierra Stockman had an iconic status among Australian farmers.
    Turbo_J / Flickr

    Reflecting on her memories of first using the vehicle, the farmer said:

    Once I learnt that I could actually draft cattle out with the Suzuki, that changed everything. You could do exactly what you did on a horse with a vehicle.

    It seems unlikely that Suzuki’s engineers in Japan envisaged their little jeep chasing cattle in the paddocks of the Central West of NSW. The Suzuki was in a sense remade by farmers who found innovative uses for it.

    Future technology must be simple, adaptable and reliable

    The combustion engine was a key technological change on farms in the 20th century. Computers may play a similar role in the 21st.

    We are perhaps yet to see a digital product as iconic as wire fences, windmills, sheepdogs and the Suzuki Stockman. Computers are still largely technologies of the office, not the paddock.

    However, this is changing as computers get smaller and are wired into water tanks, soil monitors and in-paddock scales. More data input from these sensors means AI systems have more scope to help farmers make decisions.

    AI may well become a much-loved tool for farmers. But that journey to iconic status will depend as much on how farmers adapt the technology as on how the developers build it. And we can guess at what it will look like: simple, adaptable and reliable.

    This article is based on research conducted by the Foragecaster project, led by AgriWebb and supported by funding from Food Agility CRC Ltd, funded under the Commonwealth Government CRC Program. The CRC Program supports industry-led collaborations between industry, researchers and the community. This project was also supported by funding from Meat and Livestock Australia (MLA).

    ref. ‘Shit in, shit out’: AI is coming for agriculture, but farmers aren’t convinced – https://theconversation.com/shit-in-shit-out-ai-is-coming-for-agriculture-but-farmers-arent-convinced-259997

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: Could electric brain stimulation lead to better maths skills?

    Source: The Conversation – UK – By Roi Cohen Kadosh, Professor of Cognitive Neuroscience, University of Surrey

    Triff/Shutterstock

    A painless, non-invasive brain stimulation technique can significantly improve how young adults learn maths, my colleagues and I found in a recent study. In a paper in PLOS Biology, we describe how this might be most helpful for those who are likely to struggle with mathematical learning because of how their brain areas involved in this skill communicate with each other.

    Maths is essential for many jobs, especially in science, technology, engineering and finance. However, a 2016 OECD report suggested that a large proportion of adults in developed countries (24% to 29%) have maths skills no better than a typical seven-year-old. This lack of numeracy can contribute to lower income, poor health, reduced political participation and even diminished trust in others.

    Education often widens rather than closes the gap between high and low achievers, a phenomenon known as the Matthew effect. Those who start with an advantage, such as being able to read more words when starting school, tend to pull further ahead. Stronger educational achievement has also been associated with socioeconomic status, higher motivation and greater engagement with material learned during a class.

    Biological factors, such as genes, brain connectivity, and chemical signalling, have been shown in some studies to play a stronger role in learning outcomes than environmental ones. This has been well-documented in different areas, including maths, where differences in biology may explain educational achievements.



    To explore this question, we recruited 72 young adults (18–30 years old) and taught them new maths calculation techniques over five days. Some received a placebo treatment. Others received transcranial random noise stimulation (tRNS), which delivers gentle electrical currents to the brain. It is painless and often imperceptible, unless you focus hard to try and sense it.

    It is possible tRNS may cause long-term side effects, but in previous studies my team assessed participants for cognitive side effects and found no evidence of any.

    Could tRNS help people improve their maths skills?
    Prostock-studio/Shutterstock

    Participants who received tRNS were randomly assigned to receive it over one of two brain areas. Some received it over the dorsolateral prefrontal cortex, a region critical for memory, attention and the acquisition of new cognitive skills. Others had tRNS over the posterior parietal cortex, which processes maths information, mainly once learning has been accomplished.

    Before and after the training, we also scanned their brains and measured levels of key neurochemicals such as gamma-aminobutyric acid (GABA), which we showed previously, in a 2021 study, to play a role in brain plasticity and learning, including maths.

    Some participants started with weaker connections between the prefrontal and parietal brain regions, a biological profile that is associated with poorer learning. The study results showed these participants made significant gains in learning when they received tRNS over the prefrontal cortex.

    Stimulation helped them catch up with peers who had stronger natural connectivity. This finding shows the critical role of the prefrontal cortex in learning and could help reduce educational inequalities that are grounded in neurobiology.

    How does this work? One explanation lies in a principle called stochastic resonance. This is when a weak signal becomes clearer when a small amount of random noise is added.

    In the brain, tRNS may enhance learning by gently boosting the activity of underperforming neurons, nudging them closer to the “firing threshold”: the point at which they become active and send signals. This effect is expected to be strongest in people whose brain activity is suboptimal for a task like maths learning.
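The stochastic resonance principle described above can be illustrated with a toy simulation (a sketch only, not a model of tRNS or of real neurons: the threshold, signal strength and noise level below are arbitrary illustrative values). A weak input that never crosses a neuron’s firing threshold on its own does cross it on a sizeable fraction of trials once a little random noise is added:

```python
import random

random.seed(0)

THRESHOLD = 1.0   # "firing threshold" the input must exceed to trigger a signal
SIGNAL = 0.8      # weak, sub-threshold input: never fires on its own
TRIALS = 10_000

def detection_rate(noise_sd: float) -> float:
    """Fraction of trials in which signal + Gaussian noise crosses the threshold."""
    fired = sum(
        1 for _ in range(TRIALS)
        if SIGNAL + random.gauss(0, noise_sd) >= THRESHOLD
    )
    return fired / TRIALS

for sd in (0.0, 0.3):
    # with no noise the rate is 0.0; with sd = 0.3 it is roughly 0.25
    print(f"noise sd={sd}: detection rate {detection_rate(sd):.2f}")
```

With no noise the sub-threshold signal is never detected; with a modest amount of noise it crosses the threshold on roughly a quarter of trials. Much larger noise would eventually swamp the signal, which is why the benefit depends on the added noise being small.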

    It is important to note what this technique does not do. It does not make the best learners even better. That is what makes this approach promising for bridging gaps, not widening them. This form of brain stimulation helps level the playing field.

    Our study focused on healthy, high-performing university students. But in similar studies on children with maths learning disabilities (2017) and with attention-deficit/hyperactivity disorder (2023), my colleagues and I found tRNS seemed to improve their learning and performance in cognitive training.

    I argue our findings could open a new direction in education. The biology of the learner matters, and with advances in knowledge and technology, we can develop tools that act on the brain directly, not just work around it. This could give more people the chance to get the best benefit from education.

    In time, perhaps personalised, brain-based interventions like tRNS could support learners who are being left behind not because of poor teaching or personal circumstances, but because of natural differences in how their brains work.

    Of course, very often education systems aren’t operating to their full potential because of inadequate resources, social disadvantage or systemic barriers. And so any brain-based tools must go hand-in-hand with efforts to tackle these obstacles.

    Roi Cohen Kadosh serves on the scientific advisory boards of Neuroelectrics Inc., and Innosphere Ltd. He is the founder and shareholder of Cognite Neurotechnology Ltd. He received funding from the Wellcome Trust, UKRI, the British Academy, IARPA, DASA, Joy Ventures, the James S McDonnell Foundation, and the European Union. He is affiliated with the University of Surrey.

    ref. Could electric brain stimulation lead to better maths skills? – https://theconversation.com/could-electric-brain-stimulation-lead-to-better-maths-skills-260134

    MIL OSI – Global Reports

  • MIL-Evening Report: Memo to Shane Jones: what if NZ needs more regional government, not less?

    Source: The Conversation (Au and NZ) – By Jeffrey McNeill, Honorary Research Associate, School of People, Environment and Planning, Te Kunenga ki Pūrehuroa – Massey University

    If the headlines are anything to go by, New Zealand’s regional councils are on life support.

    Regional Development Minister Shane Jones recently wondered whether “there’s going to be a compelling case for regional government to continue to exist”. And Prime Minister Christopher Luxon is open to exploring the possibility of scrapping the councils.

    This has all been driven by the realisation that the government’s proposed resource management reforms would essentially gut local authorities of their basic planning and environmental management functions. Various mayors and other interested parties have agreed. While some are circumspect, there’s broad agreement a review is needed.

    At present, each territorial council writes its own city or district plan. Regional councils write a series of thematic plans addressing different environmental issues. All the plans contain the councils’ regulatory “rules” that determine what people can or cannot do.

    Under the coming reforms, the territorial and regional councils of each region would have only a single chapter each within a broader regional spatial plan. Their function would, for the most part, involve tweaking all-embracing national policies and standards.

    Further, all compliance and monitoring – now a predominantly regional council activity – is to be taken over by a national agency (possibly the Environment Protection Authority). This won’t leave much for regional councils to do, compared with their broad remits now.

    How regional government evolved

    In truth, regional councils have been targets since they were created as part of the Labour government’s 1989 local government reform. Carried out in lockstep with the drafting of the Resource Management Act (passed in 1991), this established two levels of local government.

    City and district councils were to be responsible for infrastructure and the built environment. The new regional councils were more opaque, essentially multi-function, special-purpose authorities, recognising that some government actions are bigger than local but smaller than national.

    In the event, they became what in many countries would be thought of as environmental protection agencies. Their boundaries were drawn to capture river catchments, reflecting their catchment board antecedents, which looked after soil erosion and flood management.

    Other functions were drawn from other government departments. Air-quality management came from the old Department of Health. Coastal management was partly inherited from the Ministry of Transport, shared with the Department of Conservation.

    Public transport and civil defence were tacked on, given their cross-territorial scale and lack of anywhere else to put them.

    Parochialism and politics

    All their various functions have meant regional councils determine who gets to use the region’s resources – and who misses out. And political decisions are a surefire way to make enemies.

    For example, the Resource Management Act applied the presumption that no one could discharge any contaminant into water unless expressly allowed by a rule or a resource consent. Regional councils therefore required their territorial councils to upgrade their rubbish dumps and sewage treatment systems.

    Similarly, farmers could no longer simply take water to irrigate or empty cowshed effluent straight into the nearest stream as of right. The necessary infrastructure upgrades were expensive.

    Ironically, these attempts to minimise the immediate impacts of such demands on water users saw urban voters and environmental groups criticise the councils and the government for being too soft on “dirty dairying” and other polluters.

    Parochialism also plays a part, as does the feeling in some rural communities that they’re forgotten by their regions’ cities, where most voters live. The perceived poor handling of events such as last year’s Hawke’s Bay flooding and the 2018 Wellington bus network failure have not helped.

    The government even replaced Environment Canterbury’s elected council with appointed commissioners in 2010 over performance concerns, particularly in water management.

    Yet the regional council model has largely survived intact – with two exceptions. The Nelson-Marlborough Regional Council was replaced by the Nelson City, Marlborough District and Tasman District unitary councils in 1992, as a token sacrifice to the conservative wing of the National government, which vehemently opposed the new regions.

    The genesis of the Auckland Council super-region can be traced to the 1999–2008 Labour government’s frustration at being unable to get a unified position from the city’s seven councils on where to build a stadium for the 2011 Rugby World Cup. Not everyone is happy with the resulting metro-regional solution.

    Who will be accountable?

    If regional government is indeed put to rest, it will be another phase in this piecemeal evolutionary process. But the new model will still require central government to have a significant regional presence – and commensurate central government funding.

    But central government has had a regional-scale presence for a long time. Police, the fire service, economic development and social welfare agencies all have their own regional boundaries. Public health and tertiary training and education are also essentially regional.

    All these functions are inherently political. And in many other countries, they are delivered by regional governments. Maybe, once the implications are looked at more closely, leaving regional councils intact will seem the easier and cheaper option. Indeed, there is a counter-argument that we need more regional government, not less.

    The current impulse for local government change – including district council amalgamation – continues an ad hoc process going back more than 30 years. As I have argued previously, the form, function and funding of local government need to be considered together.

    The regional level of administration will not go away. But the overriding question remains: who should speak for and be accountable to their communities for what are ultimately still political decisions, whoever makes them?

    Jeffrey McNeill does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Memo to Shane Jones: what if NZ needs more regional government, not less? – https://theconversation.com/memo-to-shane-jones-what-if-nz-needs-more-regional-government-not-less-259778

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: Why frequent nightmares may shorten your life by years

    Source: The Conversation – UK – By Timothy Hearn, Senior Lecturer in Bioinformatics, Anglia Ruskin University

    Lightfield Studios/Shutterstock.com

    Waking up from a nightmare can leave your heart pounding, but the effects may reach far beyond a restless night. Adults who suffered bad dreams every week were almost three times more likely to die before age 75 than people who rarely had them.

    This alarming conclusion – which is yet to be peer reviewed – comes from researchers who combined data from four large long-term studies in the US, following more than 4,000 people between the ages of 26 and 74. At the beginning, participants reported how often nightmares disrupted their sleep. Over the next 18 years, the researchers kept track of how many participants died prematurely – 227 in total.

    Even after considering common risk factors like age, sex, mental health, smoking and weight, people who had nightmares every week were still found to be nearly three times more likely to die prematurely – about the same risk as heavy smoking.

    The team also examined “epigenetic clocks” – chemical marks on DNA that act as biological mileage counters. People haunted by frequent nightmares were biologically older than their birth certificates suggested, across all three clocks used (DunedinPACE, GrimAge and PhenoAge).

    The science behind the silent scream

    Faster ageing accounted for about 39% of the link between nightmares and early death, implying that whatever is driving the bad dreams is simultaneously driving the body’s cells towards the finish line.

    How might a scream you never utter leave a mark on your genome? Nightmares happen during so-called rapid-eye-movement sleep when the brain is highly active but muscles are paralysed. The sudden surge of adrenaline, cortisol and other fight-or-flight chemicals can be as strong as anything experienced while awake. If that alarm bell rings night after night, the stress response may stay partially switched on throughout the day.

    Continuous stress takes its toll on the body. It triggers inflammation, raises blood pressure and speeds up the ageing process by wearing down the protective tips of our chromosomes.

    On top of that, being jolted awake by nightmares disrupts deep sleep, the crucial time when the body repairs itself and clears out waste at the cellular level. Together, these two effects – constant stress and poor sleep – may be the main reasons the body seems to age faster.

    Your brain clears out waste when you sleep.
    Teeradej/Shutterstock.com

    The idea that disturbing dreams foreshadow poor health is not entirely new. Earlier studies have shown that adults tormented by weekly nightmares are more likely to develop dementia and Parkinson’s disease, years before any daytime symptoms appear.

    Growing evidence suggests that the brain areas involved in dreaming are also those affected by brain diseases, so frequent nightmares might be an early warning sign of neurological problems.

    Nightmares are also surprisingly common. Roughly 5% of adults report at least one each week and another 12.5% experience them monthly.

    Because they are both frequent and treatable, the new findings elevate bad dreams from a spooky nuisance to a potential public health target. Cognitive behavioural therapy for insomnia, imagery-rehearsal therapy – where sufferers rewrite the ending of a recurrent nightmare while awake – and simple steps such as keeping bedrooms cool, dark and screen-free have all been shown to curb nightmare frequency.

    Before jumping to conclusions, there are a few important things to keep in mind. The study used people’s own reports of their dreams, which can make it hard to tell the difference between a typical bad dream and a true nightmare. Also, most of the people in the study were white Americans, so the findings might not apply to everyone.

    And biological age was measured only once, so we cannot yet say whether treating nightmares slows the clock. Crucially, the work was presented as a conference abstract and has not yet navigated the gauntlet of peer review.

    Despite these limitations, the study has important strengths that make it worth taking seriously. The researchers used multiple groups of participants, followed them for many years and relied on official death records rather than self-reported data. This means we can’t simply dismiss the findings as a statistical fluke.

    If other research teams can replicate these results, doctors might start asking patients about their nightmares during routine check-ups – alongside taking blood pressure and checking cholesterol levels.

    Therapies that tame frightening dreams are inexpensive, non-invasive and already available. Scaling them could offer a rare chance to add years to life while improving the quality of the hours we spend asleep.

    Timothy Hearn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why frequent nightmares may shorten your life by years – https://theconversation.com/why-frequent-nightmares-may-shorten-your-life-by-years-260008

    MIL OSI – Global Reports

  • MIL-OSI Global: Where does the UK most need more public EV chargers?

    Source: The Conversation – UK – By Labib Azzouz, Research Associate in Transport and Energy Innovation, University of Oxford

    Electric vehicle chargers at a motorway service station in Grantham, England. Angus Reid/Shutterstock

    The automotive and EV industry has repeatedly insisted that the UK needs more electric vehicle (EV) chargers to help motorists make the switch from conventional fossil-fuel burning cars.

    The Labour government has announced £400 million to install EV chargers, mainly on streets in poorer residential neighbourhoods, in place of the Conservatives’ £950 million rapid charging fund, which was directed at installing chargers in motorway service stations.

    Does it matter where these chargers are – and who pays to build them?

    The short answer is yes, it does matter. Our research conducted at motorway and local EV charging stations across England – including those located in residential areas, high streets and community centres – indicates that these two types of infrastructure serve distinct groups of users and fulfil different purposes.

    Suggesting that one can substitute for the other risks sending mixed signals to both the industry and the driving public.

    We found that motorway charging stations tend to cater to wealthier men, who are more likely to own premium EVs with long-range batteries and better performance. Many of these drivers have access to home chargers, so their use of public chargers is only for occasional, long-distance travel for business, leisure, or holidays – trips that require chargers along motorways.

    Convenience and charging speed are often more important than the price of public charging, particularly when the travel costs of these drivers are covered by their employers.

    Local public charging stations, on the other hand, serve more diverse groups. These include drivers from lower-income households who are more likely to own older and smaller EVs with shorter ranges. Access to home charging is often limited, especially for people living in flats or urban areas without driveways, garages or off-street parking.

    Not everyone can plug in at home.
    Andersen EV/Shutterstock

    Local chargers are also vital for taxi and delivery drivers who depend on their vehicles for work and make frequent short trips throughout the day. There are many professional drivers without access to workplace charging stations who need alternative local provision – something the Conservative government recognised in its 2022 EV charging strategy.

    Ultimately, the transition to EVs should take a balanced approach that carefully considers social equity, economic viability and environmental impact.

    Different locations serve different drivers

    Motorway charging stations are commercially attractive to private investors, such as energy companies, specialist charging providers and car manufacturers, despite their higher upfront costs and complex requirements.

    This is because service stations can command premium prices: alternatives are limited, demand for rapid charging is high, especially among long-distance travellers, and EV drivers are willing to pay for speed and convenience, unlike in more price-sensitive neighbourhood settings.

    Unsurprisingly, the government found that the rapid deployment of motorway chargers in recent years has been largely driven by the private sector. Our research highlighted that these revenues could be enhanced by a broader range of retail, dining and relaxation amenities, turning the time waiting for a car to charge into a more productive and pleasurable experience.

    Residential charging stations may not offer high profits per charge, but they typically require lower capital investment and benefit from consistent and predictable use. They are also suited to measures for reducing strain on the grid and balancing energy supply and demand.

    These measures include tariffs that make it cheaper to charge EVs during off-peak hours, or technology that allows cars to feed electricity stored in batteries back into the grid. These features make them appropriate for public funding, where return on investment is measured not just in profit but in value for the public.

    Considering that local EV charging serves those who do not have access to home charging and who drive for a living, the case for public funding is even stronger. These sorts of chargers make switching to an EV easier for different groups.

    For example, safe and carefully placed public chargers could help more women switch to EVs – although our research suggests that, while “careful placement” might refer to residential areas, it doesn’t necessarily mean on streets. Well-lit car parks and community destinations are sometimes considered safer options.

    Charging points outside a community centre in the Outer Hebrides, Scotland.
    AlanMorris/Shutterstock

    By helping EV drivers make frequent short trips, local chargers can also significantly reduce urban air pollution, emissions and noise, contributing to more liveable, healthier cities.

    That said, motorway charging stations and those near key transport corridors still play a crucial role in a comprehensive national network, and public funding may be required in more peripheral and rural areas of the UK where installations lag and commercial interest is limited.

    While long-distance trips are less frequent than short ones, they account for a disproportionately large share of energy use and emissions. Switching such trips to electric will be essential to reaching net zero goals.

    It seems reasonable to prioritise public investment in local EV charging infrastructure to support a fairer EV transition, but this should not be limited to on-street chargers. Investment is needed in residential and non-residential areas, public car parks, community centres and workplaces.

    Different types of EV charging are not interchangeable – all are needed to support the switch.


    Labib Azzouz has received funding from the UK Research and Innovation via the UK Energy Research Centre and Innovate UK as part of the Energy Superhub Oxford (ESO) project.

    Hannah Budnitz receives government funding from UK Research and Innovation grants via the Economic and Social Research Council and the Engineering and Physical Sciences Research Council. She has also previously received funding from Innovate UK and the Department for Transport.

    ref. Where does the UK most need more public EV chargers? – https://theconversation.com/where-does-the-uk-most-need-more-public-ev-chargers-259623

    MIL OSI – Global Reports

  • MIL-OSI Global: The Bear season 4: this meaty restaurant drama is still an enticing bingeable prospect

    Source: The Conversation – UK – By Jane Steventon, Course Leader, BA (Hons) Screenwriting; Deputy Course Leader & Senior Lecturer, BA (Hons) Film Production, University of Portsmouth

    Take a soupçon of identity crisis, a pinch of perfectionism, a scoop of burnout, mix thoroughly with a large measure of fraternal grief, sear over a hot grill and voilà! You have The Bear, a perfectly blended drama about a chef on the edge, driven by relentless ambition and exacting standards as he turns his family’s humble sandwich shop into a fine-dining restaurant.

    This intoxicating family drama was eaten up by critics and audiences alike in 2022, its first season garnering a rare perfect 100% score on Rotten Tomatoes, the subsequent two reaching scores of 99% and 89% respectively. It’s certainly a hard act to follow for season four.

    The first ten minutes of The Bear’s pilot episode thrillingly defined what was to come in high-octane style and scene-setting detail. The first season delivered a clever mix of authentic dialogue and setting, relatable family dysfunction and dynamic production style.

    Showstopping scenes of stressful kitchen heat were served up alongside a delectable range of new and established talent in the form of Jeremy Allen White (Carmy), Ebon Moss-Bachrach (Richie), Ayo Edebiri (Sydney) and Oliver Platt (Cicero/Uncle Jimmy).


    In charge is showrunner Christopher Storer, who came up with the concept after being inspired by his friend’s father Chris Zucchero, the owner of Chicago sandwich joint Mr Beef.

    With his professional chef sister also serving as a consultant, Storer succeeded in creating a deliciously authentic and intensely real drama. Buoyed along the way by 21 Emmys and five Golden Globes, Storer also watched his cast ascend, the tortured-soul performance of White garnering particular praise.

    Testing the parameters of a long-running show, Storer focused in on the entire cast of characters and their backstories, a successful tactic used by shows such as Orange is the New Black to keep the drama – largely confined to a kitchen set – fresh.

    Pulling in Hollywood die-hards Oliver Platt and Jamie Lee Curtis for familial tough-love roles further enriched the mix, often using a non-chronological timeframe to go back to moments of family turbulence and tension. This made for three-dimensional characters and enabled evolution around difficult themes such as the aftermath of suicide and generational trauma.

    The Bear has come a long way in three seasons, starting with a spit-and-sawdust establishment serving up lunchtime beef sandwiches for its working customers.

    Carmy’s experience and longing for the high-end restaurant of his dreams hurtled forward in season two, as he sent his core crew off in different directions to hone their skills and help form his vision. With the restaurant striving for success but plagued with challenges, exhausting familial tensions were embedded in every episode of season three.

    Several themes play out in The Bear: love, family, loyalty, community and purpose. The relationship between Carmy and cousin Richie (not a real cousin, but a term of endearment) is key to linking past and future. Richie provides some of the highlights of comedy and pathos as he spits truth bombs, most frequently at talented sous-chef Syd.

    It is Syd who follows Carmy’s aspirations for gastronomic perfection but can’t abide the lack of order or the intense highs and lows that inevitably go hand in hand with his talent. And this is one central question to consider for the latest series: just how long will the audience remain loyal to Carmy and his endless quest for artistry in a high-failure rate industry?

    It’s all in the sauce

    Storer begins season four with a ghost. Carmy and his dead brother Mikey (Jon Bernthal) banter in a seven-minute scene, with Carmy ultimately confiding the dream of a restaurant as Mikey watches him make tomato sauce (“too much garlic”). The tomatoes resonate: Mikey left behind money hidden in tomato cans that ended up saving Carmy’s sanity and his dream of a proper restaurant.

    Just as oranges represent death to Francis Ford Coppola, Storer uses tomatoes to underscore themes; here they symbolise familial loyalty and history, a solid base to a meal, a core ingredient. Mikey was one of the core ingredients in Carmy’s life, and now he’s gone.

    Carmy awakens to a rerun of Groundhog Day on late-night TV and, fittingly, we too are back – same dish, now more seasoned and enriched with its core ingredients and ready to serve up a big bowlful of family, love, ambition, strife and grief.

    The episode furthers the theme of loyalty as the restaurant receives The Tribune’s review – the cliffhanger of the season three finale. Naturally, Storer doesn’t let up – the food critic highlights “dissonance” and Carmy is back in emotional chaos, with Syd urging him to lighten up and lose the misery.

    In truth, this series could do with more humour in the mix; the teasing and frivolous banter of season one has been somewhat lost in the seasons that followed.

    Storer ramps up the tension, setting several ticking clocks in place: chiefly, the deadline Uncle Jimmy has set for the business to turn a profit, literally installed as a countdown on a digital clock in the kitchen. Then Syd’s headhunter calls, offering her desired autonomy and an exit strategy from the chaos.

    And Carmy raises the stakes with an intention to gain a Michelin star. Thus a heroic journey is set in place for the whole cast, with future battles both internal and external laid out.

    There’s too much going on at this feast and the feeling of being stuffed full of story is tangible by the end of the first episode. Still, with a season lining up more emotional turbulence steered by White, more celebrity cameos (Brie Larson and Rob Reiner are lined up) and the excellent cinematography and performances that we have come to expect, Storer stirs his secret sauce.

    The Bear still offers an entertaining and enticing proposition, bingeable and mostly satisfying.

    Jane Steventon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The Bear season 4: this meaty restaurant drama is still an enticing bingeable prospect – https://theconversation.com/the-bear-season-4-this-meaty-restaurant-drama-is-still-an-enticing-bingeable-prospect-260143

    MIL OSI – Global Reports

  • MIL-OSI Global: Five ways to avoid illness like the Lionesses

    Source: The Conversation – UK – By Samantha Abbott, Doctoral Researcher, Department of Sport Science, Nottingham Trent University

    England’s Beth Mead cheering on podium after win v Germany in the Women European Championship Final 2022 photographyjp/Shutterstock

    Think back to the last time you had a cold or the flu. Now imagine stepping onto the pitch for a European Cup final, while battling through those symptoms. For elite athletes, illness can strike at the worst possible time – and it could hit women harder.

    Research suggests that female athletes are more susceptible to cold and flu-like illnesses than their male counterparts. For England women’s national football team, the Lionesses, this risk only increases before a major tournament like the Euros.

    Close contact, shared kit, disrupted sleep and travel all add up to a perfect storm for infection. But targeted nutritional strategies, alongside good sleep and hand hygiene, can offer a crucial line of defence.


    1. Fuel first: energy matters for immunity

    Before anything else, players need to eat enough. Energy supports both performance and immune function. In fact, female athletes who didn’t meet their energy needs in the run-up to the 2016 Olympics were four times more likely to report cold or flu symptoms.

    This is especially relevant in women’s football, where low energy and carbohydrate intakes have been documented among both professional and recreational players. Regular meals and snacks that include carbohydrate-rich foods like oats, bread and pasta, especially around training, are essential to meet energy demands and support immune health.

    2. Eat the rainbow

    Athletes are often encouraged to go beyond the public’s five-a-day fruit and veg target, aiming instead for eight to ten portions daily. Why? Because colourful plant foods are packed with vitamins, minerals, antioxidants and anti-inflammatory compounds: all vital for immunity.




    Read more:
    We’re told to ‘eat a rainbow’ of fruit and vegetables. Here’s what each colour does in our body


    Each colour offers unique benefits. For instance, red fruits and vegetables, such as tomatoes, contain lycopene, a powerful antioxidant. Orange produce like carrots get their colour from beta-carotene, which is converted by the body into vitamin A – a key vitamin for immune health.

    Eating a rainbow of colours means getting a wide range of nutrients.

    3. Vitamin C: powerful but timing matters

    Vitamin C has long been linked with reducing the risk and severity of cold and flu symptoms. One Cochrane review found that regular vitamin C intake halved the risk of illness in physically active people.

    However, more isn’t always better. Long-term use of high-dose vitamin C supplements could blunt training adaptations – the structural and functional changes the body undergoes in response to repeated exercise – because of its anti-inflammatory effects. That’s why vitamin C is most effective when used strategically, such as during high-risk periods like travel or intense competition. Good food sources include oranges, kiwis, blackcurrants, red and yellow peppers, broccoli and even potatoes.

    4. Gut health supports immune health

    Around 70% of the immune system is located in the gut, making gut health a key player in illness prevention. This is where probiotics (live bacteria) and prebiotics (which feed those bacteria) come in.

    Probiotics, found in fermented foods like kefir and kimchi or in supplement form, have been shown to reduce the duration and severity of respiratory illnesses in athletes. Prebiotics have similarly shown promise. In one study, a 24-week prebiotic intervention in elite rugby players reduced the duration of cold and flu symptoms by over two days.




    Read more:
    Gut microbiome: meet Lactobacillus acidophilus – the gut health superhero


    In the build-up to the Euros, including probiotic-rich foods in their diet or taking a daily prebiotic and probiotic supplement may help players stay healthy and return to training faster if they do get ill.

    5. Zinc lozenges: first aid for a sore throat

    If cold-like symptoms do appear, zinc lozenges can offer fast-acting relief. Zinc has antiviral, antioxidant and anti-inflammatory properties. When zinc is delivered as a lozenge, it acts directly in the throat, where many infections begin. Taken within 24 hours of symptoms starting, zinc lozenges could shorten illness duration by a third.

    But caution is key. Long-term use of high-dose zinc supplements can actually suppress immune function. Zinc lozenges should only be used short-term at symptom onset, not as a daily supplement.

    Staying match-ready during major tournaments means more than just tactical drills and fitness. Nutrition is a powerful ally in illness prevention, especially for women’s teams like the Lionesses. From fuelling adequately to supporting gut health and knowing when to supplement, these nutritional strategies can make the difference between sitting on the bench and bringing a trophy home.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Five ways to avoid illness like the Lionesses – https://theconversation.com/five-ways-to-avoid-illness-like-the-lionesses-259302


  • MIL-OSI Global: Why is Islamophobia so hard to define?

    Source: The Conversation – UK – By Julian Hargreaves, Lecturer, Department of Sociology and Criminology, City St George’s, University of London

    The UK government wants a new definition of Islamophobia and has created a working group of politicians, academics and independent experts to provide one. It aims to settle long-running political debates over the term.

    The concept of Islamophobia describes anti-Muslim and anti-Islamic prejudices and their impact on Muslim communities. The term became familiar in the UK following publication of the Runnymede Trust report, Islamophobia: A Challenge for Us All, in 1997.

    The concept is now used to discuss negative public opinion towards Muslims and Islam, biased media reporting, verbal and physical assaults and online attacks. It is also used when discussing social and economic inequalities, discrimination within various institutional settings and unfair treatment from the police and security services.

    Previous definitions have been controversial, failing to unite politicians, academics and British Muslims, and leading to charged debates over free speech.

    Some academics have argued that the word “Islamophobia” – which suggests a phobia or fear of Islam – is an inaccurate label for a prejudice which often targets skin colour, ethnicity and culture.

    Many Muslim-led organisations accept that the term is imperfect and interchangeable with others such as “anti-Muslim hatred”. However, they maintain the term “Islamophobia” is needed to focus attention on a growing problem.

    Definitions and controversy

    The 1997 Runnymede Trust report defined Islamophobia as an “unfounded hostility towards Islam”, “the practical consequences of such hostility in unfair discrimination against Muslim individuals and communities” and “the exclusion of Muslims from mainstream political and social affairs”.

    The Runnymede Trust revised its definition in a follow-up report published in 2017. The report defines Islamophobia in two ways.

    The first is “anti-Muslim racism”. A longer, second version amends the United Nations’ 1965 definition of “racial discrimination”. These revised definitions are important because they re-framed Islamophobia as a product of racist thinking rather than religious prejudice.

    Other attempts to define Islamophobia include British academic Chris Allen’s 200-word definition. Allen defined it as an ideology like racism that spreads negative views of Muslims and Islam, influencing social attitudes and leading to discrimination and violence. US political scientist Erik Bleich defined it more succinctly as “indiscriminate negative attitudes or emotions directed at Islam or Muslims”.

    In 2018, the all-party parliamentary group on British Muslims published another definition linking Islamophobia to racism. According to the APPG, “Islamophobia is rooted in racism and is a type of racism that targets expressions of Muslimness or perceived Muslimness.” The APPG called for its definition to be legally binding.

    The APPG definition was adopted by various organisations including local authorities, UK universities and the Labour party while in opposition. But it was rejected by the then Conservative government and later by the current Labour government, which argued it was seeking “a more integrated and cohesive approach”.

    This lack of consensus over previous definitions led Angela Rayner, the deputy prime minister, to announce the working group in March 2025. The group’s aim is to provide a new definition of “anti-Muslim hatred and Islamophobia” which is “reflective of a wide range of perspectives and priorities for British Muslims”.

    Former Conservative MP and attorney general Dominic Grieve was appointed to chair the group, evidence of Labour’s ambition to build consensus.

    A march in London against Islamophobia, racism and anti-migrant views.
    Shutterstock

    Some are concerned that use of the term “Islamophobia”, and particularly the APPG definition, stifles legitimate criticism of Islam. Free speech campaigners have argued that it is “blasphemy via the back door”.

    The centre-right thinktank Policy Exchange published a report claiming that the term is used in bad faith to divert attention away from serious social problems within some Muslim communities – specifically, discussion of the grooming gangs scandal.

    These debates bear resemblance to those surrounding the term “antisemitism” and the adoption of a definition proposed by the International Holocaust Remembrance Alliance. The term is widely accepted, although critics have argued this specific definition stifles legitimate criticism of the Israeli state.

    A new approach

    A new definition of “Islamophobia” must balance the protection of Muslim communities and freedoms of religion, expression and assembly for all Muslims and non-Muslims in the UK. It must be clear enough for everyday use, specific enough for academic and policy research, and capable of generating support across the UK’s diverse Muslim population.

    A proposed definition by an emerging thought leader on British Islam addresses these challenges. Mamnun Khan is a writer whose work explores the social integration of Muslims in contemporary British society. Khan is associated with Equi, a thinktank which describes its work as “drawing on Muslim insight”. Other members of Equi are members of the government’s working group.

    Khan sets out three tests that a definition must pass, based on Islamic law, moral teachings within Islam and other more universal values. First, a definition must serve the public interest. Second, it must be just and balanced and preserve freedom of expression. Third, it must uphold the dignity of Muslim communities.

    For Khan, “Islamophobia, also known as anti-Muslim hatred, is an irrational fear, hostility, or prejudice toward Muslims that leads to discrimination, unequal treatment, exclusion, social and political marginalisation, or violence.”

    Khan’s definition has many good qualities. It brings together stronger elements of previous definitions – for example, the separation of negative attitudes and outcomes – without being weakened by jargon or strong political ideology. On the other hand, some social scientists may question whether defining something as “irrational” is a matter of preference rather than academic research.

    The working group also needs to decide whether Islamophobia and anti-Muslim hatred are closely related or exactly the same. Failure to do so will cause confusion and inconsistency among those wishing to apply the term precisely. Regardless, Khan’s example is a strong step in the right direction. A better definition of Islamophobia is needed – and it is now within reach.

    Julian Hargreaves is an Affiliated Researcher at the Prince Alwaleed bin Talal Centre of Islamic Studies, University of Cambridge.

    ref. Why is Islamophobia so hard to define? – https://theconversation.com/why-is-islamophobia-so-hard-to-define-258522


  • MIL-OSI Global: Toxic fungus from King Tutankhamun’s tomb yields cancer-fighting compounds – new study

    Source: The Conversation – UK – By Justin Stebbing, Professor of Biomedical Sciences, Anglia Ruskin University

    Miro Varcek / Shutterstock.com

    In November 1922, archaeologist Howard Carter peered through a small hole into the sealed tomb of King Tutankhamun. When asked if he could see anything, he replied: “Yes, wonderful things.” Within months, however, Carter’s financial backer Lord Carnarvon was dead from a mysterious illness. Over the following years, several other members of the excavation team would meet similar fates, fuelling legends of the “pharaoh’s curse” that have captivated the public imagination for just over a century.

    For decades, these mysterious deaths were attributed to supernatural forces. But modern science has revealed a more likely culprit: a toxic fungus known as Aspergillus flavus. Now, in an unexpected twist, this same deadly organism is being transformed into a powerful new weapon in the fight against cancer.

    Aspergillus flavus is a common mould found in soil, decaying vegetation and stored grains. It is infamous for its ability to survive in harsh environments, including the sealed chambers of ancient tombs, where it can lie dormant for thousands of years.

    When disturbed, the fungus releases spores that can cause severe respiratory infections, particularly in people with weakened immune systems. This may explain the so-called “curse” of King Tutankhamun and similar incidents, such as the deaths of several scientists who entered the tomb of Casimir IV in Poland in the 1970s. In both cases, investigations later found that A. flavus was present, and its toxins were probably responsible for the illnesses and deaths.

    Despite its deadly reputation, Aspergillus flavus is now at the centre of a remarkable scientific finding. Researchers at the University of Pennsylvania have discovered that this fungus produces a unique class of molecules with the potential to fight cancer.

    These molecules belong to a group called ribosomally synthesised and post-translationally modified peptides, or RiPPs. RiPPs are made by the ribosome – the cell’s protein factory – and are later chemically altered to enhance their function.

    While thousands of RiPPs have been identified in bacteria, only a handful have been found in fungi – until now.

    The process of finding these fungal RiPPs was far from simple. The research team screened a dozen different strains of Aspergillus, searching for chemical clues that might indicate the presence of these promising molecules. Aspergillus flavus quickly stood out as a prime candidate.

    The researchers compared the chemicals from different fungal strains to known RiPP compounds and found promising matches. To confirm their discovery, they switched off the relevant genes and, sure enough, the target chemicals vanished, proving they had found the source.

    Purifying these chemicals proved to be a significant challenge. However, this complexity is also what gives fungal RiPPs their remarkable biological activity.

    The team eventually succeeded in isolating four different RiPPs from Aspergillus flavus. These molecules shared a unique structure of interlocking rings, a feature that had never been described before. The researchers named these new compounds “asperigimycins”, after the fungus in which they were found.

    The next step was to test these asperigimycins against human cancer cells. In some cases, they stopped the growth of cancer cells, suggesting that asperigimycins could one day become a new treatment for certain types of cancer.

    The team also worked out how these chemicals get inside cancer cells. This discovery is significant because many chemicals, like asperigimycins, have medicinal properties but struggle to enter cells in large enough quantities to be useful. Knowing that particular fats (lipids) can enhance this process gives scientists a new tool for drug development.

    Further experiments revealed that asperigimycins probably disrupt the process of cell division in cancer cells. Cancer cells divide uncontrollably, and these compounds appear to block the formation of microtubules, the scaffolding inside cells that are essential for cell division.

    Tremendous untapped potential

    This disruption is specific to certain types of cells, so this may in turn reduce the risk of side-effects. But the discovery of asperigimycins is just the beginning. The researchers also identified similar clusters of genes in other fungi, suggesting that many more fungal RiPPs remain to be discovered.

    Almost all the fungal RiPPs found so far have strong biological activity, making this an area with tremendous untapped potential. The next step is to test asperigimycins in other systems and models, with the hope of eventually moving to human clinical trials. If successful, these molecules could join the ranks of other fungal-derived medicines, such as penicillin, which revolutionised modern medicine.

    The story of Aspergillus flavus is a powerful example of how nature can be both a source of danger and a wellspring of healing. For centuries, this fungus was feared as a silent killer lurking in ancient tombs, responsible for mysterious deaths and the legend of the pharaoh’s curse. Today, scientists are turning that fear into hope, harnessing the same deadly spores to create life-saving medicines.

    This transformation, from curse to cure, highlights the importance of continued exploration and innovation in the natural world. Nature has in fact provided us with an incredible pharmacy, filled with compounds that can heal as well as harm. It is up to scientists and engineers to uncover these secrets, using the latest technologies to identify, modify and test new molecules for their potential to treat disease.

    The discovery of asperigimycins is a reminder that even the most unlikely sources – such as a toxic tomb fungus – can hold the key to revolutionary new treatments. As researchers continue to explore the hidden world of fungi, who knows what other medical breakthroughs may lie just beneath the surface?

    Justin Stebbing does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Toxic fungus from King Tutankhamun’s tomb yields cancer-fighting compounds – new study – https://theconversation.com/toxic-fungus-from-king-tutankhamuns-tomb-yields-cancer-fighting-compounds-new-study-259706


  • MIL-OSI Global: When do we first feel pain?

    Source: The Conversation – UK – By Laurenz Casser, Leverhulme Trust Early Career Fellow, University of Sheffield

    Alina Troeva/Shutterstock.com

    At some point between conception and early childhood, pain makes its debut. But when exactly that happens remains one of medicine’s most challenging questions.

    Some have claimed that foetuses as young as twelve weeks can already be seen wincing in agony, while others have flat-out denied that even infants show any true signs of pain until long after birth.

    New research from University College London offers fresh insights into this puzzle. By mapping the development of pain-processing networks in the brain – what researchers call the “pain connectome” – scientists have begun to trace exactly when and how our capacity for pain emerges. What they discovered challenges simple answers about when pain “begins”.


    The researchers used advanced brain imaging to compare the neural networks of foetuses and infants with those of adults, tracking how different components of pain processing mature over time. Until about 32 weeks after conception, all pain-related brain networks remain significantly underdeveloped compared with adult brains. But then development accelerates dramatically.

    The sensory aspects of pain – the basic detection of harmful stimuli – mature first, becoming functional around 34 to 36 weeks of pregnancy. The emotional components that make pain distressing follow shortly after, developing between 36 and 38 weeks. However, the cognitive centres responsible for consciously interpreting and evaluating pain lag far behind, and remain largely immature by the time of birth, about 40 weeks after conception.

    This staged development suggests that while late-term foetuses and newborns can detect and respond to harmful stimuli, they probably experience pain very differently from older children and adults. Most significantly, newborns probably can’t consciously evaluate their pain – they can’t form the thought: “This hurts and it’s bad!”

    Does it hurt?
    Martin Valigursky/Shutterstock.com

    A history of changing views

    These findings represent the latest chapter in a long-running scientific debate that has swung dramatically over the centuries, often with profound consequences for medical practice.

    For most physiologists in the 18th and 19th centuries, the perceived delicacy of the infant’s body meant that it must be exquisitely sensitive to pain – so much so that some doubted whether infants ever felt anything else. Birth, in particular, was imagined to be an extremely painful event for a newborn.

    However, advances in embryology during the 1870s reversed this thinking. As scientists discovered that infant brains and nervous systems were far less developed than adult versions, many began questioning whether babies could truly feel pain at all. If the neural machinery wasn’t fully formed, how could genuine pain experiences exist?

    This scepticism had troubling practical consequences. For nearly a century, many doctors performed surgery on infants without anaesthesia, convinced that their patients were essentially immune to suffering. The practice continued well into the 1980s in some medical centres.

    Towards the end of the 20th century, public outrage about the medical treatment of infants and new scientific results turned the tables yet again. It was found that newborns exhibited many of the signs (neurological, physiological and behavioural) of pain after all, and that, if anything, pain in infants had probably been underestimated.

    The ambiguous brain

    The reason there has been endless disagreement about infant pain is that we cannot access infants’ experiences directly.

    Sure, we can observe their behaviour and study their brains, but these are not the same thing. Pain is an experience, something that’s felt in the privacy of a person’s own mind, and that’s inaccessible to anyone but the person whose pain it is.

    Of course, pain experiences are typically accompanied by telltale signs: be it the retraction of a body part from a sharp object or the increased activity of certain brain regions. Those we can measure. But the trouble is that no one behaviour or brain event is ever unambiguous.

    The fact that an infant pulls back their hand from a pin prick may mean that they experience the prick as painful, but it may also just be an unconscious reflex. Similarly, the fact that the brain is simultaneously showing pain-related activity may be a sign of pain, but it may also be that the processing unfolds entirely unconsciously. We simply don’t know.

    Perhaps the infant knows. But even if they do, they can’t tell us about their experiences yet, and until they can, scientists are left guessing. Fortunately, their guesses are becoming increasingly well informed, but for now, that is all they can be – guesses.

    What would it take to get certainty? Well, it would require an explanation that connects our brains and behaviour to our conscious experiences. But so far, no scientifically respectable explanation of this kind has been forthcoming.

    Laurenz Casser receives funding from the Leverhulme Trust.

    ref. When do we first feel pain? – https://theconversation.com/when-do-we-first-feel-pain-259588


  • MIL-OSI Global: From Roman drains to ancient filters, these artefacts show how solutions to water contamination have evolved

    Source: The Conversation – UK – By Rosa Busquets, Associate Professor, School of Life Sciences, Pharmacy and Chemistry, Kingston University

    Thirst: In Search of Freshwater, an exhibition at Wellcome Collection. Benjamin Gilbert., CC BY-NC-ND

    A new exhibition in London (open until February 2026) called Thirst: In search of freshwater highlights how civilisations have treasured – and been intrinsically linked to – safe, clean water.

    As a chemist, I research how freshwater is polluted by modern civilisation. Common contaminants in rivers include pharmaceuticals, microplastics (which degrade further when exposed to sunlight and wave power) and forever chemicals, or per- and polyfluoroalkyl substances (PFAS), some of which are carcinogenic.

    Synthetic toxic chemicals are introduced into the environment from the products we make, use and dispose of. This wasn’t a problem centuries ago, when manufacturing industries and technologies were entirely different.

    Some, such as PFAS from stain-resistant textiles or nonstick materials such as cookware, can be particularly difficult to remove from wastewater. PFAS don’t degrade easily, they resist conventional heat treatments and can easily pass through wastewater treatments, so they contaminate rivers or lakes that are sources of our drinking water.


    Testing for pollutants is even more critical in developing nations that lack sanitation and face drought or flooding. Protecting and conserving drinking water and its sources is as relevant today as it has always been.

    For this exhibition, curator at the Wellcome Collection in London, Janice Li, has selected 125 historical objects, photographs and feats of engineering that link to drought, rain, glaciers, rivers and lakes. These three artefacts from Thirst illustrate how our relationship with water contamination has evolved:

    1. Ancient water filters

    Made from natural materials such as clay, water jug filters have been used for hundreds of years, on every continent, by ancient civilisations. They show that purifying water for drinking was commonplace. The sand and soil particles naturally suspended in water, which these filters removed, would have carried microbes.

    Water jug filters with Arabic inscription, found in Egypt, dating back to 900–1200.
    Victoria and Albert Museum London/Wellcome Collection, CC BY-NC-ND

    But in ancient times, pharmaceuticals and other drugs, pesticides, forever chemicals and microplastics would not have been a problem. Those filters could work relatively well despite being made of simple materials with wide pores.

    Today, those ancient filters would no longer be effective. Modern water filters are made using more advanced materials which typically have small pores (called micropores and mesopores). For example, filters often include activated carbon (a highly porous type of carbon that can be manufactured to capture contaminants) or membranes that filter water. Only then is it safe for people to drink.




    Read more:
    Forever chemicals are in our drinking water – here’s how to reduce them


    2. Roman water pipes

    Lead water pipes (known as fistulae) were part of a relatively advanced plumbing system that distributed drinking water throughout Roman cities. Lead pipes are still common in water systems today: in the US, about 9.2 million lead service lines remain in use. Exposure to lead causes severe human health problems, and lead exposure (not only from drinking water) was attributed to more than 1.5 million deaths in 2021.

    A Roman lead water pipe that dates back to 1-300CE.
    Courtesy of Wellcome Collection/Science Museum Group., CC BY-NC-ND

    It’s now understood that lead is neurotoxic and it can diffuse or spread from the pipes to drinking water. Lead from paints and batteries, including car batteries, can also contaminate drinking water.

    To protect us from lead leaching or flaking off from pipes, some government agencies are calling for the replacement of lead pipes with copper or plastic pipes. Water companies routinely add phosphates (mined powder that contains phosphorus) to drinking water to help capture potential lead contamination and make it safe to drink.

    3. The horror of unhealthy water

    A caricature titled Monster Soup by the artist William Heath (1828) is part of the Wellcome Trust’s permanent collection. Its captions read “microcosms dedicated to the London Water companies” and “Monster soup, commonly called Thames Water being a correct representation of the precious stuff doled out to us”. The cartoon shows a lady so terrified at the sight of microbes in river water from the Thames that she drops her cup of tea.

    Monster Soup by William Heath.
    Courtesy of the Wellcome Collection., CC BY-NC-ND

    Even today, many people remain shocked at toxic contamination in rivers, and sewage pollution prevents people from swimming.

    By 2030, 2 billion people will still not have safely managed drinking water and 1.2 billion will lack basic hygiene services. Drinking water will still be contaminated by bacteria such as E. coli and other dangerous pathogens that cause waterborne diseases. So advancing technologies to filter out contamination will be just as crucial in the future as it has been in the past.




    Rosa Busquets receives funding from UKRI/ EU Horizons MSCA Staff exchanges Clean Water project 101131182, DASA, project ACC6093561. She is affiliated with Kingston University, UCL, Al-Farabi Kazakh National University, UNEP EEAP.

    ref. From Roman drains to ancient filters, these artefacts show how solutions to water contamination have evolved – https://theconversation.com/from-roman-drains-to-ancient-filters-these-artefacts-show-how-solutions-to-water-contamination-have-evolved-253876

    MIL OSI – Global Reports

  • MIL-OSI Global: How Trump plays with new media says a lot about him – as it did with FDR, Kennedy and Obama

    Source: The Conversation – UK – By Sara Polak, University Lecturer in American Studies, Leiden University

    There is a strange and worrying parallel between the breakneck speed at which Donald Trump has operated in the first few months of his presidency and the ever-accelerating pace at which information moves on social media platforms. Where in his first term he used Twitter, now, the 47th US president is using his own platform, TruthSocial, to announce changes of direction that are sometimes so fundamental that they change decades of US policy.

    Social media has become a key tool of governing for Trump’s administration. He uses it both to make announcements and to drum up support for those announcements. His social media posts can move the markets and make or break careers. They can even, it seems, stop wars.

    When he used TruthSocial to announce a ceasefire between Israel and Iran on June 23, giving the two countries a deadline to stop firing missiles, neither of the antagonists appeared to be fully aware of the situation, given they carried on attacking each other. So an all-caps message followed: “ISRAEL. DO NOT DROP THOSE BOMBS,” he posted. “BRING YOUR PILOTS HOME, NOW!” – adding, just in case anyone had any doubt he was serious: “DONALD J. TRUMP, PRESIDENT OF THE UNITED STATES.”

    Trump’s use of his TruthSocial platform began as he sought to return from the political wilderness after the insurrection of January 6 2021. It has since become a tool of his extreme power, and of his willingness to use (and abuse) that power – globally as well as domestically.




    He’s the latest in a string of US presidents known for their adroit use of whichever medium is most likely to connect with the greatest number of people. From Theodore “Teddy” Roosevelt’s cultivation of print journalists in the early 20th century, through Franklin D. Roosevelt’s comforting use of radio as it gained popularity, to John F. Kennedy’s mastery of the rising medium of television, presidents have expanded their reach and influence through adept use of media.

    FDR’s “fireside chats”, broadcast on the radio throughout the US in the 1930s, reached an estimated 80% of the population, showing he understood the key media principle of reach. Roosevelt would address his listeners as “my friends” and Americans came to understand them as seemingly intimate conversations with their president.

    FDR dominated the airwaves at a time when many Americans hardly understood the important role that the federal government played in their own lives – and millions of households were only just getting mains electricity (thanks to the Rural Electrification Act of 1936). But radio was becoming a common mass medium, and FDR perfectly understood how to use it. If you listen to the fireside chats, FDR may sound patrician – and at times formal – but his tone is also friendly, thoughtful and reassuring.

    In Germany at around the same time, Adolf Hitler’s massive stadium speeches were highly effective for those in the stadium, lifted by the intensity of the crowd and the carefully thought-out visual cues. But when broadcast on radio, Hitler had nothing like Roosevelt’s ability to connect with people on a personal level.

    Roosevelt was hardly the first leader – or even the first US president – to speak on the radio. But he was the first to master the medium. He figured out how to use its potential to deliver a key implicit message: that his government should and did take on a central role in people’s lives.

    Equally, John F. Kennedy can be said to have “discovered” political television. Not just as a medium for political campaigns, debates and speeches – but also for putting across to a mass audience his role as the embodiment of American decency, beauty and masculinity: JFK’s White House as Camelot.

    JFK was considered a master of the fast-growing medium of television.

    Both Roosevelt and Kennedy were in several ways physically disabled and lived with chronic illness, yet through the “new medium” of their time were able to project an image of quintessentially American strength and trustworthiness. In part this was their own doing – but it’s also a testament to the power of the media they used for their time.

    Mastering the medium

    These possibilities of a medium used to its best advantage – for example, to be heard around the US, but still to project a sense of intimacy – have become known as the “affordances” of a medium. The medium afforded Roosevelt space to be authentic without showing his disability. Kennedy appeared young, fit and handsome – even when dependent on painkillers.

    When a new medium is introduced, people start to play around with its affordances – and this applies to politicians too. Political leaders who develop a special aptitude for using the new medium to emphasise their unique style can become particularly successful, as has Donald Trump with his use of social media.

    The US president rose to power helped by his adept use of Twitter’s attributes: the imposed brevity of messages, the ease of retweeting, and the tendency for other users to “pile on” (encouraged by user anonymity), all of which helped polarise American public debate.

    Trump was forced off Twitter after the Capitol Hill insurrection of January 6 2021. So he came back with his own platform, TruthSocial, where he can also make the rules. And now he uses the platform to make foreign policy, trumpeting his positions (which can change with bewildering speed) on TruthSocial well before they can be announced by the White House press team, which often has to scramble to catch up.

    When Canadian communication theorist Marshall McLuhan penned his famous phrase: “The medium is the message” in his groundbreaking 1964 study, Understanding Media: The Extensions of Man, he meant to say that media form and content are not as distinct from one another as one might think and that the form of a medium of communication can shape society as much as its content. In Donald Trump’s use of social media, we are seeing this idea at work.

    Sara Polak does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How Trump plays with new media says a lot about him – as it did with FDR, Kennedy and Obama – https://theconversation.com/how-trump-plays-with-new-media-says-a-lot-about-him-as-it-did-with-fdr-kennedy-and-obama-248923

    MIL OSI – Global Reports

  • MIL-OSI Global: Why Asos should be wary of banning customers returning unwanted goods

    Source: The Conversation – UK – By Nic Sanders, Senior Lecturer in Management and Marketing, University of Westminster

    ‘Now where’s that returns label?’ Cast of Thousands/Shutterstock

    Shopping for clothes online is a risky business. How do you know if that top will be a good fit, or those shoes will definitely be the right colour? One popular solution to this predicament is to order lots of tops and lots of shoes, try them on at home, and send back all the ones you don’t want – often at no cost.

    But that tactic can be expensive for the fashion retailer, which needs to pay for all those deliveries and returns. And now Asos, which sends millions of shipments every month, has started banning some customers for over-returning items – prompting something of a backlash.

    The response by the retail giant, which says it wants to maintain a “commitment to offering free returns to all customers across all core markets”, also raises questions about the sustainability of the online fashion business model which Asos helped to create.

    Many online retailers rely on the emotional highs of shopping. The excitement of placing an order, the anticipation of delivery and the dopamine hit of unpacking a purchase are central to the customer experience.




    Online shopping generally has thrived on impulsive buying, with the option of returning items treated as a normal part of the process. Of course, even in the days before online shopping there would be customers who routinely returned items.

    But by digitising and simplifying the process, the likes of Asos have helped this to happen on a massive scale. Shoppers have become completely used to ordering multiple sizes or styles with the express intention of returning most of the items they receive. Their homes effectively become fitting rooms.

    And those customers could reasonably argue that online retailers often use digital strategies which encourage multi-item purchases.

    Some sites remind shoppers of recently viewed products and provide suggestions of similar items, for example. There may be prompts and nudges towards clothes which are frequently bought together.

    Items are then sometimes temporarily reserved in a shopper’s basket for 60 minutes, creating a sense of urgency. Targeted emails and limited-time offers drive bulging shopping baskets, encouraging more risky purchases and more returns.

    Yet returned items carry a significant cost. They may be unfit for resale and ultimately disposed of, which, beyond the financial burden, has an environmental price.

    In addition to creating landfill, each delivery and return has a carbon footprint. And although many younger consumers express support for sustainable practices, their buying behaviour continues to prioritise price and convenience.

    But free returns have become part of the online fashion industry landscape. Research suggests that customers are simply more likely to buy something if returns are free.

    And today’s tricky financial climate, marked by inflation and rising living costs, will surely have made consumers even more cautious. Many will be reluctant to buy items that incur delivery and return costs.

    Shopping around

    Frustrations can then arise from unclear return policies, often buried in lengthy terms and conditions documents. Some of those banned by Asos say they were confused about the rules.

    Automated customer service systems offering generic responses may then leave shoppers with no clear way to challenge these decisions.

    Perhaps the wider issue here is that online shopping cannot fully replicate the benefits of shopping in store. In physical shops, customers can try on items before deciding.

    But online, this can’t happen, so returns become fundamental to the decision-making process. For cost-conscious shoppers, avoiding unnecessary spending is essential. But if returns policies become harder to access, they may turn to other retailers which offer more certainty.

    Return to sender?
    A08/Shutterstock

    For example, retailers such as Zara and H&M, with a business model which mixes online convenience with a high street (or shopping mall) presence, offer the option to order online and then return in person.

    This hybrid (or “omni-channel”) model appears to be driving consumers to physical shops for a blended experience which provides convenience and helps reduce return costs.

    For Asos, doing something similar would require major investment (in bricks and mortar) and increased operational costs – so is perhaps an unlikely solution for the company.

    But to balance sustainability, cost and customer satisfaction, Asos could explore other options. These might include clearer, more visible communication regarding “fair use” policies and their consequences. It could aim for more human interactions and better dialogue with customers it plans to ban.

    Offering physical retail locations or return collection points would simplify the process, reduce environmental impact and costs, and give customers more flexibility. Together, these steps would help create a better customer service experience.

    Ultimately, Asos and other similar online clothing retailers must evolve. With changing consumer expectations, a challenging economic climate and rising operational costs, the model that defined these retailers’ early success cannot remain unchanged.

    If they make adjustments, they may emerge stronger. If they do not, they risk sparking a customer exodus that would be hard to reverse.

    Nic Sanders does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why Asos should be wary of banning customers returning unwanted goods – https://theconversation.com/why-asos-should-be-wary-of-banning-customers-returning-unwanted-goods-259952

    MIL OSI – Global Reports

  • MIL-OSI Global: Humans and animals can both think logically − but testing what kind of logic they’re using is tricky

    Source: The Conversation – USA – By Olga Lazareva, Professor of Psychology, Drake University

    For some mental processes, humans and animals likely follow similar lines of thinking. Catherine Falls Commercial/Moment via Getty Images

    Can a monkey, a pigeon or a fish reason like a person? It’s a question scientists have been testing in increasingly creative ways – and what we’ve found so far paints a more complicated picture than you’d think.

    Imagine you’re filling out a March Madness bracket. You hear that Team A beat Team B, and Team B beat Team C – so you assume Team A is probably better than Team C. That’s a kind of logical reasoning known as transitive inference. It’s so automatic that you barely notice you’re doing it.

    It turns out humans are not the only ones who can make these kinds of mental leaps. In labs around the world, researchers have tested many animals, from primates to birds to insects, on tasks designed to probe transitive inference, and most pass with flying colors.

    As a scientist focused on animal learning and behavior, I work with pigeons to understand how they make sense of relationships, patterns and rules. In other words, I study the minds of animals that will never fill out a March Madness bracket – but might still be able to guess the winner.

    Logic test without words

    The basic idea is simple: If an animal learns that A is better than B, and B is better than C, can it figure out that A is better than C – even though it’s never seen A and C together?

    In the lab, researchers test this by giving animals randomly paired images, one pair at a time, and rewarding them with food for picking the correct one. For example, animals learn that a photo of hands (A) is correct when paired with a classroom (B), a classroom (B) is correct when paired with bushes (C), bushes (C) are correct when paired with a highway (D), and a highway (D) is correct when paired with a sunset (E). We don’t know whether they “understand” what’s in the picture, and it is not particularly important for the experiment that they do.

    In a transitive inference task, subjects learn a series of rewarded pairs – such as A+ vs. B–, B+ vs. C– – and are later tested on novel pairings, like B vs. D, to see whether they infer an overall ranking.
    Olga Lazareva, CC BY-ND

    One possible explanation is that animals that learn all the pairs create a mental ranking of the images: A > B > C > D > E. We test this idea by giving them new pairs they’ve never seen before, such as classroom (B) vs. highway (D). If they consistently pick the higher-ranked item, they’ve inferred the underlying order.
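    The pair-training and novel-pair probe just described can be sketched in a few lines of Python. This is a minimal illustration, not the study’s analysis code; the transitive closure of the trained “winner beats loser” pairs stands in for whatever mental ranking the animal forms:

    ```python
    # Labels follow the article's example: hands (A), classroom (B),
    # bushes (C), highway (D), sunset (E). Each pair is (winner, loser).
    training_pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]

    def transitive_closure(pairs):
        """Repeatedly add inferred pairs: if a beats b and b beats c, a beats c."""
        beats = set(pairs)
        changed = True
        while changed:
            changed = False
            for a, b in list(beats):
                for c, d in list(beats):
                    if b == c and (a, d) not in beats:
                        beats.add((a, d))
                        changed = True
        return beats

    def choose(beats, x, y):
        """Pick the higher-ranked item of a (possibly never-trained) pair."""
        if (x, y) in beats:
            return x
        if (y, x) in beats:
            return y
        return None  # no trained or inferable relation

    beats = transitive_closure(training_pairs)
    choose(beats, "B", "D")  # "B": B and D were never shown together
    ```

    An animal whose choices on novel pairs consistently match `choose` behaves as if it holds the inferred order A > B > C > D > E.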

    What’s fascinating is how many species succeed at this task. Monkeys, rats, pigeons – even fish and wasps – have all demonstrated transitive inference in one form or another.

    The twist: Not all tasks are easy

    But not all types of reasoning come so easily. There’s another kind of rule called transitivity that is different from transitive inference, despite the similar name. Instead of asking which picture is better, transitivity is about equivalence.

    In this task, animals are shown a set of three pictures and asked which one goes with the center image. For example, if white triangle (A1) is shown, choosing red square (B1) earns a reward, while choosing blue square (B2) does not. Later, when red square (B1) is shown, choosing white cross (C1) earns a reward while choosing white circle (C2) does not. Now comes the test: white triangle (A1) is shown with white cross (C1) and white circle (C2) as choices. If they pick white cross (C1), then they’ve demonstrated transitivity.

    In a transitivity task, subjects learn matching rules across overlapping sets – such as A1 matches B1, B1 matches C1 – and are tested on new combinations, such as A1 with C1 or C2, to assess whether they infer the relationship between A1 and C1.
    Olga Lazareva, CC BY-ND
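    The equivalence chain in this task can be sketched the same way. Again this is a hypothetical illustration: the `matches` helper simply follows the trained links, which is what a subject showing transitivity would effectively be doing:

    ```python
    # Trained matching rules form a chain A1 -> B1 -> C1; the probe shows
    # A1 with C1 and C2 as options.
    trained = {"A1": "B1", "B1": "C1"}  # sample -> rewarded comparison

    def matches(trained, sample, options):
        """Follow trained links from the sample until an option is reached."""
        seen, current = set(), sample
        while current in trained and current not in seen:
            seen.add(current)
            current = trained[current]
            if current in options:
                return current
        return None  # no route from the sample to any option

    matches(trained, "A1", {"C1", "C2"})  # "C1", if the subject shows transitivity
    ```

    A subject that treats A1 and C1 as unrelated, as many animals do, corresponds to choosing at chance between the options rather than following the chain.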

    The change may seem small, but species that succeed in those first transitive inference tasks often stumble here. In fact, they tend to treat the white triangle and the white cross as completely separate things, despite their common relationship with the red square. In my recently published review of research using the two tasks, I concluded that more evidence is needed to determine whether these tests tap into the same cognitive ability.

    Small differences, big consequences

    Why does the difference between transitive inference and transitivity matter? At first glance, they may seem like two versions of the same ability – logical reasoning. But when animals succeed at one and struggle with the other, it raises an important question: Are these tasks measuring the same kind of thinking?

    The apparent difference between the two tasks isn’t just a quirk of animal behavior. Psychology researchers apply these tasks to humans in order to draw conclusions about how people reason.

    For example, say you’re trying to pick a new almond milk. You know that Brand A is creamier than Brand B, and your friend told you that Brand C is even waterier than Brand B. Based on that, because you like a thicker milk, you might assume Brand A is better than Brand C, an example of transitive inference.

    But now imagine the store labels both Brand A and Brand C as “barista blends.” Even without tasting them, you might treat them as functionally equivalent, because they belong to the same category. That’s more like transitivity, where items are grouped based on shared relationships. In this case, “barista blend” signals the brands share similar quality.

    How researchers define logical reasoning determines how they interpret results.
    Svetlana Mishchenko/iStock via Getty Images

    Researchers often treat these types of reasoning as measuring the same ability. But if they rely on different mental processes, they might not be interchangeable. In other words, the way scientists ask their questions may shape the answer – and that has big implications for how they interpret success in animals and in people.

    This difference could affect how researchers interpret decision-making not only in the lab, but also in everyday choices and in clinical settings. Tasks like these are sometimes used in research on autism, brain injury or age-related cognitive decline.

    If two tasks look similar on the surface, then choosing the wrong one might lead to inaccurate conclusions about someone’s cognitive abilities. That’s why ongoing work in my lab is exploring whether the same distinction between these logical processes holds true for people.

    Just like a March Madness bracket doesn’t always predict the winner, a reasoning task doesn’t always show how someone got to the right answer. That’s the puzzle researchers are still working on – figuring out whether different tasks really tap into the same kind of thinking or just look like they do. It’s what keeps scientists like me in the lab, asking questions, running experiments and trying to understand what it really means to reason – no matter who’s doing the thinking.

    Olga Lazareva does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Humans and animals can both think logically − but testing what kind of logic they’re using is tricky – https://theconversation.com/humans-and-animals-can-both-think-logically-but-testing-what-kind-of-logic-theyre-using-is-tricky-253001

    MIL OSI – Global Reports

  • MIL-Evening Report: Antarctic summer sea ice is at record lows. Here’s how it will harm the planet – and us

    Source: The Conversation (Au and NZ) – By Edward Doddridge, Senior Research Associate in Physical Oceanography, University of Tasmania

    An icebreaker approaches Denman Glacier in March, when there was 70% less Antarctic sea ice than usual. Pete Harmsen AAD

    On her first dedicated scientific voyage to Antarctica in March, the Australian icebreaker RSV Nuyina found the area free of sea ice. Scientists were able to reach places never sampled before.

    Over the past four summers, Antarctic sea ice extent has hit new lows.

    I’m part of a large group of scientists who set out to explore the consequences of summer sea ice loss after the record lows of 2022 and 2023. Together we rounded up the latest publications, then gathered new evidence using satellites, computer modelling, and robotic ocean sampling devices. Today we can finally reveal what we found.

    It’s bad news on many levels, because Antarctic sea ice is vital for the world’s climate and ecosystems. But we need to get a grip on what’s happening – and use this concerning data to prompt faster action on climate change.

    Sea ice around Antarctica waxes and wanes with the seasons, growing in the cold months and melting in warm ones. But this rhythmic cycle is changing.

    What we did and what we found

    Our team used a huge range of approaches to study the consequences of sea ice loss.

    We used satellites to understand sea ice loss over summer, measuring everything from ice thickness and extent to the length of time each year when sea ice is absent.

    Satellite data was also used to calculate how much of the Antarctic coast was exposed to open ocean waves. We were then able to quantify the relationship between sea ice loss and iceberg calving.

    Data from free-drifting ocean robots was used to understand how sea ice loss affects the tiny plants that support the marine food web.

    Every other kind of available data was then harnessed to explore the full impact of sea ice changes on ecosystems.

    Voyage reports from international colleagues came in handy when studying how sea ice loss affected Antarctic resupply missions.

    We also used computer models to simulate the impact of dramatic summer sea ice loss on the ocean.

    In summary, our extensive research reveals four key consequences of summer sea ice loss in Antarctica.

    1. Ocean warming is compounding

    Bright white sea ice reflects about 90% of the incoming energy from sunlight, while the darker ocean absorbs about 90%. So if there’s less summer sea ice, the ocean absorbs much more heat.
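    The arithmetic behind that contrast is simple. In this sketch, the roughly 90% figures come from the article, while the incoming solar flux is an assumed round number used purely for illustration:

    ```python
    # Energy absorbed per square metre of ice-covered vs. open ocean.
    incoming = 200.0                      # W/m^2 of sunlight: assumed value
    absorbed_by_ice = 0.10 * incoming     # sea ice reflects ~90%, absorbs ~10%
    absorbed_by_ocean = 0.90 * incoming   # dark open ocean absorbs ~90%
    ratio = absorbed_by_ocean / absorbed_by_ice  # ~9x more heat into open water
    ```

    Whatever the actual flux, the ratio is the point: each square metre of newly open water takes up roughly nine times as much solar heat as the ice it replaced.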

    This means the ocean surface warms more in an extreme low sea ice year, such as 2016 – when everything changed.

    Until recently, the Southern Ocean would reset over winter. If there was a summer with low sea ice cover, the ocean would warm a bit. But over winter, the extra heat would shift into the atmosphere.

    That’s not working anymore. We know this from measuring sea surface temperatures, but we have also confirmed this relationship using computer models.

    What’s happening instead is when summer sea ice is very low, as in 2016, it triggers ocean warming that persists. It takes about three years for the system to fully recover. But recovery is becoming less and less likely, given warming is building from year to year.

    Comparing an average sea ice summer (a) to an extreme low sea ice summer (b) in which there is less sea ice for wildlife and more sunlight is absorbed by the ocean. The ice shelf is more exposed to ocean waves, calving more icebergs. The ocean is also less productive and tourist vessels can make a closer approach.
    Doddridge, E.W., et al. (2025) PNAS Nexus., CC BY-NC-ND

    2. More icebergs are forming

    Sea ice protects Antarctica’s coast from ocean waves.

    On average, about a third of the continent’s coastline is exposed over summer. But this is changing. In 2022 and 2023, more than half of the Antarctic coast was exposed.

    Our research shows more icebergs break away from Antarctic ice sheets in years with less sea ice. During an average summer, about 100 icebergs break away. Summers with low sea ice produce about twice as many icebergs.

    Antarctic ice sheets without sea ice are more exposed to waves.
    Pete Harmsen AAD

    3. Wildlife squeezed off the ice

    Many species of seals and penguins rely on sea ice, especially for breeding and moulting.

    Entire colonies of emperor penguins experienced “catastrophic breeding failure” in 2022, when sea ice melted before chicks were ready to go to sea.

    After giving birth, crabeater seals need large, stable sea ice platforms for 2–3 weeks until their pups are weaned. The ice provides shelter and protection from predators. Less summer sea-ice cover makes large platforms harder to find.

    Many seal and penguin species also take refuge on the sea ice when moulting. These species must avoid the icy water while their new feathers or fur grows, or risk dying of hypothermia.

    4. Logistical challenges at the end of the world

    Low summer sea ice makes it harder for people working in Antarctica. Shrinking summer sea ice will narrow the time window during which Antarctic bases can be resupplied over the ice. These bases may soon need to be resupplied from different locations, or using more difficult methods such as small boats.

    Supply ships typically unload their cargo directly onto the sea ice, but that may have to change.
    Jared McGhie, Australian Antarctic Division

    No longer safe

    Antarctic sea ice began to change rapidly in 2015 and 2016. Since then it has remained well below the long-term average.

    The dataset we use relies on measurements from US Department of Defense satellites. Late last month, the department announced it would no longer provide this data to the scientific community. While this has since been delayed to July 31, significant uncertainty remains.

    One of the biggest challenges in climate science is gathering and maintaining consistent long-term datasets. Without these, we don’t accurately know how much our climate is changing. Observing the entire Earth is hard enough when we all work together. It’s going to be almost impossible if we don’t share our data.

    Antarctic sea ice extent anomalies (the difference between the long-term average and the measurement) for the entire satellite record since the late 1970s.
    Edward Doddridge, using data from the US NSIDC Sea Ice Index, version 3., CC BY

    Recent low sea ice summers present a scientific challenge. The system is currently changing faster than our scientific community can study it.

    But vanishing sea ice also presents a challenge to society. The only way to prevent even more drastic changes in the future is to rapidly transition away from fossil fuels and reach net zero emissions.

    Edward Doddridge receives funding from the Australian Research Council.

    ref. Antarctic summer sea ice is at record lows. Here’s how it will harm the planet – and us – https://theconversation.com/antarctic-summer-sea-ice-is-at-record-lows-heres-how-it-will-harm-the-planet-and-us-256104

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Antarctic summer sea ice is at record lows. Here’s how it will harm the planet – and us

    Source: The Conversation (Au and NZ) – By Edward Doddridge, Senior Research Associate in Physical Oceanography, University of Tasmania

    An icebreaker approaches Denman Glacier in March, when there was 70% less Antarctic sea ice than usual. Pete Harmsen AAD

    On her first dedicated scientific voyage to Antarctica in March, the Australian icebreaker RSV Nuyina found the area sea-ice free. Scientists were able to reach places never sampled before.

    Over the past four summers, Antarctic sea ice extent has hit new lows.

    I’m part of a large group of scientists who set out to explore the consequences of summer sea ice loss after the record lows of 2022 and 2023. Together we rounded up the latest publications, then gathered new evidence using satellites, computer modelling, and robotic ocean sampling devices. Today we can finally reveal what we found.

    It’s bad news on many levels, because Antarctic sea ice is vital for the world’s climate and ecosystems. But we need to get a grip on what’s happening – and use this concerning data to prompt faster action on climate change.

    Sea ice around Antarctica waxes and wanes with the seasons, growing in the cold months and melting in warm ones. But this rhythmic cycle is changing.

    What we did and what we found

    Our team used a huge range of approaches to study the consequences of sea ice loss.

    We used satellites to understand sea ice loss over summer, measuring everything from ice thickness and extent to the length of time each year when sea ice is absent.

    Satellite data was also used to calculate how much of the Antarctic coast was exposed to open ocean waves. We were then able to quantify the relationship between sea ice loss and iceberg calving.

    Data from free-drifting ocean robots was used to understand how sea ice loss affects the tiny plants that support the marine food web.

    Every other kind of available data was then harnessed to explore the full impact of sea ice changes on ecosystems.

    Voyage reports from international colleagues came in handy when studying how sea ice loss affected Antarctic resupply missions.

    We also used computer models to simulate the impact of dramatic summer sea ice loss on the ocean.

    In summary, our extensive research reveals four key consequences of summer sea ice loss in Antarctica.

    1. Ocean warming is compounding

    Bright white sea ice reflects about 90% of the incoming energy from sunlight, while the darker ocean absorbs about 90%. So if there’s less summer sea ice, the ocean absorbs much more heat.
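The albedo contrast described above can be sketched numerically. This is an illustrative calculation only, using the approximate round figures from the text (ice reflecting ~90% of sunlight, open ocean absorbing ~90%), not measured data:

```python
# Illustrative sketch of the ice-albedo contrast described above.
# The 90%/10% figures are the approximate values given in the text.

incoming = 100.0  # arbitrary units of incoming solar energy

ice_albedo = 0.90    # sea ice reflects about 90% of sunlight
ocean_albedo = 0.10  # open ocean reflects only about 10%

absorbed_by_ice_covered = incoming * (1 - ice_albedo)   # ~10 units
absorbed_by_open_ocean = incoming * (1 - ocean_albedo)  # ~90 units

# Open ocean absorbs roughly nine times more solar energy than an
# ice-covered surface, which is why losing summer sea ice lets the
# ocean warm so much faster.
ratio = absorbed_by_open_ocean / absorbed_by_ice_covered
print(round(ratio))  # ~9
```

The ninefold difference in absorbed energy is the mechanism behind the compounding warming the article describes.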

    This means the ocean surface warms more in an extreme low sea ice year, such as 2016 – when everything changed.

    Until recently, the Southern Ocean would reset over winter. If there was a summer with low sea ice cover, the ocean would warm a bit. But over winter, the extra heat would shift into the atmosphere.

    That’s not working anymore. We know this from measuring sea surface temperatures, but we have also confirmed this relationship using computer models.

    What’s happening instead is when summer sea ice is very low, as in 2016, it triggers ocean warming that persists. It takes about three years for the system to fully recover. But recovery is becoming less and less likely, given warming is building from year to year.

    Comparing an average sea ice summer (a) to an extreme low sea ice summer (b) in which there is less sea ice for wildlife and more sunlight is absorbed by the ocean. The ice shelf is more exposed to ocean waves, calving more icebergs. The ocean is also less productive and tourist vessels can make a closer approach.
Doddridge, E. W., et al. (2025) PNAS Nexus, CC BY-NC-ND

    2. More icebergs are forming

    Sea ice protects Antarctica’s coast from ocean waves.

    On average, about a third of the continent’s coastline is exposed over summer. But this is changing. In 2022 and 2023, more than half of the Antarctic coast was exposed.

    Our research shows more icebergs break away from Antarctic ice sheets in years with less sea ice. During an average summer, about 100 icebergs break away. Summers with low sea ice produce about twice as many icebergs.

    Antarctic ice sheets without sea ice are more exposed to waves.
Pete Harmsen/AAD

    3. Wildlife squeezed off the ice

    Many species of seals and penguins rely on sea ice, especially for breeding and moulting.

    Entire colonies of emperor penguins experienced “catastrophic breeding failure” in 2022, when sea ice melted before chicks were ready to go to sea.

    After giving birth, crabeater seals need large, stable sea ice platforms for 2–3 weeks until their pups are weaned. The ice provides shelter and protection from predators. Less summer sea-ice cover makes large platforms harder to find.

    Many seal and penguin species also take refuge on the sea ice when moulting. These species must avoid the icy water while their new feathers or fur grows, or risk dying of hypothermia.

    4. Logistical challenges at the end of the world

    Low summer sea ice makes it harder for people working in Antarctica. Shrinking summer sea ice will narrow the time window during which Antarctic bases can be resupplied over the ice. These bases may soon need to be resupplied from different locations, or using more difficult methods such as small boats.

    Supply ships typically unload their cargo directly onto the sea ice, but that may have to change.
    Jared McGhie, Australian Antarctic Division

    No longer safe

Antarctic sea ice began to change rapidly in 2015 and 2016. Since then it has remained well below the long-term average.

    The dataset we use relies on measurements from US Department of Defense satellites. Late last month, the department announced it would no longer provide this data to the scientific community. While this has since been delayed to July 31, significant uncertainty remains.

    One of the biggest challenges in climate science is gathering and maintaining consistent long-term datasets. Without these, we don’t accurately know how much our climate is changing. Observing the entire Earth is hard enough when we all work together. It’s going to be almost impossible if we don’t share our data.

    Antarctic sea ice extent anomalies (the difference between the long-term average and the measurement) for the entire satellite record since the late 1970s.
Edward Doddridge, using data from the US NSIDC Sea Ice Index, version 3, CC BY
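An anomaly, as defined in the caption above, is simply the measurement minus the long-term average for the same time of year. A minimal sketch of that calculation, using invented illustrative extents rather than real NSIDC Sea Ice Index data:

```python
# Sketch of the anomaly calculation described in the caption:
#   anomaly = measurement - long-term average for the same month.
# The extents below are invented illustrative values (million km^2),
# NOT real NSIDC Sea Ice Index data.

from statistics import mean

# Hypothetical February sea ice extents for a handful of years
feb_extents = {1981: 2.9, 1982: 3.1, 1983: 3.0, 2023: 1.8}

# Long-term average computed over a chosen baseline period
baseline_years = [1981, 1982, 1983]
baseline = mean(feb_extents[y] for y in baseline_years)  # ~3.0

# Anomaly for a recent low-ice year: negative means below average
anomaly_2023 = feb_extents[2023] - baseline
print(round(anomaly_2023, 2))  # -1.2
```

A negative anomaly, as in this made-up 2023 value, corresponds to the below-average summers the article describes.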

    Recent low sea ice summers present a scientific challenge. The system is currently changing faster than our scientific community can study it.

    But vanishing sea ice also presents a challenge to society. The only way to prevent even more drastic changes in the future is to rapidly transition away from fossil fuels and reach net zero emissions.

    Edward Doddridge receives funding from the Australian Research Council.

    ref. Antarctic summer sea ice is at record lows. Here’s how it will harm the planet – and us – https://theconversation.com/antarctic-summer-sea-ice-is-at-record-lows-heres-how-it-will-harm-the-planet-and-us-256104

MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: Pop, soda or coke? The fizzy history behind America’s favorite linguistic debate

    Source: The Conversation – USA – By Valerie M. Fridland, Professor of Linguistics, University of Nevada, Reno

    ‘I’ll have a coke – no, not Coca-Cola, Sprite.’ Justin Sullivan/Getty Images

    With burgers sizzling and classic rock thumping, many Americans revel in summer cookouts – at least until that wayward cousin asks for a “pop” in soda country, or even worse, a “coke” when they actually want a Sprite.

    Few American linguistic debates have bubbled quite as long and effervescently as the one over whether a generic soft drink should be called a soda, pop or coke.

The word you use generally boils down to where you’re from: Midwesterners enjoy a good pop, while soda is tops in the North and far West. Southerners, long the cultural mavericks, don’t bat an eyelash asking for coke – lowercase – before homing in on exactly the type they want: perhaps a root beer or a Coke, uppercase.

    As a linguist who studies American dialects, I’m less interested in this regional divide and far more fascinated by the unexpected history behind how a fizzy “health” drink from the early 1800s spawned the modern soft drink’s many names and iterations.

    Bubbles, anyone?

    Foods and drinks with wellness benefits might seem like a modern phenomenon, but the urge to create drinks with medicinal properties inspired what might be called a soda revolution in the 1800s.

    An 1878 engraving of a soda fountain.
    Smith Collection/Gado via Getty Images

    The process of carbonating water was first discovered in the late 1700s. By the early 1800s, this carbonated water had become popular as a health drink and was often referred to as “soda water.” The word “soda” likely came from “sodium,” since these drinks often contained salts, which were then believed to have healing properties.

Given its alleged curative effects for health issues such as indigestion, pharmacists sold soda water at soda fountains, innovative devices that created carbonated water to be sold by the glass. A chemistry professor, Benjamin Silliman, set up the first such device in a drugstore in New Haven, Connecticut, in 1806. Its eventual success inspired a boom of soda fountains in drugstores and health spas.

    By the mid-1800s, pharmacists were creating unique root-, fruit- and herb-infused concoctions, such as sassafras-based root beer, at their soda fountains, often marketing them as cures for everything from fatigue to foul moods.

    These flavored, sweetened versions gave rise to the linking of the word “soda” with a sweetened carbonated beverage, as opposed to simple, carbonated water.

Seltzer – today’s popular term for such sparkling water – was around, too. But it was used only for the naturally carbonated mineral water from the German town of Nieder-Selters. Unlike Perrier, which is similarly sourced from a specific spring in France, seltzer made the leap to becoming a generic term for fizzy water.

    Many late-19th-century and early 20th-century drugstores contained soda fountains – a nod to the original belief that the sugary, bubbly drink possessed medicinal qualities.
    Hall of Electrical History Foundation/Corbis via Getty Images

    Regional naming patterns

    So how did “soda” come to be called so many different things in different places?

    It all stems from a mix of economic enterprise and linguistic ingenuity.

    The popularity of “soda” in the Northeast likely reflects the soda fountain’s longer history in the region. Since a lot of Americans living in the Northeast migrated to California in the mid-to-late 1800s, the name likely traveled west with them.

As for the Midwestern preference for “pop” – well, the earliest American use of the term for a sparkling beverage appeared in the 1840s in the name of a flavored version called “ginger pop.” Such ginger-flavored pop, though, was around in Britain by 1816, when it first appeared in print in a Newcastle songbook. The “pop” seems to be onomatopoeic for the noise made when the cork was released from the bottle before drinking.

    A jingle for Faygo touts the company’s ‘red pop.’

    Linguists don’t fully know why “pop” became so popular in the Midwest. But one theory links it to a Michigan bottling company, Feigenson Brothers Bottling Works – today known as Faygo Beverages – that used “pop” in the name of the sodas they marketed and sold. Another theory suggests that because bottles were more common in the region, soda drinkers were more likely to hear the “pop” sound than in the Northeast, where soda fountains reigned.

    As for using coke generically, the first Coca-Cola was served in 1886 by Dr. John Pemberton, a pharmacist at Jacobs’ Pharmacy in Atlanta and the founder of the company. In the 1900s, the Coca-Cola company tried to stamp out the use of “Coke” for “Coca-Cola.” But that ship had already sailed. Since Coca-Cola originated and was overwhelmingly popular in the South, its generic use grew out of the fact that people almost always asked for “Coke.”

    No alcohol means not ‘hard’ but ‘soft.’
    Nostalgic Collections/eBay

    As with Jell-O, Kleenex, Band-Aids and seltzer, it became a generic term.

    What’s soft about it?

    Speaking of soft drinks, what’s up with that term?

    It was originally used to distinguish all nonalcoholic drinks from “hard drinks,” or beverages containing spirits.

    Interestingly, the original Coca-Cola formula included wine – resembling a type of alcoholic “health” drink popular overseas, Vin Mariani. But Pemberton went on to develop a “soft” version a few years later to be sold as a medicinal drink.

    Due to the growing popularity of soda water concoctions, eventually “soft drink” came to mean only such sweetened carbonated beverages, a linguistic testament to America’s enduring love affair with sugar and bubbles.

With the average American guzzling almost 40 gallons per year, you can call it whatever you want. Just don’t call it healthy.

    Valerie M. Fridland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Pop, soda or coke? The fizzy history behind America’s favorite linguistic debate – https://theconversation.com/pop-soda-or-coke-the-fizzy-history-behind-americas-favorite-linguistic-debate-259114

    MIL OSI – Global Reports

  • MIL-OSI Global: In LGBTQ+ storybook case, Supreme Court handed a win to parental rights, raising tough questions for educators

    Source: The Conversation – USA – By Charles J. Russo, Joseph Panzer Chair in Education and Research Professor of Law, University of Dayton

    The parents who brought the case had requested that their children be excused when books with LGBTQ+ characters were used in class. SDI Productions/E+ via Getty Images

    The Supreme Court tends to save its blockbuster orders for the last day of the term – and 2025 was no exception.

    Among the important decisions handed down June 27, 2025, was Mahmoud v. Taylor – a case of particular interest to me, because I teach education law. Mahmoud, I believe, may become one of the court’s most consequential rulings on parental rights.

    An interfaith coalition of Muslim, Orthodox Christian and Catholic parents in Montgomery County, Maryland – including Tamer Mahmoud, for whom the case is named – questioned the school board’s refusal to allow them to opt their young children out of lessons using picture books with LGBTQ+ characters. Ruling in favor of the parents, the court found that the board violated their First Amendment right to the free exercise of religion by requiring their children to sit through lessons with materials inconsistent with their faiths.

    Case history

The parents in Mahmoud challenged the use of certain storybooks that the board had approved for use in preschool and elementary school. “Pride Puppy!” for example – a book the schools later removed – portrays a family whose pet gets lost at an LGBTQ+ Pride parade, with each page devoted to a letter of the alphabet. The book’s “search and find” list of words directs readers to look for terms in the pictures, including “(drag) queen” and “king,” “leather” and “lip ring.” Other materials included stories about same-sex marriage, a transgender child, and nonbinary bathroom signs.

    Initially, school administrators agreed to allow opt-outs for students whose parents objected to the materials. A day later, however, educators changed their minds. School officials cited concerns about absenteeism, the feasibility of accommodating opt-out requests, and a desire to avoid stigmatizing LGBTQ+ students or families.

    In August 2023, a federal trial court rejected the parents’ claim that officials had violated their fundamental due process right to direct the care, custody and education of their children. The following year, the U.S. Court of Appeals for the 4th Circuit affirmed in favor of the board, finding that officials did not violate the parents’ rights to the free exercise of their religious beliefs, as protected by the First Amendment.

    A group of parents in Montgomery County, Maryland, protest the lack of opt-outs on July 20, 2023.
    Celal Gunes/Anadolu Agency via Getty Images

    On appeal, a 6-3 Supreme Court reversed in favor of the parents. Justice Samuel Alito, who authored the court’s opinion, was joined by Chief Justice John Roberts, plus Justices Clarence Thomas, Neil Gorsuch, Brett Kavanaugh and Amy Coney Barrett.

    Supreme Court

    In brief, the court held that by denying the parental requests to opt their children out of instruction inconsistent with their beliefs, school officials violated their First Amendment right to the free exercise of religion.

    Alito largely grounded the court’s rationale in a dispute from 1925, Pierce v. Society of Sisters of the Holy Name of Jesus and Mary, and even more heavily on 1972’s Wisconsin v. Yoder. Both cases recognize the primacy of parental rights to direct the education of their children. According to Pierce’s famous dictum, “the child is not the mere creature of the state; those who nurture him and direct his destiny have the right, coupled with the high duty, to recognize and prepare him for additional obligations.”

    In Yoder, Amish parents – an Anabaptist Christian community that avoids using many modern technologies – objected to sending their children to school after eighth grade because this would have violated their religious beliefs. The justices unanimously agreed with the parents that their children received all of the education they needed in their communities. The justices added that requiring the children to attend high school would have violated the parents’ rights to direct their children’s religious upbringing.

    Accordingly, the court acknowledged that the parental right “to guide the religious future and education of their children” was “established beyond debate.”

    Similarly, in Mahmoud the court declared that “the Board’s introduction of the ‘LGBTQ+-inclusive’ storybooks, along with its decision to withhold opt-outs, places an unconstitutional burden on the parents’ rights to the free exercise of their religion.”

    Thomas agreed fully with the court, yet wrote a separate concurrence, which emphasized “an important implication of this decision for schools across the country.” Citing Yoder, Thomas contended that rather than support inclusion, the board’s policy “imposes conformity with a view that undermines parents’ religious beliefs, and thus interferes with the parents’ right to ‘direct the religious upbringing of their children.’”

    Justice Sonia Sotomayor’s dissent, joined by Justices Elena Kagan and Ketanji Brown Jackson, feared “the result will be chaos for this Nation’s public schools. Requiring schools to provide advance notice and the chance to opt out of every lesson plan or story time that might implicate a parent’s religious beliefs will impose impossible administrative burdens on schools.”

    Supporters of LGBTQ+ rights demonstrate outside the U.S. Supreme Court during oral arguments in Mahmoud v. Taylor on April 22, 2025.
    Oliver Contreras/AFP via Getty Images

    She maintained that “simply being exposed to beliefs contrary to your own” does not violate a person’s free exercise rights. Insulating children from different ideas, she wrote, denies them of an experience that is crucial for democracy: “practice living in our multicultural society.”

    Implications

    After the decision was handed down, Montgomery County’s Board of Education issued a statement promising to “analyze the Supreme Court decision and develop next steps in alignment with today’s decision, and as importantly, our values.”

    Mahmoud raises challenging questions about the scope or reach of how far parents can question curricular content.

    On the one hand, parents should not be able to micromanage curricular content via the “heckler’s veto,” because this can lead to larger issues. Moreover, while Mahmoud concerns religious rights, what happens if parents question teachings based on another type of sincerely held belief – discussing war if they are pacifist, for example, or capitalism if they are socialists? While Mahmoud dealt with free-exercise rights, it may open the door to other types of First Amendment challenges from parents wishing to exempt their children from lessons.

    On the other hand, Mahmoud highlights the need to take legitimate parental concerns into consideration. While educators typically control instruction, how can they be respectful of parents’ rights as primary caregivers of their children when conflicts arise?

    Mahmoud may go a long way in defining parents’ free-exercise rights in public schools. Still, such disputes are likely far from over in America’s increasingly diverse religious culture.

    Charles J. Russo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. In LGBTQ+ storybook case, Supreme Court handed a win to parental rights, raising tough questions for educators – https://theconversation.com/in-lgbtq-storybook-case-supreme-court-handed-a-win-to-parental-rights-raising-tough-questions-for-educators-260064

    MIL OSI – Global Reports

  • MIL-OSI Global: Invasive carp threaten the Great Lakes − and reveal a surprising twist in national politics

    Source: The Conversation – USA – By Mike Shriberg, Professor of Practice & Engagement, School for Environment & Sustainability, University of Michigan

    Invasive Asian carp are spreading up the Mississippi River system and already clog the Illinois River. AP Photo/John Flesher

In his second term, President Donald Trump has not taken many actions that draw near-universal praise from across the political spectrum. But there is at least one such political anomaly, and it illustrates the broad appeal of environmental protection and conservation projects – particularly when they concern an ecosystem of vital importance to millions of Americans.

    In May 2025, Trump issued a presidential memorandum supporting the construction of a physical barrier that is key to keeping invasive carp out of the Great Lakes. These fish have made their way up the Mississippi River system and could have dire ecological consequences if they enter the Great Lakes.

    It was not a given that Trump would back this project, which had long been supported by environmental and conservation organizations. But two very different strategies from two Democratic governors – both potential presidential candidates in 2028 – reflected the importance of the Great Lakes to America.

    As a water policy and politics scholar focused on the Great Lakes, I see this development not only as an environmental and conservation milestone, but also a potential pathway for more political unity in the U.S.

    A feared invasion

    Perhaps nothing alarms Great Lakes ecologists more than the potential for invasive carp from Asia to establish a breeding population in the Great Lakes. These fish were intentionally introduced in the U.S. Southeast by private fish farm and wastewater treatment operators as a means to control algae in aquaculture and sewage treatment ponds. Sometime in the 1990s, the fish escaped from those ponds and moved rapidly up the Mississippi River system, including into the Illinois River, which connects to the Great Lakes.

    Sometimes said to “breed like mosquitoes and eat like hogs,” these fish can consume up to 40% of their body weight each day, outcompeting many native species and literally sucking up other species and food sources.

    Studies of Lake Erie, for example, predict that if the carp enter and thrive, they could make up approximately one-third of the fish biomass of the entire lake within 20 years, replacing popular sportfishing species such as walleye and other ecologically and economically important species.

    Invasive carp are generally not eaten in the U.S. and are not desirable for sportfishing. In fact, silver carp have a propensity to jump up to 10 feet out of the water when startled by a boat motor. That can make parts of the Illinois River, which is packed with the invasive fish, almost impossible to fish or even maneuver a boat.

    Look out! Silver carp fly out of the water, obstructing boats and hitting people trying to enjoy a river in Indiana.

    The Brandon Road Lock and Dam solution

    Originally, the Great Lakes and the Mississippi River were not connected to each other. But in 1900, the city of Chicago connected them to avoid sending its sewage into Lake Michigan, from which the city draws its drinking water.

    The most complete way to block the carp from invading the Great Lakes would be to undo that connection – but that would recreate sewage and flooding issues for Chicago, or require other expensive infrastructure upgrades. The more practical, short-term alternative is to modify the historic Brandon Road Lock and Dam in Joliet, Illinois, by adding several obstacles that together would block the carp from swimming farther upriver toward the Great Lakes.

    The barrier, estimated to cost US$1.15 billion, was authorized by Congress in 2020 and 2022 after many years of intense planning and negotiations. For the first phase of construction, the project received $226 million in federal money from the Bipartisan Infrastructure Law to complement $114 million in state funding – $64 million from Michigan and $50 million from Illinois.

    On the first day of Trump’s second term, however, he paused a wide swath of federal funding, including funding from the Bipartisan Infrastructure Law. And that’s when two different political strategies emerged.

    A brief documentary explains the construction of a connection between the Great Lakes and the Mississippi River basin.

    Pritzker vs. Whitmer vs. Trump

    Illinois, a state that has voted for the Democratic candidate in every presidential election since 1992, has the most financially at stake in the Brandon Road project because the project requires the state to acquire land and operate the barrier. When Trump issued his order, Illinois Gov. JB Pritzker, a Democrat, postponed the purchase of a key piece of land, blaming the “Trump Administration’s lack of clarity and commitment” to the project. Pritzker essentially dared Trump to be the reason for the collapse of the Great Lakes ecosystem and fisheries.

    Another Democrat, Gov. Gretchen Whitmer of Michigan, a swing state with the most at stake economically and ecologically if these carp species enter the Great Lakes, took a very different approach. She went to the White House to talk with Trump about invasive carp and other issues. She defended her nonconfrontational approach to critics, though she also hid her face from cameras when Trump surprised her with an Oval Office press conference. When Trump visited Michigan, she stood beside him as they praised each other.

    When Trump released the federal funding in early May, Pritzker kept up his adversarial language, saying he was “glad that the Trump administration heard our calls … and decided to finally meet their obligation.” Whitmer stayed more conciliatory, calling the funding decision a “huge win that will protect our Great Lakes and secure our economy.” She said she was “grateful to the president for his commitment.”

    Michigan Gov. Gretchen Whitmer greets President Donald Trump as he arrives in her state in late April 2025.
    AP Photo/Alex Brandon

    Why unity on carp?

    Whether coordinated or not, the net result of Pritzker’s and Whitmer’s actions drew praise from both sides of the aisle but was little noticed nationally.

    Trump’s support for the project was a rare moment of political unity and an extremely unusual example of leading Democrats being on the same page as Trump. I attribute this surprising outcome to two key factors.

    First, the Great Lakes region holds disproportionate power in presidential elections. Michigan, Wisconsin and Pennsylvania have backed the eventual winner in every presidential race for the past 20 years. This swing state power has been used by advocates and state political leaders to drive funding for Great Lakes protection for many years.

Second, the Great Lakes are a uniting force in the region. According to polling from the International Joint Commission, the binational body charged with overseeing waterways that cross the U.S.-Canada border, there is “nearly unanimous support (96%) for the importance of government investment in Great Lakes protections” from residents of the region.

    There aren’t any other issues with such high voter resonance, so politicians want to be sure Great Lakes voters are happy. For example, Vice President JD Vance has been particularly vocal about the Great Lakes. And Great Lakes restoration funding was one of the few things in the presidential budget that Democrats and Republicans agreed on.

    Both Pritzker and Whitmer likely had state-based and national motivations in mind and big aspirations at stake.

    Their combined effort has put the project back on track: As of May 12, 2025, Pritzker authorized Illinois to sign the land-purchase agreement he had paused back in February.

    And perhaps the governors have identified a new area for unity in a divided United States: Conservation and environmental issues have broad public support, particularly when they involve iconic natural resources, shared values and popular outdoor pursuits such as fishing and boating. Even when political strategies diverge, the results can bring bipartisan satisfaction.

    Mike Shriberg was previously the Great Lakes Regional Executive Director of the National Wildlife Federation, which entailed being a co-chair (and, for part of the time, Director) of the Healing Our Waters – Great Lakes Coalition.

    ref. Invasive carp threaten the Great Lakes − and reveal a surprising twist in national politics – https://theconversation.com/invasive-carp-threaten-the-great-lakes-and-reveal-a-surprising-twist-in-national-politics-257707

    MIL OSI – Global Reports

  • MIL-OSI Global: 1 in 4 Americans reject evolution, a century after the Scopes monkey trial spotlighted the clash between science and religion

    Source: The Conversation – USA – By William Trollinger, Professor of History, University of Dayton

    The 1925 Scopes trial, in which a Dayton, Tennessee, teacher was charged with violating state law by teaching biological evolution, was one of the earliest and most iconic conflicts in America’s ongoing culture war.

    Charles Darwin’s “Origin of Species,” published in 1859, and subsequent scientific research made the case that humans and other animals evolved from earlier species over millions of years. Many late-19th-century American Protestants had little problem accommodating Darwin’s ideas – which became mainstream biology – with their religious commitments.

    But that was not the case with all Christians, especially conservative evangelicals, who held that the Bible is inerrant – without error – and factually accurate in all that it has to say, including when it speaks on history and science.

    The Scopes trial occurred July 10-21, 1925. Between 150 and 200 reporters swooped into the small town. Broadcast on Chicago’s WGN, it was the first trial to be aired live over radio in the United States.

One hundred years after the trial, as we have documented in our scholarly work, the culture war over evolution and creationism remains strong – and yet, when it comes to creationism, much has also changed.

    The trial

    In May 1919, over 6,000 conservative Protestants gathered in Philadelphia to create, under the leadership of Baptist firebrand William Bell Riley, the World’s Christian Fundamentals Association, or WCFA.

    Holding to biblical inerrancy, these “fundamentalists” believed in the creation account detailed in chapter 1 of Genesis, in which God brought all life into being in six days. But most of these fundamentalists also accepted mainstream geology, which held that the Earth was millions of years old. Squaring a literal understanding of Genesis with an old Earth, they embraced either the “day-age theory” – that each Genesis day was actually a long period of time – or the “gap theory,” in which there was a huge gap of time before the six 24-hour days of creation.

    This nascent fundamentalist movement initiated a campaign to pressure state legislatures to prohibit public schools from teaching evolution. One of these states was Tennessee, which in 1925 passed the Butler Act. This law made it illegal for public schoolteachers “to teach any theory that denies the story of divine creation of man as taught in the Bible, and to teach instead that man has descended from a lower order of animals.”

    The American Civil Liberties Union persuaded John Thomas Scopes, a young science teacher in Dayton, Tennessee, to challenge the law in court. The WCFA sprang into action, successfully persuading William Jennings Bryan – populist politician and outspoken fundamentalist – to assist the prosecution. In response, the ACLU hired famous attorney Clarence Darrow to serve on the defense team.

    A huge crowd attending the Scopes trial.
    Bettmann/Contributor via Getty Images

    When the trial started, Dayton civic leaders were thrilled with the opportunity to boost their town. Outside the courtroom there was a carnivalesque atmosphere, with musicians, preachers, concession stands and even monkeys.

    Inside the courtroom, the trial became a verbal duel between Bryan and Darrow regarding science and religion. But as the judge narrowed the proceedings to whether or not Scopes violated the law – a point that the defense readily admitted – it seemed clear that Scopes would be found guilty. Many of the reporters thus went home.

    But the trial’s most memorable episode was yet to come. On July 20, Darrow successfully provoked Bryan to take the witness stand as a Bible expert. Due to the huge crowd and suffocating heat, the judge moved the trial outdoors.

    The 3,000 or so spectators witnessed Darrow’s interrogation of Bryan, which was primarily intended to make Bryan and fundamentalism appear foolish and ignorant. Most significant, Darrow’s questions revealed that, despite Bryan’s assertion that he read the Bible literally, Bryan actually understood the six days of Genesis not as 24-hour days, but as six long and indeterminate periods of time.

    American lawyer and politician William Jennings Bryan during the Scopes trial in Dayton, Tenn.
    Hulton Archive/Getty Images

    The very next day, the jury found Scopes guilty and fined him US$100. Riley and the fundamentalists cheered the verdict as a triumph for the Bible and morality.

    The fundamentalists and ‘The Genesis Flood’

    But very soon that sense of triumph faded, partly because of news stories that portrayed fundamentalists as ignorant rural bigots. In one such example, a prominent journalist, H. L. Mencken, wrote in a Baltimore Sun column that the Scopes trial “serves notice on the country that Neanderthal man is organizing in these forlorn backwaters of the land.”

    The media ridicule encouraged many scholars and journalists to conclude that creationism and fundamentalism would soon disappear from American culture. But that prediction did not come to pass.

    Instead, fundamentalists, including WCFA leader Riley, seemed all the more determined to redouble their efforts at the grassroots level.

    But as Darrow’s interrogation of Bryan made obvious, it was not easy to square a literal reading of the Bible – including the six-day creation outlined in Genesis – with a scientific belief in an old Earth. What fundamentalists needed was a science that supported the idea of a young Earth.

    In their 1961 book, “The Genesis Flood: The Biblical Record and its Scientific Implications,” fundamentalists John Whitcomb, a theologian, and Henry Morris, a hydraulic engineer, provided just such a scientific explanation. Making use, without attribution, of the writings of Seventh-day Adventist geologist George McCready Price, Whitcomb and Morris made the case that Noah’s global flood lasted one year and created the geological strata and mountain ranges that made the Earth seem ancient.

    “The Genesis Flood” and its version of flood geology remain ubiquitous among fundamentalists and other conservative Protestants.

    Young Earth creationism

    Today, opinion polls reveal that roughly one-quarter of all Americans are adherents of this newer strand of creationism, which rejects both mainstream geology as well as mainstream biology.

    Replica of Noah’s Ark at the Ark Encounter, near Williamstown, Ky.
    Ron Buskirk/UCG/Universal Images Group via Getty Images

    This popular embrace of young Earth creationism also explains the success of Answers in Genesis – AiG – which is the world’s largest creationist organization, with a website that attracts millions of visitors every year.

    AiG’s tourist sites – the Creation Museum in Petersburg, Kentucky, and the Ark Encounter in Williamstown, Kentucky – have attracted millions of visitors since their opening in 2007 and 2016. Additional AiG sites are planned for Branson, Missouri, and Pigeon Forge, Tennessee.

    Presented as a replica of Noah’s Ark, the Ark Encounter is a gigantic structure – 510 feet long, 85 feet wide, 51 feet high. It includes representations of animal cages as well as plush living quarters for the eight human beings who, according to Genesis chapters 6-8, survived the global flood. Hundreds of placards in the Ark make the case for a young Earth and a global flood that created the geological strata and formations we see today.

    Ark Encounter has been the beneficiary of millions of dollars from state and local governments.

    Besides AiG tourist sites, there is also an ever-expanding network of fundamentalist schools and homeschools that present young Earth creationism as true science. These schools use textbooks from publishers such as Abeka Books, Accelerated Christian Education and Bob Jones University Press.

    The Scopes trial involved what could and could not be taught in public schools regarding creation and evolution. Today, this discussion also involves private schools, given that there are now at least 15 states that have universal private school choice programs, in which families can use taxpayer-funded education money to pay for private schooling and homeschooling.

    In 1921, William Bell Riley admonished his opponents that they should “cease from shoveling in dirt on living men,” for the fundamentalists “refuse to be buried.” A century later, the funeral for fundamentalism and creationism seems a long way off.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. 1 in 4 Americans reject evolution, a century after the Scopes monkey trial spotlighted the clash between science and religion – https://theconversation.com/1-in-4-americans-reject-evolution-a-century-after-the-scopes-monkey-trial-spotlighted-the-clash-between-science-and-religion-258163

    MIL OSI – Global Reports

  • MIL-OSI Global: Bill Moyers’ journalism strengthened democracy by connecting Americans to ideas and each other, in a long and extraordinary career

    Source: The Conversation – USA – By Julie Leininger Pycior, Professor of History Emeritus, Manhattan University

    “Bill Moyers? He’s spectacular!” George Clooney said – and no wonder.

    I mentioned this legendary television journalist to the actor and filmmaker after Clooney emerged from the Broadway theater where he had just been portraying another news icon: Edward R. Murrow. Or as the Museum of Broadcast Communications put it in a tribute to Moyers, he was “one of the few broadcast journalists who might be said to approach the stature of Edward R. Murrow. If Murrow founded broadcast journalism, Moyers significantly extended its traditions.”

    Moyers, who died at 91 on June 26, 2025, was among the most acclaimed broadcast journalists of the 20th century. He’s known for TV news shows that exposed the role of big money in politics and episodes that drew attention to unsung defenders of democracy, such as community organizer Ernesto Cortés Jr.

    Earlier in his life, Moyers served in significant roles in the Kennedy and Johnson administrations, but his fame comes from his journalism.

    Making a connection

    Despite his prominence, Moyers was the same down-to-earth guy in person as he seemed to be on the screen. In 1986, he was commanding a television audience of millions, and I was a historian at home with a preschooler, teaching the occasional college course in a dismal job market. Seeing that Moyers would be speaking at the conference on President Lyndon B. Johnson where I would be giving a paper, I wrote to him.

    To my utter amazement, he replied and then showed up to hear my paper, on Johnson’s experiences as a young principal of the “Mexican” school in Cotulla, Texas, where he championed his students but also forged links to segregationists. Cotulla was “seminal” to LBJ’s development, Moyers said. In 1993, he recommended me for a grant that helped me finish a book: “LBJ and Mexican Americans: The Paradox of Power.”

    A few years later, he asked me to head up a project researching the documents related to his time in Johnson’s administration. His memoir of the Johnson years never materialized. Instead, I edited the bestselling “Moyers on America: A Journalist and His Times.”

    Part of what always impressed me about Moyers was his belief that what matters is not how close you are to power, but how close you are to reality.

    ‘Amazing Grace’

    Moyers didn’t just dwell on politics and policy as a journalist. He also delved into the meaning of creativity and the life of the mind. Many of his most moving interviews spotlighted scientists, novelists and other exceptional people.

    He was also arguably among the best reporters on the religion beat. It was not always the main focus of his work, nor what first comes to mind for those familiar with his legacy, but he was a lifelong spiritual seeker.

    This is hardly surprising: Moyers had degrees in both divinity and journalism. As a young man, he briefly served as a Baptist minister.

    He once told me that his favorite of the many programs that he produced was the PBS documentary “Amazing Grace.” It featured inspiring renditions of this popular Christian hymn as performed by country legend Johnny Cash, folk icon Judy Collins, opera diva Jessye Norman and other musical geniuses. As they share with Moyers their personal connections to this song of redemption, he draws viewers into the stirring saga of its creator, John Newton: a slave trader who became an abolitionist through “amazing grace.”

    Bill Moyers interviews Judy Collins about singing ‘Amazing Grace,’ following the production of his PBS special about the hymn.

    Life’s ultimate questions

    This appreciation of the ineffable clearly informed Moyers’ blockbuster TV series exploring life’s ultimate questions, “Joseph Campbell and the Power of Myth.”

    His interviews with Campbell, a comparative mythologist, evoked moments that made time stand still. They reminded me of Thomas Merton, the American monk and poet, who wrote, “Everything is emptiness and everything is compassion,” on beholding the immense Polonnaruwa Buddhas of Sri Lanka.

    To my surprise, Moyers knew about this Trappist monk, telling me, “I always wished that I could have interviewed Merton,” who died in 1968.

    It turned out that Moyers had been introduced to Merton by Sargent Shriver, founding director of the Peace Corps, where Moyers was a founding organizer and the deputy director.

    Mentored by LBJ

    Moyers characterized his Peace Corps years as the most rewarding of his life. When Johnson, his mentor, became president, he asked Moyers to join the White House staff. Moyers turned down the offer, so Johnson made it a presidential command.

    The wunderkind – Moyers was 29 years old in 1963, when Johnson was sworn in after President John F. Kennedy’s assassination – coordinated the White House task forces that created the largest number of legislative proposals in American history. Among the programs and landmark reforms established and passed during the Johnson administration were Medicare and Medicaid, a landmark immigration law, the Freedom of Information Act, the Public Broadcasting Act and two historic civil rights laws.

    Johnson’s war on poverty, in addition, introduced several path-breaking programs, such as Head Start.

    Moyers served as one of Johnson’s speechwriters and was a top official in Johnson’s 1964 presidential campaign. The following year, the Johnson administration began escalating U.S. involvement in the Vietnam War and Johnson named a new press secretary: Bill Moyers. Again, the young man tried to decline, but the president prevailed.

    As Moyers had feared, he could not serve two masters – journalists and his boss – especially as the administration’s Vietnam War policies became increasingly unpopular.

    President Lyndon B. Johnson confers with Bill Moyers, his press secretary, in 1965.
    Corbis Historical via Getty Images

    Appreciating the world around you

    Moyers left the Johnson administration in 1967, turning to journalism. He became the publisher of Newsday, a Long Island, New York, newspaper, before becoming a producer and commentator at CBS News. His commentaries reached tens of millions of viewers, but the network refused to provide a regular time slot for his documentaries. Having worked at PBS before, he decamped there for good in 1987.

    Moyers’ programs won many journalism awards, including over 30 Emmys, along with the Lifetime Emmy for news and documentary productions.

    He helped millions of Americans appreciate the world around them. As he reflected in 2023, in one of the last interviews he gave, to PBS journalist Judy Woodruff at the Library of Congress: “Everything is linked, and if you can find that nerve that connects us to other things and other places and other ideas – and television should be doing it all the time – we’d be a better democracy.”

    Judy Woodruff interviews Bill Moyers about his life’s work in government and the media, including his contributions to the launch of PBS, at the Library of Congress.

    Today, with disinformation metastasizing, professional journalists losing their jobs by the thousands and some newspaper owners muzzling their editorial staff, thoughtful explanations can lose out. That means Americans can lose out.

    “It takes time, commitment” to dig below the surface and discover the deeper meaning of people’s lives, Moyers noted. He sought to understand, for example, why so many folks in his own hometown of Marshall, Texas, have become much more suspicious – resentful, even – of outsiders than when he gave these folks voice in his poignant, prize-winning 1984 program “Marshall, Texas; Marshall, Texas.”

    In this era of growing threats to democracy, what can a young person do who aspires to follow in Bill Moyers’ footsteps – whether in journalism or public life?

    Woodruff asked Moyers that question, to which he responded: “You can’t quit. You can’t get out of the boat! Find a place that gives you a sense of being, gives you a sense of mission, gives you a sense of participation.”

    Today, with the future of journalism – and of democracy itself – at stake, I think it would help everyone to take to heart the insights of this late, great American journalist.

    Julie Leininger Pycior edited the book “Moyers on America: A Journalist and His Times.” She also was hired by Moyers to direct the 18-month “LBJ Years” research project.

    In addition, she served as an unpaid, informal historical adviser for some of his public television programs.

    ref. Bill Moyers’ journalism strengthened democracy by connecting Americans to ideas and each other, in a long and extraordinary career – https://theconversation.com/bill-moyers-journalism-strengthened-democracy-by-connecting-americans-to-ideas-and-each-other-in-a-long-and-extraordinary-career-260047

    MIL OSI – Global Reports