Category: Analysis

  • MIL-OSI Submissions: Self determination theory: how to use it to boost wellbeing

    Source: The Conversation – UK – By Mark Fabian, Reader of Public Policy, University of Warwick

    Self-determination theory (SDT) is one of the best-established and most powerful approaches to wellbeing in the psychological research literature. Yet it doesn’t seem to have broken through into popular discussions about wellbeing, happiness and self-help. That’s a shame, because it has so much to contribute.

    A foundational idea in self-determination theory is that we have three basic psychological needs: for autonomy, competence and relatedness.

    Autonomy is the need to be in control of your own life rather than being controlled by others. Competence is the need to feel skilful at the tasks one values or needs to thrive. Relatedness refers to feeling loved and cared for, and a sense of belonging to a group that provides social support.

    If our basic psychological needs are met, then we are more likely to experience wellbeing. Symptoms include emotions such as joy, vitality and excitement because we’re doing the things we love, for example. We’ll probably have a sense of meaning and purpose because we live within a community whose culture we value.


    Conversely, when our basic needs are thwarted we should see symptoms of illbeing. Anger, frustration and boredom grow when our behaviour is controlled by parents, bureaucrats, bosses or other forces that press our energies towards their ends instead of ours.

    Depression is likely when our competence is overwhelmed by failure. And anxiety is often a social emotion that arises when we’re worried about whether our group cares for us.

    So we should cultivate our basic psychological needs – but how? You need to discover what you want to do with your life, what skills to become competent in, who to relate to and what communities to contribute to.

    Using motivation to find your way

    Here’s where the second foundational idea in SDT can be super helpful, as I explain in my new book, Beyond Happy: How to rethink happiness and find fulfilment. SDT proposes a motivational spectrum running from extrinsic at one end to intrinsic at the other. Finding out where you are on the spectrum for a certain activity or task can help you work out how to be happier.

    The more extrinsically motivated something is, the more self-regulation it requires. For example, when refugees flee their homes due to encroaching war, there is often a large part of them that wants to stay. Willpower is required to act. In contrast, intrinsically motivated behaviour springs spontaneously from us. You don’t need willpower to get stuck into your hobbies.

    Each type of motivation comes with different emotional signals and deciphering them can help us find what values, behaviour and groups suit us.

    The spectrum of motivation according to self-determination theory.
    CC BY-NC

    “Identified” motivation, for example, sits between extrinsic and intrinsic motivation. It occurs when we value an activity but don’t inherently enjoy it. That’s why success in identified behaviour is usually met with a feeling of accomplishment or the warm and fuzzy feeling you get when you do the right thing, like going a bit out of your way to put your rubbish in a bin.

    In contrast, “introjected” motivation is where you value something contingent on the behaviour rather than the behaviour itself. Many of us loathe the gym, for example, but we want to be healthy. A child might not want to practise the cello, but they do want their parents’ approval.

    Because introjection is relatively extrinsic, it requires willpower, and probably a bit more of it than for identified behaviour. Completion of an introjected activity is often met with relief rather than accomplishment and little desire to keep going.

    Sometimes the rewards of introjected behaviour can make us unhappy. In teen dramas, for example, the protagonist often does something because they want to be popular, but when they win the approval of the cool kids they realise those kids are mean and lame.

    Why money, power and status won’t make you happy

    If that’s how you feel, you’ve found something inauthentic to you. Then there’s very little chance the introjected activity will lead to your wellbeing. In fact, SDT has identified some common extrinsic values. You’ll recognise them immediately: popularity, fame, status, power, wealth and success.

    They’re extrinsic because they’re not peculiar to you. If you get rich doing the thing you love, that’s great, but many of us never even think about what we love because we’re too busy thinking about how to get rich.

    Extrinsic pursuits are ultimately bad for our wellbeing because they’re all poor substitutes for basic psychological needs. When our autonomy is thwarted by strict parents or disciplinarian teachers, we crave power. When we don’t know what sort of life to build and thus what skills we need competence in, we adopt other people’s notions of success instead.

    Extrinsic pursuits often emerge from a wounded place and a defensive reaction. When we’re lonely or feel unloved for who we are, for example, we might compensate by seeking fame or popularity. We’ll start talking about our accomplishments on LinkedIn, for example.

    The problem is that the people this attracts don’t value you specifically, only your power, status or money. You sense that if you ever lost those things, you would lose these people too.

    SDT can help you learn to listen to your emotions and interpret your motivations instead, and use them to guide you towards the values, activities and people that are right for you.

    For example, if you feel joyful and fulfilled when you solve a complex puzzle, perhaps consider a career that involves that activity, such as law or engineering. If such puzzles feel like torture, that’s a signal too. Perhaps something more relational or intuitive, like social work, would work better.

    When you pursue things that are authentic to you it will nourish your sense of autonomy. You’ll build competence in those activities because they’re intrinsically motivated. And you’ll form deep relationships with the people you encounter because you genuinely like each other. Wellbeing will follow.

    Mark Fabian does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Self determination theory: how to use it to boost wellbeing – https://theconversation.com/self-determination-theory-how-to-use-it-to-boost-wellbeing-259829

    MIL OSI

  • MIL-OSI Submissions: Dune director Denis Villeneuve will helm the next Bond – but what will his 007 be like?

    Source: The Conversation – UK – By William Proctor, Associate Professor in Popular Culture, Bournemouth University


    The James Bond franchise has lain dormant for four years, since Daniel Craig’s swansong as 007, No Time to Die. A legal quarrel between Bond’s producers, Michael G. Wilson and Barbara Broccoli, and Amazon Studios resulted in a stalemate and production on a new Bond film has remained in limbo.

    Nevertheless, speculation has been rife about which actor will next play Ian Fleming’s super-spy (the latest actor to be associated with the role is former Spider-Man Tom Holland).

    When news surfaced in February 2025 that Amazon MGM (Amazon purchased MGM in 2021) had effectively become Bond’s new custodians, critics and audiences alike expressed concern – to put it lightly. Many feared that Jeff Bezos was more interested in stimulating Amazon Prime membership by driving multiple content streams through spin-offs and merchandising than protecting Fleming’s legacy.

    However, last week’s announcement that Denis Villeneuve has been appointed as the director of the 26th Bond film is a savvy move. It’s a declaration of intent that seeks to promote and market Amazon MGM as safe harbour for the Bond franchise.


    The announcement positions the next era of Bond as a prestigious exercise helmed by “a cinematic master”, not a journeyman director. Villeneuve was previously offered the opportunity to direct No Time to Die, but turned it down because of his commitment to the Dune films.

    By appointing Villeneuve, Amazon has managed to radically shift the public debate. Villeneuve is “much more than a technical director”, wrote Guardian film critic Peter Bradshaw. “He is an alpha-grade auteur in the same league as Christopher Nolan.”

    Other critics have pointed to his rare ability to “combine blockbuster momentum (and ticket sales) with the finer, more nuanced sensibilities of a filmmaker always concerned with slowing down, honing in on character and theme”.

    Although Sam Mendes, director of Skyfall (2012) and Spectre (2015), came with artistic status, Villeneuve is something different – a marquee name frequently described as an auteur.

    Villeneuve talks about his love for Bond.

    Since his transition from making mostly low-key independent films in his native Canada to his arrival in Hollywood with Prisoners (2013), starring Hugh Jackman and Jake Gyllenhaal, Villeneuve has amassed an impressively eclectic filmography.

    He has proven that he is as comfortable shooting realistic crime thrillers (Sicario, 2015) and surrealist cinema that David Lynch would be proud of (Enemy, 2013), as he is with science fiction (Arrival, 2016, Blade Runner 2049, 2017, and the Dune films, 2021 and 2024).

    Villeneuve’s Bond

    Although Sicario may be the closest in terms of genre to the Bond films, establishing Villeneuve as a director who can expertly shoot action sequences, it is nevertheless difficult at this stage to conceptualise what a Villeneuve Bond film might be like.

    Some critics have suggested that the director’s cinematic resume, eclectic as it is, might not bode well for Bond. The Hollywood Reporter’s film critic Benjamin Svetkey, for instance, worries that Villeneuve’s “lugubrious, meditative filmmaking” is sorely lacking in humour – which could be fatal for 007. “A certain amount of wit and winking is critical to the character,” he claims.

    It is early days for Amazon MGM and Villeneuve. As yet, there is reportedly no treatment, no script, no writer and – more pointedly – no actor appointed to the role. Whatever happens, the 26th Bond film is likely to be a hard reboot that wipes the slate clean (again) after the fate of 007 in No Time to Die.

    Villeneuve’s choice for Bond is unlikely to be as cartoonish as Pierce Brosnan’s iteration.

    Although Villeneuve has said that he intends to honour tradition and that Bond is “sacred territory” for him, Bond’s capacity for revision and regeneration has been key to the franchise’s longevity.

    As sociologists Tony Bennett and Janet Woollacott argue in their seminal study, Bond and Beyond, the figure of Bond has over the past six decades “been differently constructed at different moments,” with “different sets of ideological and cultural concerns”.

    So what kind of Bond film Villeneuve ends up directing largely depends on the story and whichever actor is anointed as the next James Bond. It is doubtful that audiences will expect a campy pantomime Bond like Roger Moore, or a Bond with an invisible car, like Pierce Brosnan in the cartoonish Die Another Day (2002). Villeneuve’s choice of Casino Royale as his favourite 007 may provide a clue. But it is also unlikely that the director will be satisfied with slavishly repeating the past.

    William Proctor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Dune director Denis Villeneuve will helm the next Bond – but what will his 007 be like? – https://theconversation.com/dune-director-denis-villeneuve-will-helm-the-next-bond-but-what-will-his-007-be-like-260140

    MIL OSI

  • MIL-OSI Submissions: Why frequent nightmares may shorten your life by years

    Source: The Conversation – UK – By Timothy Hearn, Senior Lecturer in Bioinformatics, Anglia Ruskin University


    Waking up from a nightmare can leave your heart pounding, but the effects may reach far beyond a restless night. Adults who suffered bad dreams every week were almost three times more likely to die before the age of 75 than people who rarely had them.

    This alarming conclusion – which is yet to be peer reviewed – comes from researchers who combined data from four large long-term studies in the US, following more than 4,000 people between the ages of 26 and 74. At the beginning, participants reported how often nightmares disrupted their sleep. Over the next 18 years, the researchers kept track of how many participants died prematurely – 227 in total.

    Even after considering common risk factors like age, sex, mental health, smoking and weight, people who had nightmares every week were still found to be nearly three times more likely to die prematurely – about the same risk as heavy smoking.

    The team also examined “epigenetic clocks” – chemical marks on DNA that act as biological mileage counters. People haunted by frequent nightmares were biologically older than their birth certificates suggested, across all three clocks used (DunedinPACE, GrimAge and PhenoAge).

    The science behind the silent scream

    Faster ageing accounted for about 39% of the link between nightmares and early death, implying that whatever is driving the bad dreams is simultaneously driving the body’s cells towards the finish line.

    How might a scream you never utter leave a mark on your genome? Nightmares happen during so-called rapid-eye-movement sleep when the brain is highly active but muscles are paralysed. The sudden surge of adrenaline, cortisol and other fight-or-flight chemicals can be as strong as anything experienced while awake. If that alarm bell rings night after night, the stress response may stay partially switched on throughout the day.

    Continuous stress takes its toll on the body. It triggers inflammation, raises blood pressure and speeds up the ageing process by wearing down the protective tips of our chromosomes.

    On top of that, being jolted awake by nightmares disrupts deep sleep, the crucial time when the body repairs itself and clears out waste at the cellular level. Together, these two effects – constant stress and poor sleep – may be the main reasons the body seems to age faster.

    Your brain clears out waste when you sleep.
    Teeradej/Shutterstock.com

    The idea that disturbing dreams foreshadow poor health is not entirely new. Earlier studies have shown that adults tormented by weekly nightmares are more likely to develop dementia and Parkinson’s disease, years before any daytime symptoms appear.

    Growing evidence suggests that the brain areas involved in dreaming are also those affected by brain diseases, so frequent nightmares might be an early warning sign of neurological problems.

    Nightmares are also surprisingly common. Roughly 5% of adults report at least one each week and another 12.5% experience them monthly.

    Because they are both frequent and treatable, the new findings elevate bad dreams from a spooky nuisance to a potential public health target. Cognitive behavioural therapy for insomnia, imagery-rehearsal therapy – where sufferers rewrite the ending of a recurrent nightmare while awake – and simple steps such as keeping bedrooms cool, dark and screen free have all been shown to curb nightmare frequency.

    Before jumping to conclusions, there are a few important things to keep in mind. The study used people’s own reports of their dreams, which can make it hard to tell the difference between a typical bad dream and a true nightmare. Also, most of the people in the study were white Americans, so the findings might not apply to everyone.

    And biological age was measured only once, so we cannot yet say whether treating nightmares slows the clock. Crucially, the work was presented as a conference abstract and has not yet navigated the gauntlet of peer review.

    Despite these limitations, the study has important strengths that make it worth taking seriously. The researchers used multiple groups of participants, followed them for many years and relied on official death records rather than self-reported data. This means we can’t simply dismiss the findings as a statistical fluke.

    If other research teams can replicate these results, doctors might start asking patients about their nightmares during routine check-ups – alongside taking blood pressure and checking cholesterol levels.

    Therapies that tame frightening dreams are inexpensive, non-invasive and already available. Scaling them could offer a rare chance to add years to life while improving the quality of the hours we spend asleep.

    Timothy Hearn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why frequent nightmares may shorten your life by years – https://theconversation.com/why-frequent-nightmares-may-shorten-your-life-by-years-260008

    MIL OSI

  • MIL-OSI Submissions: Where does the UK most need more public EV chargers?

    Source: The Conversation – UK – By Labib Azzouz, Research Associate in Transport and Energy Innovation, University of Oxford

    Electric vehicle chargers at a motorway service station in Grantham, England. Angus Reid/Shutterstock

    The automotive and EV industry has repeatedly insisted that the UK needs more electric vehicle (EV) chargers to help motorists make the switch from conventional fossil-fuel burning cars.

    The Labour government has announced £400 million to install EV chargers, mainly on streets in poorer residential neighbourhoods, in place of the Conservatives’ £950 million rapid charging fund that was directed at installing chargers in motorway service stations.

    Does it matter where these chargers are – and who pays to build them?

    The short answer is yes, it does matter. Our research conducted at motorway and local EV charging stations across England – including those located in residential areas, high streets and community centres – indicates that these two types of infrastructure serve distinct groups of users and fulfil different purposes.

    Suggesting that one can substitute for the other risks sending mixed signals to both the industry and the driving public.

    We found that motorway charging stations tend to cater to wealthier men, who are more likely to own premium EVs with long-range batteries and better performance. Many of these drivers have access to home chargers, so their use of public chargers is only for occasional, long-distance travel for business, leisure, or holidays – trips that require chargers along motorways.

    Convenience and charging speed are often more important than the price of public charging, particularly when the travel costs of these drivers are covered by their employers.

    Local public charging stations, on the other hand, serve more diverse groups. These include drivers from lower-income households who are more likely to own older and smaller EVs with shorter ranges. Access to home charging is often limited, especially for people living in flats or urban areas without driveways, garages or off-street parking.

    Not everyone can plug in at home.
    Andersen EV/Shutterstock

    Local chargers are also vital for taxi and delivery drivers who depend on their vehicles for work and make frequent short trips throughout the day. There are many professional drivers without access to workplace charging stations who need alternative local provision – something the Conservative government recognised in its 2022 EV charging strategy.

    Ultimately, the transition to EVs should take a balanced approach that carefully considers social equity, economic viability and environmental impact.

    Different locations serve different drivers

    Motorway charging stations are commercially attractive to private investors, such as energy companies, specialist charging providers and car manufacturers, despite their higher upfront costs and complex requirements.

    This is because service stations generate greater short-term revenue by setting premium prices, made possible by limited alternatives, high demand for rapid charging among long-distance travellers, and the willingness of EV drivers to pay for speed and convenience – unlike in more price-sensitive neighbourhood settings.

    Unsurprisingly, the government found that the rapid deployment of motorway chargers in recent years has been largely driven by the private sector. Our research highlighted that these revenues could be enhanced by a broader range of retail, dining and relaxation amenities, turning the time waiting for a car to charge into a more productive and pleasurable experience.

    Residential charging stations may not offer high profits per charge, but they typically require lower capital investment and benefit from consistent and predictable use. They are also suited to measures for reducing strain on the grid and balancing energy supply and demand.

    These measures include tariffs that make it cheaper to charge EVs during off-peak hours, or technology that allows cars to feed electricity stored in batteries back into the grid. These features make them appropriate for public funding, where return on investment is measured not just in profit but in value for the public.

    Considering that local EV charging serves those who do not have access to home charging and who drive for a living, the case for public funding is even stronger. These sorts of chargers make switching to an EV easier for different groups.

    For example, safe and carefully placed public chargers could help more women switch to EVs – although our research suggests that, while “careful placement” might refer to residential areas, it doesn’t necessarily mean on streets. Well-lit car parks and community destinations are sometimes considered safer options.

    Charging points outside a community centre in the Outer Hebrides, Scotland.
    AlanMorris/Shutterstock

    By helping EV drivers make frequent short trips, local chargers can also significantly reduce urban air pollution, emissions and noise, contributing to more liveable, healthier cities.

    That said, motorway charging stations and those near key transport corridors still play a crucial role in a comprehensive national network, and public funding may be required in more peripheral and rural areas of the UK where installations lag and commercial interest is limited.

    While long-distance trips are less frequent than short ones, they account for a disproportionately large share of energy use and emissions. Switching such trips to electric will be essential to reaching net zero goals.

    It seems reasonable to prioritise public investment in local EV charging infrastructure to support a fairer EV transition, but this should not be limited to on-street chargers. Investment is needed in residential and non-residential areas, public car parks, community centres and workplaces.

    Different types of EV charging are not interchangeable – all are needed to support the switch.


    Labib Azzouz has received funding from the UK Research and Innovation via the UK Energy Research Centre and Innovate UK as part of the Energy Superhub Oxford (ESO) project.

    Hannah Budnitz receives government funding from UK Research and Innovation grants via the Economic and Social Research Council and the Engineering and Physical Sciences Research Council. She has also previously received funding from Innovate UK and the Department for Transport.

    ref. Where does the UK most need more public EV chargers? – https://theconversation.com/where-does-the-uk-most-need-more-public-ev-chargers-259623

    MIL OSI

  • MIL-OSI Submissions: The Bear season 4: this meaty restaurant drama is still an enticing bingeable prospect

    Source: The Conversation – UK – By Jane Steventon, Course Leader, BA (Hons) Screenwriting; Deputy Course Leader & Senior Lecturer, BA (Hons) Film Production, University of Portsmouth

    Take a soupçon of identity crisis, a pinch of perfectionism and a scoop of burnout, mix thoroughly with a large measure of fraternal grief, sear over a hot grill and voilà! You have The Bear, a perfectly blended drama about a chef on the edge, driven by relentless ambition and exacting standards as he turns his family’s humble sandwich shop into a fine-dining restaurant.

    This intoxicating family drama was eaten up by critics and audiences alike in 2022, its first season garnering a rare perfect 100% score on Rotten Tomatoes, the subsequent two reaching scores of 99% and 89% respectively. It’s certainly a hard act to follow for season four.

    The first ten minutes of The Bear’s pilot episode thrillingly defined what was to come in high-octane style and scene-setting detail. The first season delivered a clever mix of authentic dialogue and setting, relatable family dysfunction and dynamic production style.

    Showstopping scenes of stressful kitchen heat were served up alongside a delectable range of new and established talent in the form of Jeremy Allen White (Carmy), Ebon Moss-Bachrach (Richie), Ayo Edebiri (Sydney) and Oliver Platt (Cicero/Uncle Jimmy).


    In charge is showrunner Christopher Storer, who came up with the concept after being inspired by his friend’s father Chris Zucchero, the owner of Chicago sandwich joint Mr Beef.

    With his professional chef sister also serving as a consultant, Storer succeeded in creating a deliciously authentic and intensely real drama. Buoyed along the way by 21 Emmys and five Golden Globes, Storer also watched his cast ascend, the tortured-soul performance of White garnering particular praise.

    Testing the parameters of a long-running show, Storer focused on the entire cast of characters and their backstories, a successful tactic used by shows such as Orange is the New Black to keep the drama – largely confined to a kitchen set – fresh.

    Pulling in Hollywood die-hards Oliver Platt and Jamie Lee Curtis for familial tough-love roles further enriched the mix, often using a non-chronological timeframe to go back to moments of family turbulence and tension. This made for three-dimensional characters and enabled evolution around difficult themes such as the aftermath of suicide and generational trauma.

    The Bear has come a long way in three seasons, starting with a spit-and-sawdust establishment serving up lunchtime beef sandwiches for its working customers.

    Carmy’s experience and longing for the high-end restaurant of his dreams hurtled forward in season two, as he sent his core crew off in different directions to hone their skills and help form his vision. With the restaurant striving for success but plagued by challenges, exhausting familial tensions were embedded in every episode of season three.

    Several themes play out in The Bear: love, family, loyalty, community and purpose. The relationship between Carmy and cousin Richie (not a real cousin, but a term of endearment) is key to linking past and future. Richie provides some of the highlights of comedy and pathos as he spits truth bombs, most frequently at talented sous-chef Syd.

    It is Syd who follows Carmy’s aspirations for gastronomic perfection but can’t abide the lack of order or the intense highs and lows that inevitably go hand in hand with his talent. And this is one central question to consider for the latest series: just how long will the audience remain loyal to Carmy and his endless quest for artistry in a high-failure rate industry?

    It’s all in the sauce

    Storer begins season four with a ghost. Carmy and his dead brother Mikey (Jon Bernthal) banter in a seven-minute scene, with Carmy ultimately confiding the dream of a restaurant as Mikey watches him make tomato sauce (“too much garlic”). The tomatoes resonate: Mikey left behind money hidden in tomato cans that ended up saving Carmy’s sanity and his dream of a proper restaurant.

    Just as oranges represent death to Francis Ford Coppola, Storer uses tomatoes to underscore themes; here they symbolise familial loyalty and history, a solid base to a meal, a core ingredient. Mikey was one of the core ingredients in Carmy’s life, and now he’s gone.

    Carmy awakens to a rerun of Groundhog Day on late-night TV and fittingly, we too are back – same dish, now more seasoned and enriched with its core ingredients and ready to serve up a big bowlful of family, love, ambition, strife and grief.

    The episode furthers the theme of loyalty as the restaurant receives The Tribune’s review – the cliffhanger of the season three finale. Naturally, Storer doesn’t let up – the food critic highlights “dissonance” and Carmy is back in emotional chaos, with Syd urging him to lighten up and lose the misery.

    In truth, this series could do with adding some more humour in the mix; the teasing and frivolous banter of season one has got somewhat lost in the seasons that followed.

    Storer ramps up the tension, setting several ticking clocks in place: chiefly, Uncle Jimmy’s deadline for the business to turn a profit, literally installed as a countdown on a digital clock in the kitchen. Then Syd’s headhunter calls, offering her desired autonomy and an exit strategy from the chaos.

    And Carmy raises the stakes with an intention to gain a Michelin star. Thus a heroic journey is set in place for the whole cast, with future battles both internal and external laid out.

    There’s too much going on at this feast and the feeling of being stuffed full of story is tangible by the end of the first episode. Still, with a season lining up more emotional turbulence steered by White, more celebrity cameos (Brie Larson and Rob Reiner are lined up) and the excellent cinematography and performances that we have come to expect, Storer stirs his secret sauce.

    The Bear still offers an entertaining and enticing proposition, bingeable and mostly satisfying.

    Jane Steventon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The Bear season 4: this meaty restaurant drama is still an enticing bingeable prospect – https://theconversation.com/the-bear-season-4-this-meaty-restaurant-drama-is-still-an-enticing-bingeable-prospect-260143

    MIL OSI

  • MIL-OSI Submissions: Five ways to avoid illness like the Lionesses

    Source: The Conversation – UK – By Samantha Abbott, Doctoral Researcher, Department of Sport Science, Nottingham Trent University

    England’s Beth Mead celebrates on the podium after victory over Germany in the 2022 Women’s European Championship final. photographyjp/Shutterstock

    Think back to the last time you had a cold or the flu. Now imagine stepping onto the pitch for a European Championship final while battling through those symptoms. For elite athletes, illness can strike at the worst possible time – and it could hit women harder.

    Research suggests that female athletes are more susceptible to cold and flu-like illnesses than their male counterparts. For England women’s national football team, the Lionesses, this risk only increases before a major tournament like the Euros.

    Close contact, shared kit, disrupted sleep and travel all add up to a perfect storm for infection. But targeted nutritional strategies, alongside good sleep and hand hygiene, can offer a crucial line of defence.


    1. Fuel first: energy matters for immunity

    Before anything else, players need to eat enough. Energy supports both performance and immune function. In fact, female athletes who didn’t meet their energy needs in the run-up to the 2016 Olympics were four times more likely to report cold or flu symptoms.

    This is especially relevant in women’s football, where low energy and carbohydrate intakes have been documented among both professional and recreational players. Regular meals and snacks that include carbohydrate-rich foods like oats, bread and pasta, especially around training, are essential to meet energy demands and support immune health.

    2. Eat the rainbow

    Athletes are often encouraged to go beyond the public’s five-a-day fruit and veg target, aiming instead for eight to ten portions daily. Why? Because colourful plant foods are packed with vitamins, minerals, antioxidants and anti-inflammatory compounds: all vital for immunity.

    Read more: We’re told to ‘eat a rainbow’ of fruit and vegetables. Here’s what each colour does in our body

    Each colour offers unique benefits. For instance, red fruits and vegetables, such as tomatoes, contain lycopene, a powerful antioxidant. Orange produce like carrots get their colour from beta-carotene, which is converted by the body into vitamin A – a key vitamin for immune health.

    Eating a rainbow of colours means getting a wide range of nutrients.

    3. Vitamin C: powerful but timing matters

    Vitamin C has long been linked with reducing the risk and severity of cold and flu symptoms. One Cochrane review found that regular vitamin C intake halved the risk of illness in physically active people.

    However, more isn’t always better. Long-term use of high-dose vitamin C supplements could blunt training adaptations – the structural and functional changes the body undergoes in response to repeated exercise – because of its anti-inflammatory effects. That’s why vitamin C is most effective when used strategically, such as during high-risk periods like travel or intense competition. Good food sources include oranges, kiwis, blackcurrants, red and yellow peppers, broccoli and even potatoes.

    4. Gut health supports immune health

    Around 70% of the immune system is located in the gut, making gut health a key player in illness prevention. This is where probiotics (live bacteria) and prebiotics (which feed those bacteria) come in.

    Probiotics, found in fermented foods like kefir and kimchi or in supplement form, have been shown to reduce the duration and severity of respiratory illnesses in athletes. Prebiotics have similarly shown promise. In one study, a 24-week prebiotic intervention in elite rugby players reduced the duration of cold and flu symptoms by over two days.

    Read more: Gut microbiome: meet Lactobacillus acidophilus – the gut health superhero

    In the build-up to the Euros, including probiotic-rich foods in their diet or taking a daily prebiotic and probiotic supplement may help players stay healthy and return to training faster if they do get ill.

    5. Zinc lozenges: first aid for a sore throat

    If cold-like symptoms do appear, zinc lozenges can offer fast-acting relief. Zinc has antiviral, antioxidant and anti-inflammatory properties. When zinc is delivered as a lozenge, it acts directly in the throat, where many infections begin. Taken within 24 hours of symptoms starting, zinc lozenges could shorten illness duration by a third.

    But caution is key. Long-term use of high-dose zinc supplements can actually suppress immune function. Zinc lozenges should only be used short-term at symptom onset, not as a daily supplement.

    Staying match-ready during major tournaments means more than just tactical drills and fitness. Nutrition is a powerful ally in illness prevention, especially for women’s teams like the Lionesses. From fuelling adequately to supporting gut health and knowing when to supplement, these nutritional strategies can make the difference between sitting on the bench and bringing a trophy home.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Five ways to avoid illness like the Lionesses – https://theconversation.com/five-ways-to-avoid-illness-like-the-lionesses-259302

    MIL OSI

  • MIL-OSI Submissions: Why is Islamophobia so hard to define?

    Source: The Conversation – UK – By Julian Hargreaves, Lecturer, Department of Sociology and Criminology, City St George’s, University of London

    The UK government wants a new definition of Islamophobia and has created a working group of politicians, academics and independent experts to provide one. It aims to settle long-running political debates over the term.

    The concept of Islamophobia describes anti-Muslim and anti-Islamic prejudices and their impact on Muslim communities. The term became familiar in the UK following publication of the Runnymede Trust report, Islamophobia: A Challenge for Us All, in 1997.

    The concept is now used to discuss negative public opinion towards Muslims and Islam, biased media reporting, verbal and physical assaults and online attacks. It is also used when discussing social and economic inequalities, discrimination within various institutional settings and unfair treatment from the police and security services.

    Previous definitions have been controversial, failing to unite politicians, academics and British Muslims, and leading to charged debates over free speech.

    Some academics have argued that the word “Islamophobia” – which suggests a phobia or fear of Islam – is an inaccurate label for a prejudice which often targets skin colour, ethnicity and culture.

    Many Muslim-led organisations accept that the term is imperfect and interchangeable with others such as “anti-Muslim hatred”. However, they maintain the term “Islamophobia” is needed to focus attention on a growing problem.

    Definitions and controversy

    The 1997 Runnymede Trust report defined Islamophobia as an “unfounded hostility towards Islam”, “the practical consequences of such hostility in unfair discrimination against Muslim individuals and communities” and “the exclusion of Muslims from mainstream political and social affairs”.

    The Runnymede Trust revised its definition in a follow-up report published in 2017. The report defines Islamophobia in two ways.

    The first is “anti-Muslim racism”. A longer, second version amends the United Nations’ 1965 definition of “racial discrimination”. These revised definitions are important because they reframed Islamophobia as a product of racist thinking rather than religious prejudice.

    Other attempts to define Islamophobia include British academic Chris Allen’s 200-word definition. Allen defined it as an ideology like racism that spreads negative views of Muslims and Islam, influencing social attitudes and leading to discrimination and violence. US political scientist Erik Bleich defined it more succinctly as “indiscriminate negative attitudes or emotions directed at Islam or Muslims”.

    In 2018, the all-party parliamentary group on British Muslims published another definition linking Islamophobia to racism. According to the APPG, “Islamophobia is rooted in racism and is a type of racism that targets expressions of Muslimness or perceived Muslimness.” The APPG called for its definition to be legally binding.

    The APPG definition was adopted by various organisations including local authorities, UK universities and the Labour party while in opposition. But it was rejected by the then Conservative government and later by the current Labour government, which argued it was seeking “a more integrated and cohesive approach”.

    This lack of consensus over previous definitions led Angela Rayner, the deputy prime minister, to announce the working group in March 2025. The group’s aim is to provide a new definition of “anti-Muslim hatred and Islamophobia” which is “reflective of a wide range of perspectives and priorities for British Muslims”.

    Former Conservative MP and attorney general Dominic Grieve was appointed to chair the group, evidence of Labour’s ambition to build consensus.

    A march in London against Islamophobia, racism and anti-migrant views.
    Shutterstock

    Some are concerned that use of the term “Islamophobia”, and particularly the APPG definition, stifles legitimate criticism of Islam. Free speech campaigners have argued that it is “blasphemy via the back door”.

    The centre-right thinktank Policy Exchange published a report claiming that the term is used in bad faith to divert attention away from serious social problems within some Muslim communities – specifically, discussion of the grooming gangs scandal.

    These debates bear resemblance to those surrounding the term “antisemitism” and the adoption of a definition proposed by the International Holocaust Remembrance Alliance. That term is widely accepted, although critics have argued this specific definition stifles legitimate criticism of the Israeli state.

    A new approach

    A new definition of “Islamophobia” must balance the protection of Muslim communities and freedoms of religion, expression and assembly for all Muslims and non-Muslims in the UK. It must be clear enough for everyday use, specific enough for academic and policy research, and capable of generating support across the UK’s diverse Muslim population.

    A proposed definition by an emerging thought leader on British Islam addresses these challenges. Mamnun Khan is a writer whose work explores the social integration of Muslims in contemporary British society. Khan is associated with Equi, a thinktank which describes its work as “drawing on Muslim insight”. Other members of Equi are members of the government’s working group.

    Khan sets out three tests that a definition must pass, based on Islamic law, moral teachings within Islam and other more universal values. First, a definition must serve the public interest. Second, it must be just and balanced and preserve freedom of expression. Third, it must uphold the dignity of Muslim communities.

    For Khan, “Islamophobia, also known as anti-Muslim hatred, is an irrational fear, hostility, or prejudice toward Muslims that leads to discrimination, unequal treatment, exclusion, social and political marginalisation, or violence.”

    Khan’s definition has many good qualities. It brings together stronger elements of previous definitions – for example, the separation of negative attitudes and outcomes – without being weakened by jargon or strong political ideology. On the other hand, some social scientists may question whether defining something as “irrational” is a matter of preference rather than academic research.

    The working group also needs to decide whether Islamophobia and anti-Muslim hatred are closely related or exactly the same. Failure to do so will cause confusion and inconsistency among those wishing to apply the term precisely. Regardless, Khan’s example is a strong step in the right direction. A better definition of Islamophobia is needed, and it is now within reach.

    Julian Hargreaves is an Affiliated Researcher at the Prince Alwaleed bin Talal Centre of Islamic Studies, University of Cambridge.

    ref. Why is Islamophobia so hard to define? – https://theconversation.com/why-is-islamophobia-so-hard-to-define-258522

    MIL OSI

  • MIL-OSI Submissions: Could electric brain stimulation lead to better maths skills?

    Source: The Conversation – UK – By Roi Cohen Kadosh, Professor of Cognitive Neuroscience, University of Surrey

    Triff/Shutterstock

    A painless, non-invasive brain stimulation technique can significantly improve how young adults learn maths, my colleagues and I found in a recent study. In a paper in PLOS Biology, we describe how this might be most helpful for those who are likely to struggle with mathematical learning because of how their brain areas involved in this skill communicate with each other.

    Maths is essential for many jobs, especially in science, technology, engineering and finance. However, a 2016 OECD report suggested that a large proportion of adults in developed countries (24% to 29%) have maths skills no better than those of a typical seven-year-old. This lack of numeracy can contribute to lower income, poor health, reduced political participation and even diminished trust in others.

    Education often widens rather than closes the gap between high and low achievers, a phenomenon known as the Matthew effect. Those who start with an advantage, such as being able to read more words when starting school, tend to pull further ahead. Stronger educational achievement has also been associated with socioeconomic status, higher motivation and greater engagement with material learned during a class.

    Biological factors, such as genes, brain connectivity, and chemical signalling, have been shown in some studies to play a stronger role in learning outcomes than environmental ones. This has been well-documented in different areas, including maths, where differences in biology may explain educational achievements.


    To explore this question, we recruited 72 young adults (18–30 years old) and taught them new maths calculation techniques over five days. Some received a placebo treatment. Others received transcranial random noise stimulation (tRNS), which delivers gentle electrical currents to the brain. It is painless and often imperceptible, unless you focus hard to try and sense it.

    It is possible that tRNS may cause long-term side effects, but in previous studies my team assessed participants for cognitive side effects and found no evidence of any.

    Could tRNS help people improve their maths skills?
    Prostock-studio/Shutterstock

    Participants who received tRNS were randomly assigned to receive it in one of two brain areas. Some received it over the dorsolateral prefrontal cortex, a region critical for memory, attention and the acquisition of new cognitive skills. Others had tRNS over the posterior parietal cortex, which processes maths information, mainly once learning has been accomplished.

    Before and after the training, we also scanned their brains and measured levels of key neurochemicals such as gamma-aminobutyric acid (GABA), which we showed previously, in a 2021 study, to play a role in brain plasticity and learning, including maths.

    Some participants started with weaker connections between the prefrontal and parietal brain regions, a biological profile that is associated with poorer learning. The study results showed these participants made significant gains in learning when they received tRNS over the prefrontal cortex.

    Stimulation helped them catch up with peers who had stronger natural connectivity. This finding shows the critical role of the prefrontal cortex in learning and could help reduce educational inequalities that are grounded in neurobiology.

    How does this work? One explanation lies in a principle called stochastic resonance. This is when a weak signal becomes clearer when a small amount of random noise is added.

    In the brain, tRNS may enhance learning by gently boosting the activity of underperforming neurons, helping them get closer to the point at which they become active and send signals. This is a point known as the “firing threshold”, especially in people whose brain activity is suboptimal for a task like maths learning.
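    To make the idea concrete, here is a toy simulation (purely illustrative, not taken from the study): a sine wave too weak to cross a detection threshold on its own becomes detectable once a modest amount of Gaussian noise is added, because noise-assisted threshold crossings cluster around the signal’s peaks.

```python
import math
import random

def peak_detection_rate(signal_amp, noise_sd, threshold=1.0, n=10_000, seed=42):
    """Fraction of near-peak signal samples that cross the firing threshold.

    The 'signal' is a sub-threshold sine wave (it never reaches the
    threshold by itself); Gaussian noise is added to each sample.
    Crossings cluster around the signal's peaks, so moderate noise
    makes the hidden signal detectable.
    """
    rng = random.Random(seed)
    hits = peaks = 0
    for i in range(n):
        s = signal_amp * math.sin(2 * math.pi * i / 100)  # weak periodic input
        if s > 0.5 * signal_amp:  # sample lies near a peak of the signal
            peaks += 1
            if s + rng.gauss(0, noise_sd) > threshold:
                hits += 1
    return hits / peaks

quiet = peak_detection_rate(0.8, 0.0)   # no noise: 0.8 never exceeds 1.0
noisy = peak_detection_rate(0.8, 0.3)   # moderate noise lifts peaks over 1.0
```

    With no noise the detection rate is exactly zero; with moderate noise it becomes positive. In a fuller treatment, a very large `noise_sd` would swamp the signal again – detection is best at an intermediate noise level, which is the hallmark of stochastic resonance.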

    It is important to note what this technique does not do. It does not make the best learners even better. That is what makes this approach promising for bridging gaps, not widening them. This form of brain stimulation helps level the playing field.

    Our study focused on healthy, high-performing university students. But in similar studies on children with maths learning disabilities (2017) and with attention-deficit/hyperactivity disorder (2023) my colleagues and I found tRNS seemed to improve their learning and performance in cognitive training.

    I argue our findings could open a new direction in education. The biology of the learner matters, and with advances in knowledge and technology, we can develop tools that act on the brain directly, not just work around it. This could give more people the chance to get the best benefit from education.

    In time, perhaps personalised, brain-based interventions like tRNS could support learners who are being left behind not because of poor teaching or personal circumstances, but because of natural differences in how their brains work.

    Of course, very often education systems aren’t operating to their full potential because of inadequate resources, social disadvantage or systemic barriers. And so any brain-based tools must go hand-in-hand with efforts to tackle these obstacles.

    Roi Cohen Kadosh serves on the scientific advisory boards of Neuroelectrics Inc., and Innosphere Ltd. He is the founder and shareholder of Cognite Neurotechnology Ltd. He received funding from the Wellcome Trust, UKRI, the British Academy, IARPA, DASA, Joy Ventures, the James S McDonnell Foundation, and the European Union. He is affiliated with the University of Surrey.

    ref. Could electric brain stimulation lead to better maths skills? – https://theconversation.com/could-electric-brain-stimulation-lead-to-better-maths-skills-260134

    MIL OSI

  • MIL-Evening Report: What did ancient Rome smell like? Honestly, often pretty rank

    Source: The Conversation (Au and NZ) – By Thomas J. Derrick, Gale Research Fellow in Ancient Glass and Material Culture, Macquarie University

    minoandriani/Getty Images

    The roar of the arena crowd, the bustle of the Roman forum, the grand temples, the Roman army in red with glistening shields and armour – when people imagine ancient Rome, they often think of its sights and sounds. We know less, however, about the scents of ancient Rome.

    We cannot, of course, go back and sniff to find out. But the literary texts, physical remains of structures, objects, and environmental evidence (such as plants and animals) can offer clues.

    So what might ancient Rome have smelled like?

    Honestly, often pretty rank

    In describing the smells of plants, author and naturalist Pliny the Elder uses words such as iucundus (agreeable), acutus (pungent), vis (strong), or dilutus (weak).

    None of that language is particularly evocative in its power to transport us back in time, unfortunately.

    But we can probably safely assume that, in many areas, Rome was likely pretty dirty and rank-smelling. Property owners did not commonly connect their toilets to the sewers in large Roman towns and cities – perhaps fearing rodent incursions or odours.

    Roman sewers were more like storm drains, and served to take standing water away from public areas.

    Professionals collected faeces for fertiliser and urine for cloth processing from domestic and public latrines and cesspits. Chamber pots were also used, which could later be dumped in cesspits.

    This waste disposal process was just for those who could afford to live in houses; many lived in small, non-domestic spaces, barely furnished apartments, or on the streets.

    A common whiff in the Roman city would have come from the animals and the waste they created. Roman bakeries frequently used large lava stone mills (or “querns”) turned by mules or donkeys. Then there was the smell of pack animals and livestock being brought into town for slaughter or sale.

    Animals were part of life in the Roman empire.
    Marco_Piunti/Getty Images

    The large “stepping-stones” still seen in the streets of Pompeii were likely so people could cross streets and avoid the assorted feculence that covered the paving stones.

    Disposal of corpses (animal and human) was not systematic. Depending on the class of the person who had died, the dead might well have been left out in the open without cremation or burial.

    Bodies, potentially decaying, were a more common sight in ancient Rome than now.

    Suetonius, writing in the first century CE, famously wrote of a dog carrying a severed human hand to the dining table of the Emperor Vespasian.

    Deodorants and toothpastes

    In a world devoid of today’s modern scented products – and daily bathing by most of the population – ancient Roman settlements would have smelt of body odour.

    Classical literature has some recipes for toothpaste and even deodorants.

    However, many of the deodorants were to be used orally (chewed or swallowed) to stop one’s armpits smelling.

    One was made by boiling golden thistle root in fine wine to induce urination (which was thought to flush out odour).

    The Roman baths would likely not have been as hygienic as they may appear to tourists visiting today. A small tub in a public bath could hold between eight and 12 bathers.

    The Romans had soap, but it wasn’t commonly used for personal hygiene. Olive oil (including scented oil) was preferred. It was scraped off the skin with a strigil (a bronze curved tool).

    This oil and skin combination was then discarded (maybe even slung at a wall). Baths had drains – but as oil and water don’t mix, it was likely pretty grimy.

    Scented perfumes

    The Romans did have perfumes and incense.

    The invention of glassblowing in the late first century BCE (likely in Roman-controlled Jerusalem) made glass readily available, and glass perfume bottles are a common archaeological find.

    Animal and plant fats were infused with scents – such as rose, cinnamon, iris, frankincense and saffron – and were mixed with medicinal ingredients and pigments.

    The roses of Paestum in Campania (southern Italy) were particularly prized, and a perfume shop has even been excavated in the city’s Roman forum.

    The trading power of the vast Roman empire meant spices could be sourced from India and the surrounding regions.

    There were warehouses for storing spices such as pepper, cinnamon and myrrh in the centre of Rome.

    In a recent Oxford Journal of Archaeology article, researcher Cecilie Brøns writes that even ancient statues could be perfumed with scented oils.

    Sources frequently do not describe the smell of perfumes used to anoint the statues, but a predominantly rose-based perfume is specifically mentioned for this purpose in inscriptions from the Greek city of Delos (at which archaeologists have also identified perfume workshops). Beeswax was likely added to perfumes as a stabiliser.

    Enhancing the scent of statues (particularly those of gods and goddesses) with perfumes and garlands was important in their veneration and worship.

    An olfactory onslaught

    The ancient city would have smelt like human waste, wood smoke, rotting and decay, cremating flesh, cooking food, perfumes and incense, and many other things.

    It sounds awful to a modern person, but it seems the Romans did not complain about the smell of the ancient city that much.

    Perhaps, as historian Neville Morley has suggested, to them these were the smells of home or even of the height of civilisation.

    Thomas J. Derrick does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What did ancient Rome smell like? Honestly, often pretty rank – https://theconversation.com/what-did-ancient-rome-smell-like-honestly-often-pretty-rank-257111

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: New laws to make it harder for large Australian and foreign companies to avoid paying tax

    Source: The Conversation (Au and NZ) – By Kerrie Sadiq, Professor of Taxation, QUT Business School, and ARC Future Fellow, Queensland University of Technology

    The Conversation, CC BY

    The beginning of the financial year means that, for the first time in Australia, the public will see previously unreleased tax reports produced by multinational taxpayers.

    These documents, known as country-by-country reports, or CbCR for short, contain information about the tax practices of large Australian businesses and foreign businesses operating in Australia. This information, previously only available to the taxpayer and the Australian Tax Office, will be made public.

    Country-by-country reports, announced in the October 2022-2023 budget, were introduced with other measures designed to improve corporate tax behaviour. The reports will be released from this week as part of corporate reporting practices. Multinationals have 12 months to comply.

    A fairer tax system

    Country-by-country reporting forms part of the government’s multinational tax integrity election commitment package. The aim is to ensure a fairer and more sustainable tax system. Large firms will be required to publish a statement on their global activities plus tax information for each jurisdiction in which they operate.

    Until now, large multinationals only had to prepare annual consolidated financial statements under international financial reporting standards. The traditional reports aggregate results and provide limited geographic reporting information.

    Traditional high-level reporting allows multinationals to conceal their country-level activities. This hides questionable tax practices.

    Country-by-country reporting allows us to better see where a multinational operates. More importantly, the amount of activity in each jurisdiction is reported. The information provides clues as to whether artificial profit shifting has occurred.

    Anyone interested can uncover details about how multinationals structure their global operations. The information may reveal a misalignment between a company’s real economic presence in a country and the profits it books and taxes it pays there.

    Bringing Australia into line with the EU

    Country-by-country reporting is not new. It is the requirement that the information be made public that has changed.

    Australian firms have been required to provide such reports to the Australian Tax Office since 2016. However, the information has been confidential.

    The new public disclosure law brings Australia into line with the European Union, which introduced a similar requirement for large firms operating there last year.

    How country-by-country reporting works

    A taxpayer with annual global income above A$1 billion, and at least A$10 million of Australian-sourced turnover, will need to produce a report. The obligation to disclose rests with the parent entity, no matter where it is located.

    Australia’s largest companies, including mining giants Rio Tinto and BHP, biotech firm CSL, and investment bank Macquarie Group, will be among those expected to report, as will foreign tech behemoths such as Apple, Amazon, Microsoft and Meta.

    These tech giants are the same US firms likely to be excluded from the global minimum tax rules under a G7 agreement reached last week. Under the agreement, US multinationals were exempted from paying more corporate tax overseas. Other G7 members gave in to protect their own companies from the US’s threat of retaliation.

    Under the law change in Australia, a parent entity will provide its name, the names of all members of the group, a description of their approach to tax, and information about operations in certain countries. Included on the list are countries that attract multinationals due to reduced tax obligations, such as Singapore, Switzerland, and the Bahamas.

    Everyone will be able to see where a multinational is operating. They will also see the types of business activities conducted, number of employees, assets, revenue, and taxes paid. Large profits in a country but little business activity and very few employees may raise questions, especially if a country has a low tax rate.

    Benefits of better transparency

    Access to the extra information will help investors assess the tax and reputational risk of a firm. A multinational that shifts profits to low tax countries may be audited and pay extra tax and penalties.

    Increased transparency allows greater scrutiny. In turn, it is hoped multinationals will reduce aggressive tax planning due to potential risk to their reputation.

    If multinationals shift less taxable profits out of Australia to low-tax or no-tax jurisdictions, this will lead to Australia receiving a greater share of much needed corporate tax revenue.

    Reducing profit shifting

    Recent academic research on public country-by-country reporting reveals it provides additional information to better identify tax haven activity. However, it does not result in a significant drop in corporate tax avoidance.

    Increased tax transparency helps investors and tax authorities to better understand a multinational’s economic and tax footprint across jurisdictions. It is especially important now that US giants look set to be excluded from the 15% global minimum tax rules. Transparency by itself, however, does not lead to multinationals paying more corporate tax.

    By its very nature, tax avoidance is legal but pushes the boundaries by going against the spirit of the law. Indeed, many large multinationals argue tax is a legal obligation and is not voluntary. They maintain they pay the tax required of them according to the law.

    Undoubtedly, Australia’s new public country-by-country regime is a positive step for tax transparency. As a country initiative, it has been applauded as groundbreaking and world leading. However, it is not a panacea to corporate tax avoidance.

    To limit corporate tax avoidance and have multinationals pay more corporate taxes, we must get to the heart of the problem. We must change the law that dictates the way multinationals are taxed.

    Kerrie Sadiq currently receives funding from the Australian Research Council. She has previously received research grants from CPA Australia and CAANZ.

    Rodney Brown has previously received research grants from CPA Australia and CAANZ.

    ref. News laws to make it harder for large Australian and foreign companies to avoid paying tax – https://theconversation.com/news-laws-to-make-it-harder-for-large-australian-and-foreign-companies-to-avoid-paying-tax-260004

    MIL OSI Analysis – EveningReport.nz

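
    The two thresholds above can be expressed as a simple eligibility check. This is an illustrative sketch only: the function name is invented, and it encodes just the dollar figures quoted in this article, not the full legislative definition.

```python
def must_file_public_cbcr(global_income_aud: float,
                          australian_turnover_aud: float) -> bool:
    """Illustrative check of the article's two thresholds:
    annual global income above A$1 billion, and at least
    A$10 million of Australian-sourced turnover."""
    return (global_income_aud > 1_000_000_000
            and australian_turnover_aud >= 10_000_000)
```

    On these figures, a group with A$2 billion in global income and A$50 million in Australian turnover would be caught, while one with A$500 million in global income would not.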
  • MIL-Evening Report: Farming within Earth’s limits is still possible – but it will take a Herculean effort

    Source: The Conversation (Au and NZ) – By Michalis Hadjikakou, Senior Lecturer in Environmental Sustainability, School of Life and Environmental Sciences, Faculty of Science, Engineering & Built Environment, Deakin University

    Patrick Pleul/Getty

    The way we currently produce and consume food takes a big toll on the environment.

    Worldwide, farming is responsible for more than 20% of greenhouse gas emissions and uses more than 70% of all fresh water taken from rivers, lakes and groundwater. It’s the leading driver of deforestation and nutrient pollution, largely from fertiliser run-off. All of these pose a serious threat to ecosystems.

    If this sounds serious, it’s because it is. If emissions and land clearing trends continue, the world’s food system alone could make it impossible to meet climate targets. If we continue eating and producing food in the same way we are now, we will almost certainly exceed crucial environmental limits by 2050.

    What can be done? In our new research, we looked for ways to keep the food system within environmental limits by 2050. We found only one approach worked: combine high-impact changes such as shifting to flexitarian (low meat) diets, improving farming practices and reducing food waste.

    Why will farming take us past environmental limits?

    Environmental limits are also known as planetary boundaries. These nine boundaries are Earth’s natural safety limits. They range from freshwater resources to the biosphere to the climate. Human activities have pushed past six out of nine safe boundaries through clearing too much land, overusing water for irrigation, overapplying fertilisers or emitting more than our shrinking carbon budget permits.

    If we cross these thresholds, we risk dangerous and irreversible changes to the conditions supporting a stable planet.

    Transforming the way we farm and eat is essential if we are to keep humanity in a safe operating space within environmental limits.

    The 2021 documentary Breaking Boundaries focused on the very real dangers of breaching planetary limits.

    What does this transformation look like?

    The challenge of making food production sustainable is long-running. Previous research has compared the effectiveness of different changes authorities and consumers could make. But most studies used different models, making it hard to compare changes.

    To overcome this problem, we synthesised information from previous studies and built a database of thousands of future food system scenarios and possible changes. Then we performed a meta-analysis to combine data from multiple studies and draw more robust conclusions.

    This approach allows policymakers and researchers to compare apples with apples, and to see which combinations of changes would let us stay within crucial safety limits by 2050.

    We focused on four vital indicators: how much land and water is used for farming; the amount of greenhouse gases emitted; and the flows of two key nutrients, nitrogen and phosphorus.

    What works best?

    What stood out was the sheer variation in effectiveness. Some changes would work very well across several areas, while others would take a lot of effort for not enough result.

    Two changes punch well above their weight on land, water and emissions.

    The first is shifting to a flexitarian diet with fewer foods sourced from animals. This is similar to traditional regional diets such as the Mediterranean and Okinawan diets, where meat and dairy are eaten in much smaller proportions compared to whole grains, fruits, vegetables, nuts and legumes.

    Returning to this diet could shrink how much land we use for farming by almost a quarter (24%), cut water demand by 14% and slash greenhouse gas emissions by 47%.

    Traditional diets such as the Mediterranean diet rely less on animal products and more on plants, nuts, oils and legumes.
    monticello/Shutterstock

    The second is breeding better livestock. Livestock today are much better at converting feed into meat or milk than their predecessors, but they could be better still. More productive animals could enable an 18% reduction in land use, a 10% drop in water use and a 34% cut to emissions.

    Modern fertilisers have made it possible to produce many more crops and fodder. But if too much fertiliser is applied, it can wash off after rain and pollute waterways.

    Better timed and more precise application of fertiliser is by far the best way to cut nutrient pollution. Major improvements here could cut nitrogen pollution by 39% and phosphorus pollution by 42%. As a side benefit, it could save farmers money.



    Increasing crop yields, lowering agricultural emissions through better soil management and other practices, and taking up technologies such as methane-reducing supplements can significantly reduce our risk of exceeding environmental limits. So too can cutting food waste and using water more wisely in farming. Our extended results show the relative benefits of ten possible interventions.
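
    For quick comparison, the headline percentage cuts quoted above can be tabulated. The numbers are the ones reported in this article; the data structure and the small helper for picking the biggest single cut per indicator are illustrative, not part of the study’s meta-analysis.

```python
# Headline percentage cuts per intervention, as quoted in the article.
reductions = {
    "flexitarian diet":     {"land": 24, "water": 14, "emissions": 47},
    "livestock breeding":   {"land": 18, "water": 10, "emissions": 34},
    "precision fertiliser": {"nitrogen": 39, "phosphorus": 42},
}

def biggest_cut(indicator: str) -> tuple[str, int]:
    """Return the single intervention with the largest cut for one indicator."""
    options = {name: cuts[indicator]
               for name, cuts in reductions.items()
               if indicator in cuts}
    best = max(options, key=options.get)
    return best, options[best]
```

    On these figures, the flexitarian diet delivers the largest single cut to emissions (47%), yet no intervention leads on every indicator, which is consistent with the article’s point that no single change is enough on its own.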

    There is no silver bullet

    We found no single change was up to the task of making food production and consumption sustainable.

    We considered over a million possible combinations of changes. Of these combinations, only a tiny fraction – 0.02% – give us a fighting chance of staying within all environmental limits.

    In almost all successful combinations, the world would need to make significant cuts to how many calories come from animals, make big improvements to fertiliser use and nutrient management, and focus research and development on finding ways to farm land and livestock with fewer resources and lower emissions.

    Most successful combinations also rely on halving food waste and reducing overconsumption.

    Is it still possible?

    Farming within the limits of Earth’s systems will be hard. But it is possible.

    Some work is already being done. Global organisations such as the United Nations are making a concerted effort to accelerate changes to food systems across many countries.

    Research like ours can make people feel powerless. But individual change is always worthwhile. Reducing your intake of animal products benefits your health and the planet.

    Properly addressing these very real issues will take concerted, collective work. If we don’t succeed, we risk triggering ecological collapse – and threatening the foundation for human civilisation.

    The knowledge and tools are at hand. What’s needed now is ambition – and a sense of what’s at stake.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Farming within Earth’s limits is still possible – but it will take a Herculean effort – https://theconversation.com/farming-within-earths-limits-is-still-possible-but-it-will-take-a-herculean-effort-259901


  • MIL-Evening Report: Gum disease, decay, missing teeth: why people with mental illness have poorer oral health

    Source: The Conversation (Au and NZ) – By Bonnie Clough, Senior Lecturer, School of Applied Psychology, Griffith University

    mihailomilovanovic/Getty Images

    People with poor mental health face many challenges. One that’s perhaps lesser known is that they’re more likely than the overall population to have poor oral health.

    Research has shown people with serious mental illness are four times more likely than the general population to have gum disease. They’re nearly three times more likely to have lost all their teeth due to problems such as gum disease and tooth decay.

    Serious mental illnesses include major depressive disorder, bipolar disorder and psychotic disorders such as schizophrenia. These conditions affect about 800,000 Australians.

    People living with schizophrenia have, on average, eight more teeth that are decayed, missing or filled than the general population.

    So why does this link exist? And what can we do to address the problem?

    Why is this a problem?

    Oral health problems are expensive to fix and can make it hard for people to eat, socialise, work or even just smile.

    What’s more, dental issues can land people in hospital. Our research shows dental conditions are the third most common reason for preventable hospital admissions among people with serious mental illness.

    Meanwhile, poor oral health is linked with long-term health conditions such as diabetes, heart disease, some cancers, and even cognitive problems. This is because the bacteria associated with gum disease can cause inflammation throughout the body, affecting other organ systems.

    Why are mental health and oral health linked?

    Poor mental and oral health share common risk factors. Social factors such as isolation, unemployment and housing insecurity can worsen both oral and mental health.

    For example, unemployment increases the risk of oral disease. This can be due to financial difficulties, reduced access to oral health care, or potential changes to diet and hygiene practices.

    At the same time, oral disease can increase barriers to finding employment, due to stigma, discrimination, dental pain and associated long-term health conditions.

    It’s clear the relationship between oral health and mental health goes both ways. Dental disease can reduce self-esteem and increase psychological distress. Meanwhile, symptoms of mental health conditions, such as low motivation, can make engaging in good oral health practices, including brushing, flossing, and visiting the dentist, more difficult.

    And like many people, those with serious mental illness can experience significant anxiety about going to the dentist. They may also have experienced trauma in the past, which can make visiting a dental clinic a frightening experience.

    Separately, poor oral health can be made worse by some medications for mental health conditions. Certain medications can interfere with saliva production, reducing the protective barrier that covers the teeth. Some may also increase sugar cravings, which heightens the risk of tooth decay.

    Some medications people take for mental health conditions can affect oral health.
    Gladskikh Tatiana/Shutterstock

    Our research

    In a recent study, we interviewed young people with mental illness. Our findings show the significant personal costs of dental disease among people with mental illness, and highlight the relationship between oral and mental health.

    Smiling is one of our best ways to communicate, but we found people with serious mental illness were sometimes embarrassed and ashamed to smile due to poor oral health.

    One participant told us:

    [poor oral health is] not only [about] the physical aspects of restricting how you eat, but it’s also about your mental health in terms of your self-esteem, your self-confidence, and basic wellbeing, which sort of drives me to become more isolated.

    Another said:

    for me, it was that serious fear of – God my teeth are looking really crap, and in the past they’ve [dental practitioners] asked, “Hey, you’ve missed this spot; what’s happening?”. How do I explain to them, hey, I’ve had some really shitty stuff happening and I have a very serious episode of depression?

    What can we do?

    Another of our recent studies focused on improving oral health awareness and behaviours among young adults experiencing mental health difficulties. We found a brief online oral health education program improved participants’ oral health knowledge and attitudes.

    Improving oral health can result in improved mental wellbeing, self-esteem and quality of life. But achieving this isn’t always easy.

    Limited Medicare coverage for dental care means oral diseases are frequently treated late, particularly among people with mental illness. By this time, more invasive treatments, such as removal of teeth, are often required.

    It’s crucial the health system takes a holistic approach to caring for people experiencing serious mental illness. That means we have mental health staff who ask questions about oral health, and dental practitioners who are trained to manage the unique oral health needs of people with serious mental illness.

    It also means increasing government funding for oral health services – promotion, prevention and improved interdisciplinary care. This includes better collaboration between oral health, mental health, and peer and informal support sectors.

    Amanda Wheeler is an investigator on a MetroSouth Health 2025 grant exploring use of Queensland Emergency Departments for people with mental ill-health seeking acute care for oral health problems.

    Steve Kisely has received a grant on oral health from Metro South Research Foundation and one from the Medical Research Future Fund.

    Bonnie Clough, Caroline Victoria Robertson, and Santosh Tadakamadla do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Gum disease, decay, missing teeth: why people with mental illness have poorer oral health – https://theconversation.com/gum-disease-decay-missing-teeth-why-people-with-mental-illness-have-poorer-oral-health-258403


  • MIL-Evening Report: The National Anti-Corruption Commission turns 2 – has it restored integrity to federal government?

    Source: The Conversation (Au and NZ) – By A J Brown, Professor of Public Policy & Law, Centre for Governance & Public Policy, Griffith University

    The National Anti-Corruption Commission (NACC) opened its doors two years ago this week amid much fanfare and high expectations.

    Since then the body has attracted considerable criticism, overshadowing a solid, if slow, start to a whole new anti-corruption system across federal government.

    Established with strong powers after a history of much weaker proposals, what has it achieved in its first two years?

    Early hurdles

    On its first day, the decision to livestream the opening ceremony showed the Commission was alive to public expectations.

    However, the Commission’s reputation faced major early challenges: fears its transparency had been “nobbled”, and its damaging initial decision not to investigate officials referred by the Robodebt Royal Commission.

    The first challenge flowed from the politics that birthed the Commission.

    In 2022, despite otherwise state-of-the-art powers, the Albanese government made a late decision to insert an “exceptional circumstances” test into the Commission’s ability to hold public hearings in corruption investigations.

    The shift created a bad impression. Many voices, including cross-bench parliamentarians, were left with good reason to question the very institution they helped create.

    The problem will haunt the NACC until the unnecessary threshold is removed.

    Public recognition

    In reality, the NACC still has hefty public hearing powers, but they are yet to be used.

    When the need arises for royal commission-scale transparency, using those powers will deliver an important side benefit the NACC still badly needs: public visibility.

    The challenge is confirmed by as-yet-unpublished research on public trust by Griffith University. In a survey conducted in March this year, only 12% of respondents said they knew at least a fair amount about the NACC, while a third had never heard of it at all, or didn’t know.

    This contrasts with the NSW Independent Commission Against Corruption, now 37 years old and the country’s heaviest user of public hearings. Over a quarter (26%) of NSW respondents said they knew at least a fair amount about the ICAC.

    Building visibility is a slow road, and does not mean the NACC is not doing its job. But with recognition a cornerstone of confidence, it’s a key tool the Commission clearly still needs to learn how to use.

    Workload

    In fact, the NACC’s heavy pipeline of work is finally starting to give it more to talk about.

    About 4,500 corruption complaints or referrals have been assessed since 1 July 2023, leading to more than 40 full investigations, including 31 currently underway.

    It will take time for this workload to pay off, in dealing with and preventing corruption, as well as reinforcing the public trust everyone needs. But even if slow, the first results confirm the importance of the investment.

    This week, the Commission published its fourth investigation report, revealing details of serious corrupt conduct by a Department of Home Affairs senior executive who abused her office by dishonestly advantaging her sister’s fiancé for a job.

    Small fry? Maybe to some. But the fact 15 of the current investigations relate to senior officials takes the fight against nepotism and cronyism right to where it needs to be.

    Before the NACC, there was little confidence in how this kind of soft corruption was being dealt with by federal agencies.

    Hard corruption

    In its first two years, the NACC has also monitored 40 internal investigations by agencies which previously would have gone unsupervised, if they happened at all.

    On harder corruption, some results tell an even stronger tale.

    Last year, the NACC finalised an investigation which saw a former Australian Taxation Office employee jailed for five years, for accepting A$150,000 in bribes to reduce the tax debts of a Sydney businessman – also since jailed.

    And in December, a former Western Sydney Airport manager pleaded guilty to soliciting a A$200,000 bribe in exchange for a A$5 million services contract at Badgerys Creek.

    Prior to the NACC, this was exactly the type of hard corruption many federal politicians and public servants claimed did not occur. No-one believed it, but now there’s a system for getting it under control.

    Politicians not immune

    The fact 13 of the NACC’s current investigations relate to former or current federal politicians or their staff is also reassuring. Of all the public officials in Australia, they have long been the most immune from integrity oversight.

    Known referrals include former Liberal minister Stuart Robert, in relation to alleged improper financial dealings with Canberra lobbying firm Synergy 360.

    A separate review found $374 million in contracts linked to Robert and the firm were poor value for money or plagued with perceived conflicts of interest.

    Even if Robert’s denials are correct, the NACC has good scope to help ensure no such dealings are possible in the future.

    The NACC’s strategic priorities highlight “senior public official decision-making” as an area where “even the perception of corruption can significantly harm trust in government”. This is especially important given the lack of regulation covering contractor, consultant and departmental relationships.

    Robodebt setback

    Tackling such fundamental issues, and not just driving a hamster wheel of criminal investigations, is the big challenge. It is underscored by the worst hurdle confronted by the NACC: its initial refusal to investigate Robodebt.

    The NACC’s independent inspector, Gail Furness, found that decision was contaminated by a badly managed conflict of interest, which caused the Commission reputational damage.

    But the poor handling also provided the circuit breaker needed for an independent reconsideration.

Since February, the NACC has been investigating whether six individuals referred by the Robodebt Royal Commission engaged in corrupt conduct.

It is a chance for the Commission to show it’s more than a compliance-focused enforcement agency, and is ready to play a positive part in ensuring accountability and justice for victims when officials abuse their power.

    The larger mission

    Accepting this larger mission is a challenge for all anti-corruption commissions, but the NACC’s ability to do so is aided by some special powers.

    Its broad definition of “corrupt conduct” means it can tackle any kind of serious integrity failure, including breaches of trust or abuses of power, which don’t involve the types of private gain often associated with corruption in the past.

    A second key tool – also the likely solution to its visibility problem – is the Commission’s unique power to tackle larger issues through public inquiries.

    Also yet to be used, this power extends to any “corruption risks and vulnerabilities” or “measures to prevent corruption” the Commission sees fit. Unlike individual investigation hearings, it does not require “exceptional circumstances”.

    The last two years have seen the NACC well and truly blooded in its role as the cornerstone of the federal integrity transformation we needed to have.

    Now the question is more about the Commission’s choices of direction, including how it nurtures its relationship with the public, than whether it has capacity to get the job done.

    A J Brown AM is Chair of Transparency International Australia. He has received funding from the Australian Research Council and all Australian governments for research on public interest whistleblowing, integrity and anti-corruption reform through partners including Australia’s federal and state Ombudsmen and other regulatory agencies, parliaments, state anti-corruption agencies, and private sector industry bodies. He currently leads an ARC Discovery Project on mapping and harnessing public trust and distrust, in partnership with Sydney, La Trobe and Bond Universities. He is a former senior investigator for the Commonwealth Ombudsman, was a member of the Commonwealth Ministerial Expert Panel on Whistleblowing and is a member of the Queensland Public Sector Governance Council.

    ref. The National Anti-Corruption Commission turns 2 – has it restored integrity to federal government? – https://theconversation.com/the-national-anti-corruption-commission-turns-2-has-it-restored-integrity-to-federal-government-257889

MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: ‘Shit in, shit out’: AI is coming for agriculture, but farmers aren’t convinced

    Source: The Conversation (Au and NZ) – By Tom Lee, Senior Lecturer, School of Design, University of Technology Sydney

    David Gray / AFP / Getty Images

    Australian farms are at the forefront of a wave of technological change coming to agriculture. Over the past decade, more than US$200 billion (A$305 billion) has been invested globally into the likes of pollination robots, smart soil sensors and artificial intelligence (AI) systems to help make decisions.

    What do the people working the land make of it all? We interviewed dozens of Australian farmers about AI and digital technology, and found they had a sophisticated understanding of their own needs and how technology might help – as well as a wariness of tech companies’ utopian promises.

    The future of farming

    The supposed revolution coming to agriculture goes by several names: “precision agriculture”, “smart farming”, and “agriculture 4.0” are some of the more common ones.

These names all gesture towards a future in which the relationship between humans, computing and nature has been significantly reconfigured. Perhaps remote sensing technology will monitor ever more of a farm system, autonomous vehicles will patrol it, and AI will predict crop growth or cattle weight gain.

    But there’s another story to tell about the way technological change happens. It involves people and communities creating their own future, their own sense of important change from the past.

    AI, country style

    Our research team conducted more than 35 interviews with farmers, specifically livestock producers, from across Australia.

    The dominant themes of their responses were captured in two pithy quotes: “shit in, shit out” and “more automation, less features”.

    “Shit in, shit out” is an earthier version of the “garbage in, garbage out” adage in computer science. If the data going into a model is unreliable or overly abstract, then the outputs will be shaped by those errors.

    This captured a real concern for many farmers. They didn’t feel they could trust new technologies if they didn’t understand what knowledge and information they had been built with.
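The adage is easy to make concrete. The sketch below is a toy illustration, not drawn from the article: the cattle-weight figures and the mistyped entry are invented purely to show how one bad record drags a simple average-based estimate well away from the truth.

```python
clean_weights = [412, 405, 398, 420, 410]   # hypothetical cattle weights in kg
corrupted = clean_weights + [4100]          # one mistyped entry: 410 became 4100

def average(xs):
    """Mean of a list of measurements."""
    return sum(xs) / len(xs)

print(round(average(clean_weights)))  # 409
print(round(average(corrupted)))      # 1024 -- the single bad record dominates
```

Any model built on that corrupted input inherits the error, which is exactly the farmers’ point: outputs can only be trusted if the data going in can be.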

    A different kind of automation

    On the other hand, “more automation, less features” is what farmers want: technologies that may not have a lot of bells and whistles, but can reliably take a task off their hands.

    Australian farmers have a ready appetite for labour-saving technologies. When human bodies are scarce, as they often are in rural Australia, machines are created to fill the void.

    Windmills, wire fences, and even the iconic Australian sheepdog have been a crucial part of the technological narrative of settler colonial farming. These things are not “autonomous” in the same way as computer-powered vehicles and drones, but they offer similar advantages to farmers.

    What these classic farm technologies have in common is a simplicity that derives from a clarity of purpose. They are the opposite of the “everything apps” that fuel the dreams of many Silicon Valley entrepreneurs.

    “More automation, less features” is in this sense a farmer envisaging a digital product that fits with their image of a useful technology: transparent in its operations, and a reliable replacement for or an addition to human labour.

    The lesson of the Suzuki Sierra Stockman

When speaking with one farmer about favoured technologies of her lifetime, she mentioned the Suzuki Sierra Stockman. These small, no-frills, four-wheel-drive vehicles became something of an icon on Australian sheep and cattle farms through the 1970s, ’80s and ’90s.

    By the 1990s, the Suzuki Sierra Stockman had an iconic status among Australian farmers.
    Turbo_J / Flickr

    Reflecting on her memories of first using the vehicle, the farmer said:

    Once I learnt that I could actually draft cattle out with the Suzuki, that changed everything. You could do exactly what you did on a horse with a vehicle.

It seems unlikely that Suzuki’s engineers in Japan envisaged their little jeep chasing cattle in the paddocks of the Central West of NSW. The Suzuki was in a sense remade by farmers who found innovative uses for it.

    Future technology must be simple, adaptable and reliable

    The combustion engine was a key technological change on farms in the 20th century. Computers may play a similar role in the 21st.

    We are perhaps yet to see a digital product as iconic as wire fences, windmills, sheepdogs and the Suzuki Stockman. Computers are still largely technologies of the office, not the paddock.

    However, this is changing as computers get smaller and are wired into water tanks, soil monitors and in-paddock scales. More data input from these sensors means AI systems have more scope to help farmers make decisions.

    AI may well become a much-loved tool for farmers. But that journey to iconic status will depend as much on how farmers adapt the technology as on how the developers build it. And we can guess at what it will look like: simple, adaptable and reliable.

    This article is based on research conducted by the Foragecaster project, led by AgriWebb and supported by funding from Food Agility CRC Ltd, funded under the Commonwealth Government CRC Program. The CRC Program supports industry-led collaborations between industry, researchers and the community. This project was also supported by funding from Meat and Livestock Australia (MLA).

    ref. ‘Shit in, shit out’: AI is coming for agriculture, but farmers aren’t convinced – https://theconversation.com/shit-in-shit-out-ai-is-coming-for-agriculture-but-farmers-arent-convinced-259997

MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Analysis: Detroit restaurants identified as ‘Black-owned’ on Yelp saw a slight drop in business ratings

    Source: The Conversation – USA – By Matthew Bui, Assistant Professor of Information and Digital Studies, University of Michigan

    Yelp’s Black-owned tag was designed to help business owners like Don Studvent attract more customers. His restaurant closed in 2018 after nine years in business. AP Photo/Carlos Osorio

    When the online review platform Yelp added a “Black-owned” tag in 2020, it boosted the visibility of Black-owned restaurants in Detroit. It also caused their ratings to drop, according to our recent study.

    Both local and nonlocal reviewers who showed awareness of a restaurant’s Black ownership rated restaurants 3.03 stars on average. Those who did not acknowledge Black ownership gave a rating of 3.78 stars on average. The tag seems to have caused the average rating to drop by attracting more reviewers who were aware of Black ownership.
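The mechanism is simple weighted-average arithmetic. The sketch below uses the two averages reported in the article (3.03 stars from ownership-aware reviewers, 3.78 from the rest); the `aware_share` values are illustrative, not figures from the study.

```python
def blended_average(aware_share, aware_avg=3.03, unaware_avg=3.78):
    """Overall star rating as a weighted mix of the two reviewer groups."""
    return aware_share * aware_avg + (1 - aware_share) * unaware_avg

# With no ownership-aware reviewers, the blend sits at the higher average;
# every extra point of aware share pulls the overall rating down.
print(round(blended_average(0.0), 2))   # 3.78
print(round(blended_average(0.25), 2))  # 3.59
```

So the tag need not change how any individual reviews to lower the average; it only has to shift the mix of who shows up to review.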

    Why it matters

    Technology companies often introduce new features and tools to influence user behavior and make their platforms more usable.

    Although Yelp intended to support Black communities with the Black-owned tag, the design intervention was harmful to Black restaurant owners in Detroit because Yelp failed to consider platform and community-based factors that significantly shape user interactions.

    Yelp’s user base is predominantly white, educated and affluent. Making Detroit’s Black-owned restaurants more visible to Yelp users may have amplified cross-cultural interactions and frictions. For example, non-Black users sometimes mentioned “slower” and “rude” service as justifications for lower ratings. Close readings of these reviews hinted at intercultural and communicative clashes.

Even if Black-owned restaurants didn’t select the tag, they appeared in searches for “Black-owned restaurants”, both in 2022 when we conducted the study and as recently as 2025. Businesses can remove the “Black-owned” tag, but Yelp doesn’t provide a way for them to opt out of search results.

    How we did our work

    To examine the local impacts of Yelp’s Black-owned tag, we collected over 250,000 Yelp reviews of Black- and non-Black-owned restaurants in Detroit and Los Angeles.

    We identified Black-owned restaurants through community-sourced lists for Detroit and Los Angeles and then generated a random sample for the non-Black-owned restaurants.

    We then identified reviews that explicitly noted “Black ownership” for closer analysis.

    Detroit’s Black-owned businesses saw a greater loss in business compared with “ownership-unreported” restaurants during the COVID-19 pandemic. This means they also potentially had more to gain from the new tag.

    We found the awareness of Black ownership on Yelp significantly increased following Yelp’s addition of the Black-owned tag in June 2020. A year after the tag was added, reviews in Detroit mentioned Black ownership 4.3% more often than a year before it was rolled out.

    Detroit Black-owned restaurants also saw a small temporary spike in their number of reviews, largely around the time Yelp added the Black-owned tag. At the same time, the restaurants’ average star ratings dropped from 3.91 to 3.88. In contrast, non-Black-owned restaurants’ ratings stayed relatively steady at 3.90.

This metric is an aggregate of all Detroit restaurants’ Yelp reviews over their entire existence, so a 0.03-star rating change is small but significant.

    Even minor changes to star ratings affect the number of diners restaurants attract, their earning potential and the likelihood they will sell out of food.

    Adding obstacles in digital platforms serves to reproduce and amplify inequalities these businesses already face, rather than alleviate them. For example, Black-owned businesses have a harder time getting loans and are relatively underrepresented in Michigan as a whole.

These findings may seem surprising given that Detroit is a majority Black city. However, Black users on Yelp are a minority. Keeping in mind the skewed user base of Yelp, we hypothesize that the lower ratings for businesses featuring a Black-owned tag reflect existing racial and digital divides in the city.

    Generally, our study provides additional evidence that digital interventions are not “one-size-fits-all,” nor is digital visibility inherently positive for all businesses.

    The Research Brief is a short take on interesting academic work.

This article was updated to clarify how labels are added to profiles.

    This research was supported by a research grant from the Ewing Marion Kauffman Foundation.

    Matthew Bui does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    Cameron Moy does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Detroit restaurants identified as ‘Black-owned’ on Yelp saw a slight drop in business ratings – https://theconversation.com/detroit-restaurants-identified-as-black-owned-on-yelp-saw-a-slight-drop-in-business-ratings-256306

    MIL OSI Analysis

  • MIL-OSI Global: Could electric brain stimulation lead to better maths skills?

    Source: The Conversation – UK – By Roi Cohen Kadosh, Professor of Cognitive Neuroscience, University of Surrey

    Triff/Shutterstock

    A painless, non-invasive brain stimulation technique can significantly improve how young adults learn maths, my colleagues and I found in a recent study. In a paper in PLOS Biology, we describe how this might be most helpful for those who are likely to struggle with mathematical learning because of how their brain areas involved in this skill communicate with each other.

    Maths is essential for many jobs, especially in science, technology, engineering and finance. However, a 2016 OECD report suggested that a large proportion of adults in developed countries (24% to 29%) have maths skills no better than a typical seven-year-old. This lack of numeracy can contribute to lower income, poor health, reduced political participation and even diminished trust in others.

Education often widens rather than closes the gap between high and low achievers, a phenomenon known as the Matthew effect. Those who start with an advantage, such as being able to read more words when starting school, tend to pull further ahead. Stronger educational achievement has also been associated with socioeconomic status, higher motivation and greater engagement with material learned during a class.

    Biological factors, such as genes, brain connectivity, and chemical signalling, have been shown in some studies to play a stronger role in learning outcomes than environmental ones. This has been well-documented in different areas, including maths, where differences in biology may explain educational achievements.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    To explore this question, we recruited 72 young adults (18–30 years old) and taught them new maths calculation techniques over five days. Some received a placebo treatment. Others received transcranial random noise stimulation (tRNS), which delivers gentle electrical currents to the brain. It is painless and often imperceptible, unless you focus hard to try and sense it.

It is possible that tRNS causes long-term side effects, but in previous studies my team assessed participants for cognitive side effects and found no evidence of any.

    Could tRNS help people improve their maths skills?
    Prostock-studio/Shutterstock

Participants who received tRNS were randomly assigned to receive it in one of two different brain areas. Some received it over the dorsolateral prefrontal cortex, a region critical for memory, attention and the acquisition of new cognitive skills. Others had tRNS over the posterior parietal cortex, which processes maths information, mainly once learning has been accomplished.

Before and after the training, we also scanned their brains and measured levels of key neurochemicals such as gamma-aminobutyric acid (GABA), which we showed previously, in a 2021 study, to play a role in brain plasticity and learning, including maths.

    Some participants started with weaker connections between the prefrontal and parietal brain regions, a biological profile that is associated with poorer learning. The study results showed these participants made significant gains in learning when they received tRNS over the prefrontal cortex.

    Stimulation helped them catch up with peers who had stronger natural connectivity. This finding shows the critical role of the prefrontal cortex in learning and could help reduce educational inequalities that are grounded in neurobiology.

    How does this work? One explanation lies in a principle called stochastic resonance. This is when a weak signal becomes clearer when a small amount of random noise is added.

    In the brain, tRNS may enhance learning by gently boosting the activity of underperforming neurons, helping them get closer to the point at which they become active and send signals. This is a point known as the “firing threshold”, especially in people whose brain activity is suboptimal for a task like maths learning.
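The stochastic resonance principle can be seen in a minimal threshold model. This is an illustrative sketch, not the study’s method: the signal amplitude, firing threshold and noise level are all invented parameters, and the "neuron" is just a number compared against a cutoff.

```python
import random

def detection_rate(signal_amp, noise_sd, threshold=1.0, trials=10_000):
    """Fraction of trials in which signal + Gaussian noise reaches the threshold."""
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    hits = sum(1 for _ in range(trials)
               if signal_amp + rng.gauss(0, noise_sd) >= threshold)
    return hits / trials

# A weak, subthreshold signal (0.8 against a threshold of 1.0) never fires alone...
print(detection_rate(0.8, 0.0))   # 0.0
# ...but a modest amount of random noise lets it cross the threshold on some trials.
print(detection_rate(0.8, 0.3))   # somewhere around 0.25
```

Too much noise would eventually swamp the signal, which is why the effect depends on adding only a small, well-judged amount, much as tRNS delivers gentle currents rather than strong ones.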

It is important to note what this technique does not do. It does not make the best learners even better. That is what makes this approach promising for bridging gaps, not widening them. This form of brain stimulation helps level the playing field.

Our study focused on healthy, high-performing university students. But in similar studies on children with maths learning disabilities (2017) and with attention-deficit/hyperactivity disorder (2023), my colleagues and I found tRNS seemed to improve their learning and performance in cognitive training.

    I argue our findings could open a new direction in education. The biology of the learner matters, and with advances in knowledge and technology, we can develop tools that act on the brain directly, not just work around it. This could give more people the chance to get the best benefit from education.

    In time, perhaps personalised, brain-based interventions like tRNS could support learners who are being left behind not because of poor teaching or personal circumstances, but because of natural differences in how their brains work.

    Of course, very often education systems aren’t operating to their full potential because of inadequate resources, social disadvantage or systemic barriers. And so any brain-based tools must go hand-in-hand with efforts to tackle these obstacles.

    Roi Cohen Kadosh serves on the scientific advisory boards of Neuroelectrics Inc., and Innosphere Ltd. He is the founder and shareholder of Cognite Neurotechnology Ltd. He received funding from the Wellcome Trust, UKRI, the British Academy, IARPA, DASA, Joy Ventures, the James S McDonnell Foundation, and the European Union. He is affiliated with the University of Surrey.

    ref. Could electric brain stimulation lead to better maths skills? – https://theconversation.com/could-electric-brain-stimulation-lead-to-better-maths-skills-260134

    MIL OSI – Global Reports

  • MIL-Evening Report: Memo to Shane Jones: what if NZ needs more regional government, not less?

    Source: The Conversation (Au and NZ) – By Jeffrey McNeill, Honorary Research Associate, School of People, Environment and Planning, Te Kunenga ki Pūrehuroa – Massey University

    If the headlines are anything to go by, New Zealand’s regional councils are on life support.

    Regional Development Minister Shane Jones recently wondered whether “there’s going to be a compelling case for regional government to continue to exist”. And Prime Minister Christopher Luxon is open to exploring the possibility of scrapping the councils.

    This has all been driven by the realisation that the government’s proposed resource management reforms would essentially gut local authorities of their basic planning and environmental management functions. Various mayors and other interested parties have agreed. While some are circumspect, there’s broad agreement a review is needed.

    At present, each territorial council writes its own city or district plan. Regional councils write a series of thematic plans addressing different environmental issues. All the plans contain the councils’ regulatory “rules” that determine what people can or cannot do.

    Under the coming reforms, the territorial and regional councils of each region would have only a single chapter each within a broader regional spatial plan. Their function would, for the main part, involve tweaking all-embracing national policies and standards.

    Further, all compliance and monitoring – now a predominantly regional council activity – is to be taken over by a national agency (possibly the Environment Protection Authority). This won’t leave much for regional councils to do, compared with their broad remits now.

    How regional government evolved

    In truth, regional councils have been targets since they were created as part of the Labour government’s 1989 local government reform. Carried out in lockstep with the drafting of the Resource Management Act (passed in 1991), this established two levels of local government.

    City and district councils were to be responsible for infrastructure and the built environment. The new regional councils were more opaque, essentially multi-function, special-purpose authorities, recognising that some government actions are bigger than local but smaller than national.

    In the event, they became what in many countries would be thought of as environmental protection agencies. Their boundaries were drawn to capture river catchments, reflecting their catchment board antecedents, which looked after soil erosion and flood management.

    Other functions were drawn from other government departments. Air-quality management came from the old Department of Health. Coastal management was partly inherited from the Ministry of Transport, shared with the Department of Conservation.

    Public transport and civil defence were tacked on, given their cross-territorial scale and lack of anywhere else to put them.

    Parochialism and politics

    All their various functions have meant regional councils determine who gets to use the region’s resources – and who misses out. And political decisions are a surefire way to make enemies.

    For example, the Resource Management Act applied the presumption that no one could discharge any contaminant into water unless expressly allowed by a rule or a resource consent. Regional councils therefore required their territorial councils to upgrade their rubbish dumps and sewage treatment systems.

    Similarly, farmers could no longer simply take water to irrigate or empty cowshed effluent straight into the nearest stream as of right. The necessary infrastructure upgrades were expensive.

    Ironically, these attempts to minimise the immediate impacts of such demands on water users saw urban voters and environmental groups criticise the councils and the government for being too soft on “dirty dairying” and other polluters.

    Parochialism also plays a part, as does the feeling in some rural communities that they’re forgotten by their regions’ cities, where most voters live. The perceived poor handling of events such as last year’s Hawke’s Bay flooding and the 2018 Wellington bus network failure have not helped.

    The government even replaced Environment Canterbury’s elected council with appointed commissioners in 2010 over performance concerns, particularly in water management.

    Yet the regional council model has largely survived intact – with two exceptions. The Nelson-Marlborough Regional Council was replaced by the Nelson City and Marlborough and Tasman District unitary councils in 1992, as a token sacrifice to the conservative wing of the National government, which vehemently opposed the new regions.

The genesis of the Auckland Council super-region can be traced to the 1999–2008 Labour government’s frustration at being unable to get a unified position from the city’s seven councils on where to build a stadium for the 2011 Rugby World Cup. Not everyone is happy with the resulting metro-regional solution.

    Who will be accountable?

    If regional government is indeed put to rest, it will be another phase in this piecemeal evolutionary process. But the new model will still require central government to have a significant regional presence – and commensurate central government funding.

    Central government has, after all, had a regional-scale presence for a long time. Police, the fire service, economic development and social welfare agencies all have their own regional boundaries. Public health and tertiary training and education are also essentially regional.

    All these functions are inherently political. And in many other countries, they are delivered by regional governments. Maybe, once the implications are looked at more closely, leaving regional councils intact will seem the easier and cheaper option. Indeed, there is a counter-argument that we need more regional government, not less.

    The current impulse for local government change – including district council amalgamation – continues an ad hoc process going back more than 30 years. As I have argued previously, the form, function and funding of local government need to be considered together.

    The regional level of administration will not go away. But the overriding question remains: who should speak for and be accountable to their communities for what are ultimately still political decisions, whoever makes them?

    Jeffrey McNeill does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Memo to Shane Jones: what if NZ needs more regional government, not less? – https://theconversation.com/memo-to-shane-jones-what-if-nz-needs-more-regional-government-not-less-259778

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Analysis: Self determination theory: how to use it to boost wellbeing

    Source: The Conversation – UK – By Mark Fabian, Reader of Public Policy, University of Warwick

    Self-determination theory (SDT) is one of the most well-established and powerful approaches to wellbeing in the psychological research literature. Yet it doesn’t seem to have broken through into popular discussions about wellbeing, happiness and self-help. That’s a shame, because it has so much to contribute.

    A foundational idea in self-determination theory is that we have three basic psychological needs: for autonomy, competence and relatedness.

    Autonomy is the need to be in control of your own life rather than being controlled by others. Competence is the need to feel skilful at the tasks one values or needs to thrive. Relatedness refers to feeling loved and cared for, and a sense of belonging to a group that provides social support.

    If our basic psychological needs are met, then we are more likely to experience wellbeing. Symptoms include emotions such as joy, vitality and excitement because we’re doing the things we love, for example. We’ll probably have a sense of meaning and purpose because we live within a community whose culture we value.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Conversely, when our basic needs are thwarted we should see symptoms of illbeing. Anger, frustration and boredom grow when our behaviour is controlled by parents, bureaucrats, bosses or other forces that press our energies towards their ends instead of ours.

    Depression is likely when our competence is overwhelmed by failure. And anxiety is often a social emotion that arises when we’re worried about whether our group cares for us.

    So we should cultivate our basic psychological needs – but how? You need to discover what you want to do with your life, what skills to become competent in, who to relate to and what communities to contribute to.

    Using motivation to find your way

    Here’s where the second foundational idea in SDT can be super helpful, as I explain in my new book, Beyond Happy: How to rethink happiness and find fulfilment. SDT proposes a motivational spectrum running from extrinsic at one end to intrinsic at the other. Finding out where you are on the spectrum for a certain activity or task can help you work out how to be happier.

    The more extrinsically motivated something is, the more self-regulation it requires. For example, when refugees flee their homes due to encroaching war, there is often a large part of them that wants to stay. Willpower is required to act. In contrast, intrinsically motivated behaviour springs spontaneously from us. You don’t need willpower to get stuck into your hobbies.

    Each type of motivation comes with different emotional signals and deciphering them can help us find what values, behaviour and groups suit us.

    The spectrum of motivation according to self-determination theory.
    CC BY-NC

    “Identified” motivation, for example, sits between extrinsic and intrinsic motivation. It occurs when we value an activity but don’t inherently enjoy it. That’s why success in identified behaviour is usually met with a feeling of accomplishment or the warm and fuzzy feeling you get when you do the right thing, like going a bit out of your way to put your rubbish in a bin.

    In contrast, “introjected” motivation is where you value something contingent on the behaviour rather than the behaviour itself. Many of us loathe the gym, for example, but we want to be healthy. A child might not want to practise the cello, but they do want their parent’s approval.

    Because introjection is relatively extrinsic, it requires willpower, and probably a bit more of it than for identified behaviour. Completion of an introjected activity is often met with relief rather than accomplishment and little desire to keep going.

    Sometimes things that are dependent on introjected behaviour can make us unhappy. In teen dramas, for example, the protagonist often does something because they want to be popular, but when they win the approval of the cool kids they realise those kids are mean and lame.

    Why money, power and status won’t make you happy

    If that’s how you feel, you’ve found something inauthentic to you. Then there’s very little chance the introjected activity will lead to your wellbeing. In fact, SDT has identified some common extrinsic values. You’ll recognise them immediately: popularity, fame, status, power, wealth and success.

    They’re extrinsic because they’re not peculiar to you. If you get rich doing the thing you love, that’s great, but many of us never even think about what we love because we’re too busy thinking about how to get rich.

    Extrinsic pursuits are ultimately bad for our wellbeing because they’re all poor substitutes for basic psychological needs. When our autonomy is thwarted by strict parents or disciplinarian teachers, we crave power. When we don’t know what sort of life to build and thus what skills we need competence in, we adopt other people’s notions of success instead.

    Extrinsic pursuits often emerge from a wounded place and a defensive reaction. When we’re lonely or feel unloved for who we are, for example, we might compensate by seeking fame or popularity. We’ll start talking about our accomplishments on LinkedIn, for example.

    The problem is that the people this attracts don’t value you specifically, only your power, status or money. You sense that if you ever lost those things, you would lose these people too.

    SDT can help you learn to listen to your emotions and interpret your motivations instead, and use them to guide you towards the values, activities and people that are right for you.

    For example, if you feel joyful and fulfilled when you solve a complex puzzle, perhaps consider a career that involves that activity, such as law or engineering. If such puzzles feel like torture, that’s a signal too. Perhaps something more relational or intuitive, like social work, would work better.

    Pursuing things that are authentic to you will nourish your sense of autonomy. You’ll build competence in those activities because they’re intrinsically motivated. And you’ll form deep relationships with the people you encounter because you genuinely like each other. Wellbeing will follow.

    Mark Fabian does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Self determination theory: how to use it to boost wellbeing – https://theconversation.com/self-determination-theory-how-to-use-it-to-boost-wellbeing-259829

    MIL OSI Analysis

  • MIL-OSI Analysis: Dune director Denis Villeneuve will helm the next Bond – but what will his 007 be like?

    Source: The Conversation – UK – By William Proctor, Associate Professor in Popular Culture, Bournemouth University

    Wiki Commons/Canva, CC BY-SA

    The James Bond franchise has lain dormant for four years, since Daniel Craig’s swansong as 007, No Time to Die. A legal quarrel between Bond’s producers, Michael G. Wilson and Barbara Broccoli, and Amazon Studios resulted in a stalemate, and production on a new Bond film has remained in limbo.

    Nevertheless, speculation has been rife about which actor will next play Ian Fleming’s super-spy (the latest actor to be associated with the role is former Spider-Man Tom Holland).

    When news surfaced in February 2025 that Amazon MGM (Amazon purchased MGM in 2021) had effectively become Bond’s new custodians, critics and audiences alike expressed concern – to put it lightly. Many feared that Jeff Bezos was more interested in stimulating Amazon Prime membership by driving multiple content streams through spin-offs and merchandising than protecting Fleming’s legacy.

    However, last week’s announcement that Denis Villeneuve has been appointed as the director of the 26th Bond film is a savvy move. It’s a declaration of intent that seeks to promote and market Amazon MGM as safe harbour for the Bond franchise.


    Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


    The announcement positions the next era of Bond as a prestigious exercise helmed by “a cinematic master”, not a journeyman director. Villeneuve was previously offered the opportunity to direct No Time to Die, but turned the role down because of his commitment to the Dune films.

    By appointing Villeneuve, Amazon has managed to radically shift the public debate. Villeneuve is “much more than a technical director”, wrote Guardian film critic Peter Bradshaw. “He is an alpha-grade auteur in the same league as Christopher Nolan.”

    Other critics have pointed to his rare ability to “combine blockbuster momentum (and ticket sales) with the finer, more nuanced sensibilities of a filmmaker always concerned with slowing down, homing in on character and theme”.

    Although Sam Mendes, director of Skyfall (2012) and Spectre (2015), came with artistic status, Villeneuve is something different – a marquee name frequently described as an auteur.

    Villeneuve talks about his love for Bond.

    Since his transition from making mostly low-key independent films in his native Canada to his arrival in Hollywood with Prisoners (2013), starring Hugh Jackman and Jake Gyllenhaal, Villeneuve has amassed an impressively eclectic filmography.

    He has proven that he is as comfortable shooting realistic crime thrillers (Sicario, 2015) and surrealist cinema that David Lynch would be proud of (Enemy, 2013), as he is with science fiction (Arrival, 2016, Blade Runner 2049, 2017, and the Dune films, 2021 and 2024).

    Villeneuve’s Bond

    Although Sicario may be the closest in terms of genre to the Bond films, establishing Villeneuve as a director who can expertly shoot action sequences, it is nevertheless difficult at this stage to conceptualise what a Villeneuve Bond film might be like.

    Some critics have suggested that the director’s cinematic resume, eclectic as it is, might not bode well for Bond. The Hollywood Reporter’s film critic Benjamin Svetkey, for instance, worries that Villeneuve’s “lugubrious, meditative filmmaking” is sorely lacking in humour – which could be fatal for 007. “A certain amount of wit and winking is critical to the character,” he claims.

    It is early days for Amazon MGM and Villeneuve. As yet, there is reportedly no treatment, no script, no writer and – more pointedly – no actor appointed to the role. Whatever happens, the 26th Bond film is likely to be a hard reboot that wipes the slate clean (again) after the fate of 007 in No Time to Die.

    Villeneuve’s choice for Bond is unlikely to be as cartoonish as Pierce Brosnan’s iteration.

    Although Villeneuve has said that he intends to honour tradition and that Bond is “sacred territory” for him, Bond’s capacity for revision and regeneration has been key to the franchise’s longevity.

    As sociologists Tony Bennett and Janet Woollacott argue in their seminal study, Bond and Beyond, the figure of Bond has over the past six decades “been differently constructed at different moments,” with “different sets of ideological and cultural concerns”.

    So what kind of Bond film Villeneuve ends up directing largely depends on the story and whichever actor is anointed as the next James Bond. It is doubtful that audiences will expect a campy pantomime Bond like Roger Moore, or a Bond with an invisible car, like Pierce Brosnan in the cartoonish Die Another Day (2002). Villeneuve’s choice of Casino Royale as his favourite 007 may provide a clue. But it is also unlikely that the director will be satisfied with slavishly repeating the past.

    William Proctor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Dune director Denis Villeneuve will helm the next Bond – but what will his 007 be like? – https://theconversation.com/dune-director-denis-villeneuve-will-helm-the-next-bond-but-what-will-his-007-be-like-260140

    MIL OSI Analysis

  • MIL-OSI Global: Why frequent nightmares may shorten your life by years

    Source: The Conversation – UK – By Timothy Hearn, Senior Lecturer in Bioinformatics, Anglia Ruskin University

    Lightfield Studios/Shutterstock.com

    Waking up from a nightmare can leave your heart pounding, but the effects may reach far beyond a restless night. Adults who suffered bad dreams every week were almost three times more likely to die before age 75 than people who rarely had them.

    This alarming conclusion – which is yet to be peer reviewed – comes from researchers who combined data from four large long-term studies in the US, following more than 4,000 people between the ages of 26 and 74. At the beginning, participants reported how often nightmares disrupted their sleep. Over the next 18 years, the researchers kept track of how many participants died prematurely – 227 in total.

    Even after considering common risk factors like age, sex, mental health, smoking and weight, people who had nightmares every week were still found to be nearly three times more likely to die prematurely – about the same risk as heavy smoking.

    The team also examined “epigenetic clocks” – chemical marks on DNA that act as biological mileage counters. People haunted by frequent nightmares were biologically older than their birth certificates suggested, across all three clocks used (DunedinPACE, GrimAge and PhenoAge).

    The science behind the silent scream

    Faster ageing accounted for about 39% of the link between nightmares and early death, implying that whatever is driving the bad dreams is simultaneously driving the body’s cells towards the finish line.

    How might a scream you never utter leave a mark on your genome? Nightmares happen during so-called rapid-eye-movement sleep when the brain is highly active but muscles are paralysed. The sudden surge of adrenaline, cortisol and other fight-or-flight chemicals can be as strong as anything experienced while awake. If that alarm bell rings night after night, the stress response may stay partially switched on throughout the day.

    Continuous stress takes its toll on the body. It triggers inflammation, raises blood pressure and speeds up the ageing process by wearing down the protective tips of our chromosomes.

    On top of that, being jolted awake by nightmares disrupts deep sleep, the crucial time when the body repairs itself and clears out waste at the cellular level. Together, these two effects – constant stress and poor sleep – may be the main reasons the body seems to age faster.

    Your brain clears out waste when you sleep.
    Teeradej/Shutterstock.com

    The idea that disturbing dreams foreshadow poor health is not entirely new. Earlier studies have shown that adults tormented by weekly nightmares are more likely to develop dementia and Parkinson’s disease, years before any daytime symptoms appear.

    Growing evidence suggests that the brain areas involved in dreaming are also those affected by brain diseases, so frequent nightmares might be an early warning sign of neurological problems.

    Nightmares are also surprisingly common. Roughly 5% of adults report at least one each week and another 12.5% experience them monthly.

    Because they are both frequent and treatable, the new findings elevate bad dreams from a spooky nuisance to a potential public health target. Cognitive behavioural therapy for insomnia, imagery-rehearsal therapy – where sufferers rewrite the ending of a recurrent nightmare while awake – and simple steps such as keeping bedrooms cool, dark and screen free have all been shown to curb nightmare frequency.

    Before jumping to conclusions, there are a few important things to keep in mind. The study used people’s own reports of their dreams, which can make it hard to tell the difference between a typical bad dream and a true nightmare. Also, most of the people in the study were white Americans, so the findings might not apply to everyone.

    And biological age was measured only once, so we cannot yet say whether treating nightmares slows the clock. Crucially, the work was presented as a conference abstract and has not yet run the gauntlet of peer review.

    Despite these limitations, the study has important strengths that make it worth taking seriously. The researchers used multiple groups of participants, followed them for many years and relied on official death records rather than self-reported data. This means we can’t simply dismiss the findings as a statistical fluke.

    If other research teams can replicate these results, doctors might start asking patients about their nightmares during routine check-ups – alongside taking blood pressure and checking cholesterol levels.

    Therapies that tame frightening dreams are inexpensive, non-invasive and already available. Scaling them could offer a rare chance to add years to life while improving the quality of the hours we spend asleep.

    Timothy Hearn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why frequent nightmares may shorten your life by years – https://theconversation.com/why-frequent-nightmares-may-shorten-your-life-by-years-260008

    MIL OSI – Global Reports

  • MIL-OSI Global: Where does the UK most need more public EV chargers?

    Source: The Conversation – UK – By Labib Azzouz, Research Associate in Transport and Energy Innovation, University of Oxford

    Electric vehicle chargers at a motorway service station in Grantham, England. Angus Reid/Shutterstock

    The automotive and electric vehicle (EV) industry has repeatedly insisted that the UK needs more EV chargers to help motorists make the switch from conventional fossil-fuel-burning cars.

    The Labour government has announced £400 million to install EV chargers, mainly on streets in poorer residential neighbourhoods, in place of the Conservatives’ £950 million rapid charging fund that was directed at installing chargers in motorway service stations.

    Does it matter where these chargers are – and who pays to build them?

    The short answer is yes, it does matter. Our research conducted at motorway and local EV charging stations across England – including those located in residential areas, high streets and community centres – indicates that these two types of infrastructure serve distinct groups of users and fulfil different purposes.

    Suggesting that one can substitute for the other risks sending mixed signals to both the industry and the driving public.

    We found that motorway charging stations tend to cater to wealthier men, who are more likely to own premium EVs with long-range batteries and better performance. Many of these drivers have access to home chargers, so their use of public chargers is only for occasional, long-distance travel for business, leisure, or holidays – trips that require chargers along motorways.

    Convenience and charging speed are often more important than the price of public charging, particularly when the travel costs of these drivers are covered by their employers.

    Local public charging stations, on the other hand, serve more diverse groups. These include drivers from lower-income households who are more likely to own older and smaller EVs with shorter ranges. Access to home charging is often limited, especially for people living in flats or urban areas without driveways, garages or off-street parking.

    Not everyone can plug in at home.
    Andersen EV/Shutterstock

    Local chargers are also vital for taxi and delivery drivers who depend on their vehicles for work and make frequent short trips throughout the day. There are many professional drivers without access to workplace charging stations who need alternative local provision – something the Conservative government recognised in its 2022 EV charging strategy.

    Ultimately, the transition to EVs should take a balanced approach that carefully considers social equity, economic viability and environmental impact.

    Different locations serve different drivers

    Motorway charging stations are commercially attractive to private investors, such as energy companies, specialist charging providers and car manufacturers, despite their higher upfront costs and complex requirements.

    This is because service stations can set premium prices, generating greater short-term revenue: alternatives are limited, demand for rapid charging is high (especially among long-distance travellers), and drivers there are willing to pay for speed and convenience – unlike in more price-sensitive neighbourhood settings.

    Unsurprisingly, the government found that the rapid deployment of motorway chargers in recent years has been largely driven by the private sector. Our research highlighted that these revenues could be enhanced by a broader range of retail, dining and relaxation amenities, turning the time waiting for a car to charge into a more productive and pleasurable experience.

    Residential charging stations may not offer high profits per charge, but they typically require lower capital investment and benefit from consistent and predictable use. They are also suited to measures for reducing strain on the grid and balancing energy supply and demand.

    These measures include tariffs that make it cheaper to charge EVs during off-peak hours, or technology that allows cars to feed electricity stored in batteries back into the grid. These features make them appropriate for public funding, where return on investment is measured not just in profit but in value for the public.

    Considering that local EV charging serves those who do not have access to home charging and who drive for a living, the case for public funding is even stronger. These sorts of chargers make switching to an EV easier for different groups.

    For example, safe and carefully placed public chargers could help more women switch to EVs – although our research suggests that, while “careful placement” might refer to residential areas, it doesn’t necessarily mean on streets. Well-lit car parks and community destinations are sometimes considered safer options.

    Charging points outside a community centre in the Outer Hebrides, Scotland.
    AlanMorris/Shutterstock

    By helping EV drivers make frequent short trips, local chargers can also significantly reduce urban air pollution, emissions and noise, contributing to more liveable, healthier cities.

    That said, motorway charging stations and those near key transport corridors still play a crucial role in a comprehensive national network, and public funding may be required in more peripheral and rural areas of the UK where installations lag and commercial interest is limited.

    While long-distance trips are less frequent than short ones, they account for a disproportionately large share of energy use and emissions. Switching such trips to electric will be essential to reaching net zero goals.

    It seems reasonable to prioritise public investment in local EV charging infrastructure to support a fairer EV transition, but this should not be limited to on-street chargers. Investment is needed in residential and non-residential areas, public car parks, community centres and workplaces.

    Different types of EV charging are not interchangeable – all are needed to support the switch.


    Don’t have time to read about climate change as much as you’d like?

    Get a weekly roundup in your inbox instead. Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. Join the 45,000+ readers who’ve subscribed so far.


    Labib Azzouz has received funding from the UK Research and Innovation via the UK Energy Research Centre and Innovate UK as part of the Energy Superhub Oxford (ESO) project.

    Hannah Budnitz receives government funding from UK Research and Innovation grants via the Economic and Social Research Council and the Engineering and Physical Sciences Research Council. She has also previously received funding from Innovate UK and the Department for Transport.

    ref. Where does the UK most need more public EV chargers? – https://theconversation.com/where-does-the-uk-most-need-more-public-ev-chargers-259623

    MIL OSI – Global Reports

  • MIL-OSI Global: The Bear season 4: this meaty restaurant drama is still an enticing bingeable prospect

    Source: The Conversation – UK – By Jane Steventon, Course Leader, BA (Hons) Screenwriting; Deputy Course Leader & Senior Lecturer, BA (Hons) Film Production, University of Portsmouth

    Take a soupçon of identity crisis, a pinch of perfectionism and a scoop of burnout, mix thoroughly with a large measure of fraternal grief, sear over a hot grill and voilà! You have The Bear, a perfectly blended drama about a chef on the edge, driven by relentless ambition and exacting standards as he turns his family’s humble sandwich shop into a fine-dining restaurant.

    This intoxicating family drama was eaten up by critics and audiences alike in 2022, its first season garnering a rare perfect 100% score on Rotten Tomatoes, the subsequent two reaching scores of 99% and 89% respectively. It’s certainly a hard act to follow for season four.

    The first ten minutes of The Bear’s pilot episode thrillingly defined what was to come in high-octane style and scene-setting detail. The first season delivered a clever mix of authentic dialogue and setting, relatable family dysfunction and dynamic production style.

    Showstopping scenes of stressful kitchen heat were served up alongside a delectable range of new and established talent in the form of Jeremy Allen White (Carmy), Ebon Moss-Bachrach (Richie), Ayo Edebiri (Sydney) and Oliver Platt (Cicero/Uncle Jimmy).


    In charge is showrunner Christopher Storer, who came up with the concept after being inspired by his friend’s father Chris Zucchero, the owner of Chicago sandwich joint Mr Beef.

    With his professional chef sister also serving as a consultant, Storer succeeded in creating a deliciously authentic and intensely real drama. Buoyed along the way by 21 Emmys and five Golden Globes, Storer also watched his cast ascend, the tortured-soul performance of White garnering particular praise.

    Testing the parameters of a long-running show, Storer focused on the entire cast of characters and their backstories, a successful tactic used by shows such as Orange is the New Black to keep the drama – largely confined to a kitchen set – fresh.

    Pulling in Hollywood die-hards Oliver Platt and Jamie Lee Curtis for familial tough-love roles further enriched the mix, often using a non-chronological timeframe to go back to moments of family turbulence and tension. This made for three-dimensional characters and enabled evolution around difficult themes such as the aftermath of suicide and generational trauma.

    The Bear has come a long way in three seasons, starting with a spit-and-sawdust establishment serving up lunchtime beef sandwiches to its working customers.

    Carmy’s longing for the high-end restaurant of his dreams hurtled forward in season two, as he sent his core crew off in different directions to hone their skills and help form his vision. Season three showed a restaurant trying to win success but plagued with challenges, with exhausting familial tensions embedded in every episode.

    Several themes play out in The Bear: love, family, loyalty, community and purpose. The relationship between Carmy and cousin Richie (not a real cousin, but a term of endearment) is key to linking past and future. Richie provides some of the highlights of comedy and pathos as he spits truth bombs, most frequently at talented sous-chef Syd.

    It is Syd who follows Carmy’s aspirations for gastronomic perfection but can’t abide the lack of order or the intense highs and lows that inevitably go hand in hand with his talent. And this is one central question to consider for the latest series: just how long will the audience remain loyal to Carmy and his endless quest for artistry in a high-failure rate industry?

    It’s all in the sauce

    Storer begins season four with a ghost. Carmy and his dead brother Mikey (Jon Bernthal) banter in a seven-minute scene, with Carmy ultimately confiding the dream of a restaurant as Mikey watches him make tomato sauce (“too much garlic”). The tomatoes resonate: Mikey left behind money hidden in tomato cans that ended up saving Carmy’s sanity and his dream of a proper restaurant.

    Just as oranges represent death to Francis Ford Coppola, Storer uses tomatoes to underscore themes; here they symbolise familial loyalty and history, a solid base to a meal, a core ingredient. Mikey was one of the core ingredients in Carmy’s life, and now he’s gone.

    Carmy awakens to a rerun of Groundhog Day on late-night TV and fittingly, we too are back – same dish, now more seasoned and enriched with its core ingredients and ready to serve up a big bowlful of family, love, ambition, strife and grief.

    The episode furthers the theme of loyalty as the restaurant receives The Tribune’s review – the cliffhanger of the season three finale. Naturally, Storer doesn’t let up – the food critic highlights “dissonance” and Carmy is back in emotional chaos, with Syd urging him to lighten up and lose the misery.

    In truth, this series could do with more humour in the mix; the teasing, frivolous banter of season one has been somewhat lost in the seasons that followed.

    Storer ramps up the tension, setting several ticking clocks in place: chiefly, Uncle Jimmy’s deadline for the business to turn a profit, literally installed as a countdown on a digital clock in the kitchen. Then Syd’s headhunter calls, offering her the autonomy she craves and an exit strategy from the chaos.

    And Carmy raises the stakes with an intention to gain a Michelin star. Thus a heroic journey is set in place for the whole cast, with future battles both internal and external laid out.

    There’s too much going on at this feast and the feeling of being stuffed full of story is tangible by the end of the first episode. Still, with a season lining up more emotional turbulence steered by White, more celebrity cameos (Brie Larson and Rob Reiner are lined up) and the excellent cinematography and performances that we have come to expect, Storer stirs his secret sauce.

    The Bear still offers an entertaining and enticing proposition, bingeable and mostly satisfying.

    Jane Steventon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The Bear season 4: this meaty restaurant drama is still an enticing bingeable prospect – https://theconversation.com/the-bear-season-4-this-meaty-restaurant-drama-is-still-an-enticing-bingeable-prospect-260143

    MIL OSI – Global Reports

  • MIL-OSI Global: Five ways to avoid illness like the Lionesses

    Source: The Conversation – UK – By Samantha Abbott, Doctoral Researcher, Department of Sport Science, Nottingham Trent University

    England’s Beth Mead cheering on the podium after the win v Germany in the Women’s European Championship final, 2022. photographyjp/Shutterstock

    Think back to the last time you had a cold or the flu. Now imagine stepping onto the pitch for a European Cup final, while battling through those symptoms. For elite athletes, illness can strike at the worst possible time – and it could hit women harder.

    Research suggests that female athletes are more susceptible to cold and flu-like illnesses than their male counterparts. For England women’s national football team, the Lionesses, this risk only increases before a major tournament like the Euros.

    Close contact, shared kit, disrupted sleep and travel all add up to a perfect storm for infection. But targeted nutritional strategies, alongside good sleep and hand hygiene, can offer a crucial line of defence.


    1. Fuel first: energy matters for immunity

    Before anything else, players need to eat enough. Energy supports both performance and immune function. In fact, female athletes who didn’t meet their energy needs in the run-up to the 2016 Olympics were four times more likely to report cold or flu symptoms.

    This is especially relevant in women’s football, where low energy and carbohydrate intake has been documented among professional players and recreational players too. Regular meals and snacks that include carbohydrate-rich foods like oats, bread and pasta, especially around training, are essential to meet energy demands and support immune health.

    2. Eat the rainbow

    Athletes are often encouraged to go beyond the public’s five-a-day fruit and veg target, aiming instead for eight to ten portions daily. Why? Because colourful plant foods are packed with vitamins, minerals, antioxidants and anti-inflammatory compounds: all vital for immunity.






    Each colour offers unique benefits. For instance, red fruits and vegetables, such as tomatoes, contain lycopene, a powerful antioxidant. Orange produce like carrots get their colour from beta-carotene, which is converted by the body into vitamin A – a key vitamin for immune health.

    Eating a rainbow of colours means getting a wide range of nutrients.

    3. Vitamin C: powerful but timing matters

    Vitamin C has long been linked with reducing the risk and severity of cold and flu symptoms. One Cochrane review found that regular vitamin C intake halved the risk of illness in physically active people.

    However, more isn’t always better. Long-term use of high-dose vitamin C supplements could blunt training adaptations – the structural and functional changes the body undergoes in response to repeated exercise – because of its anti-inflammatory effects. That’s why vitamin C is most effective when used strategically, such as during high-risk periods like travel or intense competition. Good food sources include oranges, kiwis, blackcurrants, red and yellow peppers, broccoli and even potatoes.

    4. Gut health supports immune health

    Around 70% of the immune system is located in the gut, making gut health a key player in illness prevention. This is where probiotics (live bacteria) and prebiotics (which feed those bacteria) come in.

    Probiotics, found in fermented foods like kefir and kimchi or in supplement form, have been shown to reduce the duration and severity of respiratory illnesses in athletes. Prebiotics have similarly shown promise. In one study, a 24-week prebiotic intervention in elite rugby players reduced the duration of cold and flu symptoms by over two days.






    In the build-up to the Euros, including probiotic-rich foods in their diet or taking a daily prebiotic and probiotic supplement may help players stay healthy and return to training faster if they do get ill.

    5. Zinc lozenges: first aid for a sore throat

    If cold-like symptoms do appear, zinc lozenges can offer fast-acting relief. Zinc has antiviral, antioxidant and anti-inflammatory properties. When zinc is delivered as a lozenge, it acts directly in the throat, where many infections begin. Taken within 24 hours of symptoms starting, zinc lozenges could shorten illness duration by a third.

    But caution is key. Long-term use of high-dose zinc supplements can actually suppress immune function. Zinc lozenges should only be used short-term at symptom onset, not as a daily supplement.

    Staying match-ready during major tournaments means more than just tactical drills and fitness. Nutrition is a powerful ally in illness prevention, especially for women’s teams like the Lionesses. From fuelling adequately to supporting gut health and knowing when to supplement, these nutritional strategies can make the difference between sitting on the bench and bringing a trophy home.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Five ways to avoid illness like the Lionesses – https://theconversation.com/five-ways-to-avoid-illness-like-the-lionesses-259302

    MIL OSI – Global Reports

  • MIL-OSI Global: Why is Islamophobia so hard to define?

    Source: The Conversation – UK – By Julian Hargreaves, Lecturer, Department of Sociology and Criminology, City St George’s, University of London

    The UK government wants a new definition of Islamophobia and has created a working group of politicians, academics and independent experts to provide one. It aims to settle long-running political debates over the term.

    The concept of Islamophobia describes anti-Muslim and anti-Islamic prejudices and their impact on Muslim communities. The term became familiar in the UK following publication of the Runnymede Trust report, Islamophobia: A Challenge for Us All, in 1997.

    The concept is now used to discuss negative public opinion towards Muslims and Islam, biased media reporting, verbal and physical assaults and online attacks. It is also used when discussing social and economic inequalities, discrimination within various institutional settings and unfair treatment from the police and security services.

    Previous definitions have been controversial, failing to unite politicians, academics and British Muslims, and leading to charged debates over free speech.

    Some academics have argued that the word “Islamophobia” – which suggests a phobia or fear of Islam – is an inaccurate label for a prejudice which often targets skin colour, ethnicity and culture.

    Many Muslim-led organisations accept that the term is imperfect and interchangeable with others such as “anti-Muslim hatred”. However, they maintain the term “Islamophobia” is needed to focus attention on a growing problem.

    Definitions and controversy

    The 1997 Runnymede Trust report defined Islamophobia as an “unfounded hostility towards Islam”, “the practical consequences of such hostility in unfair discrimination against Muslim individuals and communities” and “the exclusion of Muslims from mainstream political and social affairs”.

    The Runnymede Trust revised its definition in a follow-up report published in 2017. The report defines Islamophobia in two ways.

    The first is “anti-Muslim racism”. A longer, second version amends the United Nations’ 1965 definition of “racial discrimination”. These revised definitions are important because they re-framed Islamophobia as a product of racist thinking rather than religious prejudices.

    Other attempts to define Islamophobia include British academic Chris Allen’s 200-word definition. Allen defined it as an ideology like racism that spreads negative views of Muslims and Islam, influencing social attitudes and leading to discrimination and violence. US political scientist Erik Bleich defined it more succinctly as “indiscriminate negative attitudes or emotions directed at Islam or Muslims”.

    In 2018, the all-party parliamentary group on British Muslims published another definition linking Islamophobia to racism. According to the APPG, “Islamophobia is rooted in racism and is a type of racism that targets expressions of Muslimness or perceived Muslimness.” The APPG called for its definition to be legally binding.

    The APPG definition was adopted by various organisations including local authorities, UK universities and the Labour party while in opposition. But it was rejected by the then Conservative government and later by the current Labour government, which argued it was seeking “a more integrated and cohesive approach”.

    This lack of consensus over previous definitions led Angela Rayner, the deputy prime minister, to announce the working group in March 2025. The group’s aim is to provide a new definition of “anti-Muslim hatred and Islamophobia” which is “reflective of a wide range of perspectives and priorities for British Muslims”.

    Former Conservative MP and attorney general Dominic Grieve was appointed to chair the group, evidence of Labour’s ambition to build consensus.

    A march in London against Islamophobia, racism and anti-migrant views.
    Shutterstock

    Some are concerned that use of the term “Islamophobia”, and particularly the APPG definition, stifles legitimate criticism of Islam. Free speech campaigners have argued that it is “blasphemy via the back door”.

    The centre-right thinktank Policy Exchange published a report claiming that the term is used in bad faith to divert attention away from serious social problems within some Muslim communities – specifically, discussion of the grooming gangs scandal.

    These debates bear resemblance to those surrounding the term “antisemitism” and the adoption of a definition proposed by the International Holocaust Remembrance Alliance. The term is widely accepted, although critics have argued this specific definition stifles legitimate criticism of the Israeli state.

    A new approach

    A new definition of “Islamophobia” must balance the protection of Muslim communities and freedoms of religion, expression and assembly for all Muslims and non-Muslims in the UK. It must be clear enough for everyday use, specific enough for academic and policy research, and capable of generating support across the UK’s diverse Muslim population.

    A proposed definition by an emerging thought leader on British Islam addresses these challenges. Mamnun Khan is a writer whose work explores the social integration of Muslims in contemporary British society. Khan is associated with Equi, a thinktank which describes its work as “drawing on Muslim insight”. Other members of Equi are members of the government’s working group.

    Khan sets out three tests that a definition must pass, based on Islamic law, moral teachings within Islam and other more universal values. First, a definition must serve the public interest. Second, it must be just and balanced and preserve freedom of expression. Third, it must uphold the dignity of Muslim communities.

    For Khan, “Islamophobia, also known as anti-Muslim hatred, is an irrational fear, hostility, or prejudice toward Muslims that leads to discrimination, unequal treatment, exclusion, social and political marginalisation, or violence.”

    Khan’s definition has many good qualities. It brings together stronger elements of previous definitions – for example, the separation of negative attitudes and outcomes – without being weakened by jargon or strong political ideology. On the other hand, some social scientists may question whether defining something as “irrational” is a matter of preference rather than academic research.

    The working group also needs to decide whether Islamophobia and anti-Muslim hatred are closely related or exactly the same. Failure to do so will cause confusion and inconsistency among those wishing to apply the term precisely. Regardless, Khan’s example is a strong step in the right direction. A better definition of Islamophobia is needed, and is now within reach.

    Julian Hargreaves is an Affiliated Researcher at the Prince Alwaleed bin Talal Centre of Islamic Studies, University of Cambridge.

    ref. Why is Islamophobia so hard to define? – https://theconversation.com/why-is-islamophobia-so-hard-to-define-258522

    MIL OSI – Global Reports

  • MIL-OSI Global: Toxic fungus from King Tutankhamun’s tomb yields cancer-fighting compounds – new study

    Source: The Conversation – UK – By Justin Stebbing, Professor of Biomedical Sciences, Anglia Ruskin University

    Miro Varcek / Shutterstock.com

    In November 1922, archaeologist Howard Carter peered through a small hole into the sealed tomb of King Tutankhamun. When asked if he could see anything, he replied: “Yes, wonderful things.” Within months, however, Carter’s financial backer Lord Carnarvon was dead from a mysterious illness. Over the following years, several other members of the excavation team would meet similar fates, fuelling legends of the “pharaoh’s curse” that have captivated the public imagination for just over a century.

    For decades, these mysterious deaths were attributed to supernatural forces. But modern science has revealed a more likely culprit: a toxic fungus known as Aspergillus flavus. Now, in an unexpected twist, this same deadly organism is being transformed into a powerful new weapon in the fight against cancer.

    Aspergillus flavus is a common mould found in soil, decaying vegetation and stored grains. It is infamous for its ability to survive in harsh environments, including the sealed chambers of ancient tombs, where it can lie dormant for thousands of years.

    When disturbed, the fungus releases spores that can cause severe respiratory infections, particularly in people with weakened immune systems. This may explain the so-called “curse” of King Tutankhamun and similar incidents, such as the deaths of several scientists who entered the tomb of Casimir IV in Poland in the 1970s. In both cases, investigations later found that A. flavus was present, and its toxins were probably responsible for the illnesses and deaths.

    Despite its deadly reputation, Aspergillus flavus is now at the centre of a remarkable scientific finding. Researchers at the University of Pennsylvania have discovered that this fungus produces a unique class of molecules with the potential to fight cancer.

    These molecules belong to a group called ribosomally synthesised and post-translationally modified peptides, or RiPPs. RiPPs are made by the ribosome – the cell’s protein factory – and are later chemically altered to enhance their function.

    While thousands of RiPPs have been identified in bacteria, only a handful have been found in fungi – until now.

    The process of finding these fungal RiPPs was far from simple. The research team screened a dozen different Aspergillus strains, searching for chemical clues that might indicate the presence of these promising molecules. Aspergillus flavus quickly stood out as a prime candidate.

    The researchers compared the chemicals from different fungal strains to known RiPP compounds and found promising matches. To confirm their discovery, they switched off the relevant genes and, sure enough, the target chemicals vanished, proving they had found the source.

    Purifying these chemicals proved to be a significant challenge. However, this complexity is also what gives fungal RiPPs their remarkable biological activity.

    The team eventually succeeded in isolating four different RiPPs from Aspergillus flavus. These molecules shared a unique structure of interlocking rings, a feature that had never been described before. The researchers named these new compounds “asperigimycins”, after the fungus in which they were found.

    The next step was to test these asperigimycins against human cancer cells. In some cases, they stopped the growth of cancer cells, suggesting that asperigimycins could one day become a new treatment for certain types of cancer.

    The team also worked out how these chemicals get inside cancer cells. This discovery is significant because many chemicals, like asperigimycins, have medicinal properties but struggle to enter cells in large enough quantities to be useful. Knowing that particular fats (lipids) can enhance this process gives scientists a new tool for drug development.

    Further experiments revealed that asperigimycins probably disrupt the process of cell division in cancer cells. Cancer cells divide uncontrollably, and these compounds appear to block the formation of microtubules, the scaffolding inside cells that are essential for cell division.

    Tremendous untapped potential

    This disruption is specific to certain types of cells, so this may in turn reduce the risk of side-effects. But the discovery of asperigimycins is just the beginning. The researchers also identified similar clusters of genes in other fungi, suggesting that many more fungal RiPPs remain to be discovered.

    Almost all the fungal RiPPs found so far have strong biological activity, making this an area with tremendous untapped potential. The next step is to test asperigimycins in other systems and models, with the hope of eventually moving to human clinical trials. If successful, these molecules could join the ranks of other fungal-derived medicines, such as penicillin, which revolutionised modern medicine.

    The story of Aspergillus flavus is a powerful example of how nature can be both a source of danger and a wellspring of healing. For centuries, this fungus was feared as a silent killer lurking in ancient tombs, responsible for mysterious deaths and the legend of the pharaoh’s curse. Today, scientists are turning that fear into hope, harnessing the same deadly spores to create life-saving medicines.

    This transformation, from curse to cure, highlights the importance of continued exploration and innovation in the natural world. Nature has in fact provided us with an incredible pharmacy, filled with compounds that can heal as well as harm. It is up to scientists and engineers to uncover these secrets, using the latest technologies to identify, modify and test new molecules for their potential to treat disease.

    The discovery of asperigimycins is a reminder that even the most unlikely sources – such as a toxic tomb fungus – can hold the key to revolutionary new treatments. As researchers continue to explore the hidden world of fungi, who knows what other medical breakthroughs may lie just beneath the surface?

    Justin Stebbing does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Toxic fungus from King Tutankhamun’s tomb yields cancer-fighting compounds – new study – https://theconversation.com/toxic-fungus-from-king-tutankhamuns-tomb-yields-cancer-fighting-compounds-new-study-259706

    MIL OSI – Global Reports

  • MIL-OSI Global: When do we first feel pain?

    Source: The Conversation – UK – By Laurenz Casser, Leverhulme Trust Early Career Fellow, University of Sheffield

    Alina Troeva/Shutterstock.com

    At some point between conception and early childhood, pain makes its debut. But when exactly that happens remains one of medicine’s most challenging questions.

    Some have claimed that foetuses as young as twelve weeks can already be seen wincing in agony, while others have flat-out denied that even infants show any true signs of pain until long after birth.

    New research from University College London offers fresh insights into this puzzle. By mapping the development of pain-processing networks in the brain – what researchers call the “pain connectome” – scientists have begun to trace exactly when and how our capacity for pain emerges. What they discovered challenges simple answers about when pain “begins”.


    The researchers used advanced brain imaging to compare the neural networks of foetuses and infants with those of adults, tracking how different components of pain processing mature over time. Until about 32 weeks after conception, all pain-related brain networks remain significantly underdeveloped compared with adult brains. But then development accelerates dramatically.

    The sensory aspects of pain – the basic detection of harmful stimuli – mature first, becoming functional around 34 to 36 weeks of pregnancy. The emotional components that make pain distressing follow shortly after, developing between 36 and 38 weeks. However, the cognitive centres responsible for consciously interpreting and evaluating pain lag far behind, and remain largely immature by the time of birth, about 40 weeks after conception.

    This staged development suggests that while late-term foetuses and newborns can detect and respond to harmful stimuli, they probably experience pain very differently from older children and adults. Most significantly, newborns probably can’t consciously evaluate their pain – they can’t form the thought: “This hurts and it’s bad!”

    Does it hurt?
    Martin Valigursky/Shutterstock.com

    A history of changing views

    These findings represent the latest chapter in a long-running scientific debate that has swung dramatically over the centuries, often with profound consequences for medical practice.

    For most physiologists in the 18th and 19th centuries, the perceived delicacy of the infant’s body meant that it must be exquisitely sensitive to pain – so much so that some doubted whether infants ever felt anything else. Birth, in particular, was imagined to be an extremely painful event for a newborn.

    However, advances in embryology during the 1870s reversed this thinking. As scientists discovered that infant brains and nervous systems were far less developed than adult versions, many began questioning whether babies could truly feel pain at all. If the neural machinery wasn’t fully formed, how could genuine pain experiences exist?

    This scepticism had troubling practical consequences. For nearly a century, many doctors performed surgery on infants without anaesthesia, convinced that their patients were essentially immune to suffering. The practice continued well into the 1980s in some medical centres.

    Towards the end of the 20th century, public outrage about the medical treatment of infants and new scientific results turned the tables yet again. It was found that newborns exhibited many of the signs (neurological, physiological and behavioural) of pain after all, and that, if anything, pain in infants had probably been underestimated.

    The ambiguous brain

    The reason why there has been endless disagreement about infant pain is that we cannot access their experiences directly.

    Sure, we can observe their behaviour and study their brains, but these are not the same thing. Pain is an experience, something that’s felt in the privacy of a person’s own mind, and that’s inaccessible to anyone but the person whose pain it is.

    Of course, pain experiences are typically accompanied by telltale signs: be it the retraction of a body part from a sharp object or the increased activity of certain brain regions. Those we can measure. But the trouble is that no one behaviour or brain event is ever unambiguous.

    The fact that an infant pulls back their hand from a pin prick may mean that they experience the pricking as painful, but it may also just be an unconscious reflex. Similarly, the fact that the brain is simultaneously showing pain-related activity may be a sign of pain, but the processing may also unfold entirely unconsciously. We simply don’t know.

    Perhaps the infant knows. But even if they do, they can’t tell us about their experiences yet, and until they can, scientists are left guessing. Fortunately, their guesses are becoming increasingly well informed, but for now, that is all they can be – guesses.

    What would it take to get certainty? Well, it would require an explanation that connects our brains and behaviour to our conscious experiences. But so far, no scientifically respectable explanation of this kind has been forthcoming.

    Laurenz Casser receives funding from the Leverhulme Trust.

    ref. When do we first feel pain? – https://theconversation.com/when-do-we-first-feel-pain-259588

    MIL OSI – Global Reports

  • MIL-OSI Global: From Roman drains to ancient filters, these artefacts show how solutions to water contamination have evolved

    Source: The Conversation – UK – By Rosa Busquets, Associate Professor, School of Life Sciences, Pharmacy and Chemistry, Kingston University

    Thirst: In Search of Freshwater, an exhibition at Wellcome Collection. Benjamin Gilbert., CC BY-NC-ND

    A new exhibition in London (open until February 2026) called Thirst: In search of freshwater highlights how civilisations have treasured – and been intrinsically linked to – safe, clean water.

    As a chemist, I research how freshwater is polluted by modern civilisation. Common contaminants in rivers include pharmaceuticals, microplastics (which degrade further when exposed to sunlight and wave power), and forever chemicals, or per- and polyfluoroalkyl substances (PFAS), some of which are carcinogenic.

    Synthetic toxic chemicals are introduced into the environment from the products we make, use and dispose of. This wasn’t a problem centuries ago, when manufacturing industries and technologies were entirely different.

    Some, such as PFAS from stain-resistant textiles or nonstick materials such as cookware, can be particularly difficult to remove from wastewater. PFAS don’t degrade easily, they resist conventional heat treatments and can easily pass through wastewater treatments, so they contaminate rivers or lakes that are sources of our drinking water.


    Testing for pollutants is even more critical in developing nations that lack sanitation and face drought or flooding. Protecting and conserving drinking water and its sources is as relevant today as it has always been.

    For this exhibition, curator at the Wellcome Collection in London, Janice Li, has selected 125 historical objects, photographs and feats of engineering that link to drought, rain, glaciers, rivers and lakes. These three artefacts from Thirst illustrate how our relationship with water contamination has evolved:

    1. Ancient water filters

    Made from natural materials such as clay, water jug filters have been used for hundreds of years on every continent by ancient civilisations. They show that purifying water for drinking was commonplace. The sand and soil particles that naturally become suspended in water, and which these filters removed, would have carried microbes.

    Water jug filters with Arabic inscription, found in Egypt, dating from 900–1200.
    Victoria and Albert Museum London/Wellcome Collection, CC BY-NC-ND

    But in ancient times, pharmaceuticals and other drugs, pesticides, forever chemicals and microplastics would not have been a problem. Those filters could work relatively well despite being made of simple materials with wide pores.

    Today, those ancient filters would no longer be effective. Modern water filters are made using more advanced materials which typically have small pores (called micropores and mesopores). For example, filters often include activated carbon (a highly porous type of carbon that can be manufactured to capture contaminants) or membranes that filter water. Only then is it safe for people to drink.






    2. Roman water pipes

    Lead water pipes (known as fistulae) were key parts of a relatively advanced plumbing system that distributed drinking water throughout Roman cities. Lead pipes are still common in water systems in our cities today: in the US, about 9.2 million lead service lines remain in use. Exposure to lead causes severe human health problems, and lead exposure – not only from drinking water – was linked to more than 1.5 million deaths in 2021.

    A Roman lead water pipe dating from 1–300 CE.
    Courtesy of Wellcome Collection/Science Museum Group., CC BY-NC-ND

    It’s now understood that lead is neurotoxic and it can diffuse or spread from the pipes to drinking water. Lead from paints and batteries, including car batteries, can also contaminate drinking water.

    To protect us from lead leaching or flaking off from pipes, some government agencies are calling for the replacement of lead pipes with copper or plastic pipes. Water companies routinely add phosphates (mined powder that contains phosphorus) to drinking water to help capture potential lead contamination and make it safe to drink.

    3. The horror of unhealthy water

    One caricature, titled Monster Soup, by artist William Heath (1828), is part of the Wellcome Trust’s permanent collection. The captions read “microcosms dedicated to the London Water companies” and “Monster soup, commonly called Thames Water being a correct representation of the precious stuff doled out to us”. The cartoon shows a lady so terrified at the sight of microbes in river water from the Thames that she drops her cup of tea.

    Monster Soup by William Heath.
    Courtesy of the Wellcome Collection., CC BY-NC-ND

    Even today, many people remain shocked at toxic contamination in rivers, and sewage pollution prevents people from swimming.

    By 2030, 2 billion people will still not have safely managed drinking water and 1.2 billion will lack basic hygiene services. Drinking water will still be contaminated by bacteria such as E. coli and other dangerous pathogens that cause waterborne diseases. So advancing technologies to filter out contamination will be just as crucial in the future as it has been in the past.



    Rosa Busquets receives funding from UKRI/ EU Horizons MSCA Staff exchanges Clean Water project 101131182, DASA, project ACC6093561. She is affiliated with Kingston University, UCL, Al-Farabi Kazakh National University, UNEP EEAP.

    ref. From Roman drains to ancient filters, these artefacts show how solutions to water contamination have evolved – https://theconversation.com/from-roman-drains-to-ancient-filters-these-artefacts-show-how-solutions-to-water-contamination-have-evolved-253876

    MIL OSI – Global Reports