Category: The Conversation

  • MIL-OSI Submissions: Where does the UK most need more public EV chargers?

    Source: The Conversation – UK – By Labib Azzouz, Research Associate in Transport and Energy Innovation, University of Oxford

    Electric vehicle chargers at a motorway service station in Grantham, England. Angus Reid/Shutterstock

The automotive and EV industry has repeatedly insisted that the UK needs more electric vehicle (EV) chargers to help motorists make the switch from conventional fossil fuel-burning cars.

The Labour government has announced £400 million to install EV chargers, mainly on streets in poorer residential neighbourhoods, in place of the Conservatives’ £950 million rapid charging fund that was directed at installing chargers in motorway service stations.

    Does it matter where these chargers are – and who pays to build them?

    The short answer is yes, it does matter. Our research conducted at motorway and local EV charging stations across England – including those located in residential areas, high streets and community centres – indicates that these two types of infrastructure serve distinct groups of users and fulfil different purposes.

    Suggesting that one can substitute for the other risks sending mixed signals to both the industry and the driving public.

    We found that motorway charging stations tend to cater to wealthier men, who are more likely to own premium EVs with long-range batteries and better performance. Many of these drivers have access to home chargers, so their use of public chargers is only for occasional, long-distance travel for business, leisure, or holidays – trips that require chargers along motorways.

    Convenience and charging speed are often more important than the price of public charging, particularly when the travel costs of these drivers are covered by their employers.

    Local public charging stations, on the other hand, serve more diverse groups. These include drivers from lower-income households who are more likely to own older and smaller EVs with shorter ranges. Access to home charging is often limited, especially for people living in flats or urban areas without driveways, garages or off-street parking.

    Not everyone can plug in at home.
    Andersen EV/Shutterstock

    Local chargers are also vital for taxi and delivery drivers who depend on their vehicles for work and make frequent short trips throughout the day. There are many professional drivers without access to workplace charging stations who need alternative local provision – something the Conservative government recognised in its 2022 EV charging strategy.

    Ultimately, the transition to EVs should take a balanced approach that carefully considers social equity, economic viability and environmental impact.

    Different locations serve different drivers

    Motorway charging stations are commercially attractive to private investors, such as energy companies, specialist charging providers and car manufacturers, despite their higher upfront costs and complex requirements.

This is because service stations can set premium prices, generating greater short-term revenue: alternatives are limited, demand for rapid charging is high – especially among long-distance travellers – and EV drivers are willing to pay for speed and convenience, unlike in more price-sensitive neighbourhood settings.

    Unsurprisingly, the government found that the rapid deployment of motorway chargers in recent years has been largely driven by the private sector. Our research highlighted that these revenues could be enhanced by a broader range of retail, dining and relaxation amenities, turning the time waiting for a car to charge into a more productive and pleasurable experience.

    Residential charging stations may not offer high profits per charge, but they typically require lower capital investment and benefit from consistent and predictable use. They are also suited to measures for reducing strain on the grid and balancing energy supply and demand.

    These measures include tariffs that make it cheaper to charge EVs during off-peak hours, or technology that allows cars to feed electricity stored in batteries back into the grid. These features make them appropriate for public funding, where return on investment is measured not just in profit but in value for the public.

    Considering that local EV charging serves those who do not have access to home charging and who drive for a living, the case for public funding is even stronger. These sorts of chargers make switching to an EV easier for different groups.

    For example, safe and carefully placed public chargers could help more women switch to EVs – although our research suggests that, while “careful placement” might refer to residential areas, it doesn’t necessarily mean on streets. Well-lit car parks and community destinations are sometimes considered safer options.

    Charging points outside a community centre in the Outer Hebrides, Scotland.
    AlanMorris/Shutterstock

    By helping EV drivers make frequent short trips, local chargers can also significantly reduce urban air pollution, emissions and noise, contributing to more liveable, healthier cities.

    That said, motorway charging stations and those near key transport corridors still play a crucial role in a comprehensive national network, and public funding may be required in more peripheral and rural areas of the UK where installations lag and commercial interest is limited.

    While long-distance trips are less frequent than short ones, they account for a disproportionately large share of energy use and emissions. Switching such trips to electric will be essential to reaching net zero goals.

    It seems reasonable to prioritise public investment in local EV charging infrastructure to support a fairer EV transition, but this should not be limited to on-street chargers. Investment is needed in residential and non-residential areas, public car parks, community centres and workplaces.

    Different types of EV charging are not interchangeable – all are needed to support the switch.


    Don’t have time to read about climate change as much as you’d like?

    Get a weekly roundup in your inbox instead. Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. Join the 45,000+ readers who’ve subscribed so far.


    Labib Azzouz has received funding from the UK Research and Innovation via the UK Energy Research Centre and Innovate UK as part of the Energy Superhub Oxford (ESO) project.

    Hannah Budnitz receives government funding from UK Research and Innovation grants via the Economic and Social Research Council and the Engineering and Physical Sciences Research Council. She has also previously received funding from Innovate UK and the Department for Transport.

    ref. Where does the UK most need more public EV chargers? – https://theconversation.com/where-does-the-uk-most-need-more-public-ev-chargers-259623

    MIL OSI

  • MIL-OSI Submissions: The Bear season 4: this meaty restaurant drama is still an enticing bingeable prospect

    Source: The Conversation – UK – By Jane Steventon, Course Leader, BA (Hons) Screenwriting; Deputy Course Leader & Senior Lecturer, BA (Hons) Film Production, University of Portsmouth

    Take a soupçon of identity crisis, a pinch of perfectionism, a scoop of burnout and mix thoroughly with a large measure of fraternal grief and sear over a hot grill and voilà! You have The Bear, a perfectly blended drama about a chef on the edge, driven by relentless ambition and exacting standards as he turns his family’s humble sandwich shop into a fine-dining restaurant.

    This intoxicating family drama was eaten up by critics and audiences alike in 2022, its first season garnering a rare perfect 100% score on Rotten Tomatoes, the subsequent two reaching scores of 99% and 89% respectively. It’s certainly a hard act to follow for season four.

    The first ten minutes of The Bear’s pilot episode thrillingly defined what was to come in high-octane style and scene-setting detail. The first season delivered a clever mix of authentic dialogue and setting, relatable family dysfunction and dynamic production style.

    Showstopping scenes of stressful kitchen heat were served up alongside a delectable range of new and established talent in the form of Jeremy Allen White (Carmy), Ebon Moss-Bachrach (Richie), Ayo Edebiri (Sydney) and Oliver Platt (Cicero/Uncle Jimmy).


    Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


    In charge is showrunner Christopher Storer, who came up with the concept after being inspired by his friend’s father Chris Zucchero, the owner of Chicago sandwich joint Mr Beef.

    With his professional chef sister also serving as a consultant, Storer succeeded in creating a deliciously authentic and intensely real drama. Buoyed along the way by 21 Emmys and five Golden Globes, Storer also watched his cast ascend, the tortured-soul performance of White garnering particular praise.

Testing the parameters of a long-running show, Storer focused on the entire cast of characters and their backstories, a successful tactic used by shows such as Orange is the New Black to keep the drama – largely confined to a kitchen set – fresh.

    Pulling in Hollywood die-hards Oliver Platt and Jamie Lee Curtis for familial tough-love roles further enriched the mix, often using a non-chronological timeframe to go back to moments of family turbulence and tension. This made for three-dimensional characters and enabled evolution around difficult themes such as the aftermath of suicide and generational trauma.

The Bear has come a long way in three seasons, starting with a spit-and-sawdust establishment serving up lunchtime beef sandwiches for its working customers.

Carmy’s experience and longing for the high-end restaurant of his dreams hurtled forward in season two, as he sent his core crew off in different directions to hone their skills and help form his vision. With the restaurant striving for success but plagued by challenges, exhausting familial tensions were embedded in every episode of season three.

    Several themes play out in The Bear: love, family, loyalty, community and purpose. The relationship between Carmy and cousin Richie (not a real cousin, but a term of endearment) is key to linking past and future. Richie provides some of the highlights of comedy and pathos as he spits truth bombs, most frequently at talented sous-chef Syd.

    It is Syd who follows Carmy’s aspirations for gastronomic perfection but can’t abide the lack of order or the intense highs and lows that inevitably go hand in hand with his talent. And this is one central question to consider for the latest series: just how long will the audience remain loyal to Carmy and his endless quest for artistry in a high-failure rate industry?

    It’s all in the sauce

Storer begins season four with a ghost. Carmy and his dead brother Mikey (Jon Bernthal) banter in a seven-minute scene, with Carmy ultimately confiding the dream of a restaurant as Mikey watches him make tomato sauce (“too much garlic”). The tomatoes resonate: Mikey left behind money hidden in tomato cans that ended up saving Carmy’s sanity and his dream of a proper restaurant.

Just as oranges represent death to Francis Ford Coppola, Storer uses tomatoes to underscore themes; here they symbolise familial loyalty and history, a solid base to a meal, a core ingredient. Mikey was one of the core ingredients in Carmy’s life, and now he’s gone.

    Carmy awakens to a rerun of Groundhog Day on late-night TV and fittingly, we too are back – same dish, now more seasoned and enriched with its core ingredients and ready to serve up a big bowlful of family, love, ambition, strife and grief.

    The episode furthers the theme of loyalty as the restaurant receives The Tribune’s review – the cliffhanger of the season three finale. Naturally, Storer doesn’t let up – the food critic highlights “dissonance” and Carmy is back in emotional chaos, with Syd urging him to lighten up and lose the misery.

    In truth, this series could do with adding some more humour in the mix; the teasing and frivolous banter of season one has got somewhat lost in the seasons that followed.

Storer ramps up the tension, setting several ticking clocks in place: chiefly, Uncle Jimmy’s deadline for the business to turn a profit, literally installed as a digital clock in the kitchen. Then Syd’s headhunter calls, offering her the autonomy she desires and an exit strategy from the chaos.

    And Carmy raises the stakes with an intention to gain a Michelin star. Thus a heroic journey is set in place for the whole cast, with future battles both internal and external laid out.

    There’s too much going on at this feast and the feeling of being stuffed full of story is tangible by the end of the first episode. Still, with a season lining up more emotional turbulence steered by White, more celebrity cameos (Brie Larson and Rob Reiner are lined up) and the excellent cinematography and performances that we have come to expect, Storer stirs his secret sauce.

    The Bear still offers an entertaining and enticing proposition, bingeable and mostly satisfying.

    Jane Steventon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The Bear season 4: this meaty restaurant drama is still an enticing bingeable prospect – https://theconversation.com/the-bear-season-4-this-meaty-restaurant-drama-is-still-an-enticing-bingeable-prospect-260143

    MIL OSI

  • MIL-OSI Submissions: Five ways to avoid illness like the Lionesses

    Source: The Conversation – UK – By Samantha Abbott, Doctoral Researcher, Department of Sport Science, Nottingham Trent University

England’s Beth Mead cheering on the podium after the win over Germany in the Women’s European Championship final, 2022. photographyjp/Shutterstock

Think back to the last time you had a cold or the flu. Now imagine stepping onto the pitch for a European Championship final while battling through those symptoms. For elite athletes, illness can strike at the worst possible time – and it could hit women harder.

    Research suggests that female athletes are more susceptible to cold and flu-like illnesses than their male counterparts. For England women’s national football team, the Lionesses, this risk only increases before a major tournament like the Euros.

    Close contact, shared kit, disrupted sleep and travel all add up to a perfect storm for infection. But targeted nutritional strategies, alongside good sleep and hand hygiene, can offer a crucial line of defence.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    1. Fuel first: energy matters for immunity

    Before anything else, players need to eat enough. Energy supports both performance and immune function. In fact, female athletes who didn’t meet their energy needs in the run-up to the 2016 Olympics were four times more likely to report cold or flu symptoms.

This is especially relevant in women’s football, where low energy and carbohydrate intakes have been documented among both professional and recreational players. Regular meals and snacks that include carbohydrate-rich foods like oats, bread and pasta, especially around training, are essential to meet energy demands and support immune health.

    2. Eat the rainbow

    Athletes are often encouraged to go beyond the public’s five-a-day fruit and veg target, aiming instead for eight to ten portions daily. Why? Because colourful plant foods are packed with vitamins, minerals, antioxidants and anti-inflammatory compounds: all vital for immunity.

    Read more: We’re told to ‘eat a rainbow’ of fruit and vegetables. Here’s what each colour does in our body
    Each colour offers unique benefits. For instance, red fruits and vegetables, such as tomatoes, contain lycopene, a powerful antioxidant. Orange produce like carrots get their colour from beta-carotene, which is converted by the body into vitamin A – a key vitamin for immune health.

    Eating a rainbow of colours means getting a wide range of nutrients.

    3. Vitamin C: powerful but timing matters

    Vitamin C has long been linked with reducing the risk and severity of cold and flu symptoms. One Cochrane review found that regular vitamin C intake halved the risk of illness in physically active people.

    However, more isn’t always better. Long-term use of high-dose vitamin C supplements could blunt training adaptations – the structural and functional changes the body undergoes in response to repeated exercise – because of its anti-inflammatory effects. That’s why vitamin C is most effective when used strategically, such as during high-risk periods like travel or intense competition. Good food sources include oranges, kiwis, blackcurrants, red and yellow peppers, broccoli and even potatoes.

    4. Gut health supports immune health

    Around 70% of the immune system is located in the gut, making gut health a key player in illness prevention. This is where probiotics (live bacteria) and prebiotics (which feed those bacteria) come in.

    Probiotics, found in fermented foods like kefir and kimchi or in supplement form, have been shown to reduce the duration and severity of respiratory illnesses in athletes. Prebiotics have similarly shown promise. In one study, a 24-week prebiotic intervention in elite rugby players reduced the duration of cold and flu symptoms by over two days.

    Read more: Gut microbiome: meet Lactobacillus acidophilus – the gut health superhero
    In the build-up to the Euros, including probiotic-rich foods in their diet or taking a daily prebiotic and probiotic supplement may help players stay healthy and return to training faster if they do get ill.

    5. Zinc lozenges: first aid for a sore throat

    If cold-like symptoms do appear, zinc lozenges can offer fast-acting relief. Zinc has antiviral, antioxidant and anti-inflammatory properties. When zinc is delivered as a lozenge, it acts directly in the throat, where many infections begin. Taken within 24 hours of symptoms starting, zinc lozenges could shorten illness duration by a third.

    But caution is key. Long-term use of high-dose zinc supplements can actually suppress immune function. Zinc lozenges should only be used short-term at symptom onset, not as a daily supplement.

    Staying match-ready during major tournaments means more than just tactical drills and fitness. Nutrition is a powerful ally in illness prevention, especially for women’s teams like the Lionesses. From fuelling adequately to supporting gut health and knowing when to supplement, these nutritional strategies can make the difference between sitting on the bench and bringing a trophy home.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Five ways to avoid illness like the Lionesses – https://theconversation.com/five-ways-to-avoid-illness-like-the-lionesses-259302

    MIL OSI

  • MIL-OSI Submissions: Why is Islamophobia so hard to define?

    Source: The Conversation – UK – By Julian Hargreaves, Lecturer, Department of Sociology and Criminology, City St George’s, University of London

    The UK government wants a new definition of Islamophobia and has created a working group of politicians, academics and independent experts to provide one. It aims to settle long-running political debates over the term.

    The concept of Islamophobia describes anti-Muslim and anti-Islamic prejudices and their impact on Muslim communities. The term became familiar in the UK following publication of the Runnymede Trust report, Islamophobia: A Challenge for Us All, in 1997.

    The concept is now used to discuss negative public opinion towards Muslims and Islam, biased media reporting, verbal and physical assaults and online attacks. It is also used when discussing social and economic inequalities, discrimination within various institutional settings and unfair treatment from the police and security services.

    Previous definitions have been controversial, failing to unite politicians, academics and British Muslims, and leading to charged debates over free speech.

    Some academics have argued that the word “Islamophobia” – which suggests a phobia or fear of Islam – is an inaccurate label for a prejudice which often targets skin colour, ethnicity and culture.

    Many Muslim-led organisations accept that the term is imperfect and interchangeable with others such as “anti-Muslim hatred”. However, they maintain the term “Islamophobia” is needed to focus attention on a growing problem.

    Definitions and controversy

    The 1997 Runnymede Trust report defined Islamophobia as an “unfounded hostility towards Islam”, “the practical consequences of such hostility in unfair discrimination against Muslim individuals and communities” and “the exclusion of Muslims from mainstream political and social affairs”.

    The Runnymede Trust revised its definition in a follow-up report published in 2017. The report defines Islamophobia in two ways.

The first is “anti-Muslim racism”. A longer, second version amends the United Nations’ 1965 definition of “racial discrimination”. These revised definitions are important because they re-framed Islamophobia as a product of racist thinking rather than religious prejudices.

    Other attempts to define Islamophobia include British academic Chris Allen’s 200-word definition. Allen defined it as an ideology like racism that spreads negative views of Muslims and Islam, influencing social attitudes and leading to discrimination and violence. US political scientist Erik Bleich defined it more succinctly as “indiscriminate negative attitudes or emotions directed at Islam or Muslims”.

    In 2018, the all-party parliamentary group on British Muslims published another definition linking Islamophobia to racism. According to the APPG, “Islamophobia is rooted in racism and is a type of racism that targets expressions of Muslimness or perceived Muslimness.” The APPG called for its definition to be legally binding.

    The APPG definition was adopted by various organisations including local authorities, UK universities and the Labour party while in opposition. But it was rejected by the then Conservative government and later by the current Labour government, which argued it was seeking “a more integrated and cohesive approach”.

    This lack of consensus over previous definitions led Angela Rayner, the deputy prime minister, to announce the working group in March 2025. The group’s aim is to provide a new definition of “anti-Muslim hatred and Islamophobia” which is “reflective of a wide range of perspectives and priorities for British Muslims”.

    Former Conservative MP and attorney general Dominic Grieve was appointed to chair the group, evidence of Labour’s ambition to build consensus.

    A march in London against Islamophobia, racism and anti-migrant views.
    Shutterstock

    Some are concerned that use of the term “Islamophobia”, and particularly the APPG definition, stifles legitimate criticism of Islam. Free speech campaigners have argued that it is “blasphemy via the back door”.

    The centre-right thinktank Policy Exchange published a report claiming that the term is used in bad faith to divert attention away from serious social problems within some Muslim communities – specifically, discussion of the grooming gangs scandal.

These debates bear resemblance to those surrounding the term “antisemitism” and the adoption of a definition proposed by the International Holocaust Remembrance Alliance. The term is widely accepted, although critics have argued this specific definition stifles legitimate criticism of the Israeli state.

    A new approach

    A new definition of “Islamophobia” must balance the protection of Muslim communities and freedoms of religion, expression and assembly for all Muslims and non-Muslims in the UK. It must be clear enough for everyday use, specific enough for academic and policy research, and capable of generating support across the UK’s diverse Muslim population.

    A proposed definition by an emerging thought leader on British Islam addresses these challenges. Mamnun Khan is a writer whose work explores the social integration of Muslims in contemporary British society. Khan is associated with Equi, a thinktank which describes its work as “drawing on Muslim insight”. Other members of Equi are members of the government’s working group.

    Khan sets out three tests that a definition must pass, based on Islamic law, moral teachings within Islam and other more universal values. First, a definition must serve the public interest. Second, it must be just and balanced and preserve freedom of expression. Third, it must uphold the dignity of Muslim communities.

    For Khan, “Islamophobia, also known as anti-Muslim hatred, is an irrational fear, hostility, or prejudice toward Muslims that leads to discrimination, unequal treatment, exclusion, social and political marginalisation, or violence.”

Khan’s definition has many good qualities. It brings together stronger elements of previous definitions – for example, the separation of negative attitudes and outcomes – without being weakened by jargon or strong political ideology. On the other hand, some social scientists may question whether defining something as “irrational” is a matter of preference rather than academic research.

The working group also needs to decide whether Islamophobia and anti-Muslim hatred are closely related or exactly the same. Failure to do so will cause confusion and inconsistency among those wishing to apply the term precisely. Regardless, Khan’s example is a strong step in the right direction. A better definition of Islamophobia is needed, and is now within reach.

    Julian Hargreaves is an Affiliated Researcher at the Prince Alwaleed bin Talal Centre of Islamic Studies, University of Cambridge.

    ref. Why is Islamophobia so hard to define? – https://theconversation.com/why-is-islamophobia-so-hard-to-define-258522

    MIL OSI

  • MIL-OSI Submissions: Could electric brain stimulation lead to better maths skills?

    Source: The Conversation – UK – By Roi Cohen Kadosh, Professor of Cognitive Neuroscience, University of Surrey

    Triff/Shutterstock

    A painless, non-invasive brain stimulation technique can significantly improve how young adults learn maths, my colleagues and I found in a recent study. In a paper in PLOS Biology, we describe how this might be most helpful for those who are likely to struggle with mathematical learning because of how their brain areas involved in this skill communicate with each other.

Maths is essential for many jobs, especially in science, technology, engineering and finance. However, a 2016 OECD report suggested that a large proportion of adults in developed countries (24% to 29%) have maths skills no better than those of a typical seven-year-old. This lack of numeracy can contribute to lower income, poor health, reduced political participation and even diminished trust in others.

Education often widens rather than closes the gap between high and low achievers, a phenomenon known as the Matthew effect. Those who start with an advantage, such as being able to read more words when starting school, tend to pull further ahead. Stronger educational achievement has also been associated with socioeconomic status, higher motivation and greater engagement with material learned during a class.

    Biological factors, such as genes, brain connectivity, and chemical signalling, have been shown in some studies to play a stronger role in learning outcomes than environmental ones. This has been well-documented in different areas, including maths, where differences in biology may explain educational achievements.


    To explore this question, we recruited 72 young adults (18–30 years old) and taught them new maths calculation techniques over five days. Some received a placebo treatment. Others received transcranial random noise stimulation (tRNS), which delivers gentle electrical currents to the brain. It is painless and often imperceptible, unless you focus hard to try and sense it.

It is possible that tRNS may cause long-term side effects, but in previous studies my team assessed participants for cognitive side effects and found no evidence of them.

    Could tRNS help people improve their maths skills?
    Prostock-studio/Shutterstock

Participants who received tRNS were randomly assigned to receive it in one of two different brain areas. Some received it over the dorsolateral prefrontal cortex, a region critical for memory, attention and the acquisition of new cognitive skills. Others had tRNS over the posterior parietal cortex, which processes maths information, mainly once learning has been accomplished.

Before and after the training, we also scanned their brains and measured levels of key neurochemicals such as gamma-aminobutyric acid (GABA), which we showed previously, in a 2021 study, to play a role in brain plasticity and learning, including maths.

    Some participants started with weaker connections between the prefrontal and parietal brain regions, a biological profile that is associated with poorer learning. The study results showed these participants made significant gains in learning when they received tRNS over the prefrontal cortex.

    Stimulation helped them catch up with peers who had stronger natural connectivity. This finding shows the critical role of the prefrontal cortex in learning and could help reduce educational inequalities that are grounded in neurobiology.

    How does this work? One explanation lies in a principle called stochastic resonance. This is when a weak signal becomes clearer when a small amount of random noise is added.

    In the brain, tRNS may enhance learning by gently boosting the activity of underperforming neurons, helping them get closer to the point at which they become active and send signals. This is a point known as the “firing threshold”, especially in people whose brain activity is suboptimal for a task like maths learning.

It is important to note what this technique does not do. It does not make the best learners even better. That is what makes this approach promising for bridging gaps, not widening them. This form of brain stimulation helps level the playing field.

    Our study focused on healthy, high-performing university students. But in similar studies on children with maths learning disabilities (2017) and with attention-deficit/hyperactivity disorder (2023) my colleagues and I found tRNS seemed to improve their learning and performance in cognitive training.

    I argue our findings could open a new direction in education. The biology of the learner matters, and with advances in knowledge and technology, we can develop tools that act on the brain directly, not just work around it. This could give more people the chance to get the best benefit from education.

    In time, perhaps personalised, brain-based interventions like tRNS could support learners who are being left behind not because of poor teaching or personal circumstances, but because of natural differences in how their brains work.

    Of course, very often education systems aren’t operating to their full potential because of inadequate resources, social disadvantage or systemic barriers. And so any brain-based tools must go hand-in-hand with efforts to tackle these obstacles.

    Roi Cohen Kadosh serves on the scientific advisory boards of Neuroelectrics Inc., and Innosphere Ltd. He is the founder and shareholder of Cognite Neurotechnology Ltd. He received funding from the Wellcome Trust, UKRI, the British Academy, IARPA, DASA, Joy Ventures, the James S McDonnell Foundation, and the European Union. He is affiliated with the University of Surrey.

    ref. Could electric brain stimulation lead to better maths skills? – https://theconversation.com/could-electric-brain-stimulation-lead-to-better-maths-skills-260134

    MIL OSI

  • MIL-OSI Analysis: Detroit restaurants identified as ‘Black-owned’ on Yelp saw a slight drop in business ratings

    Source: The Conversation – USA – By Matthew Bui, Assistant Professor of Information and Digital Studies, University of Michigan

    Yelp’s Black-owned tag was designed to help business owners like Don Studvent attract more customers. His restaurant closed in 2018 after nine years in business. AP Photo/Carlos Osorio

    When the online review platform Yelp added a “Black-owned” tag in 2020, it boosted the visibility of Black-owned restaurants in Detroit. It also caused their ratings to drop, according to our recent study.

    Both local and nonlocal reviewers who showed awareness of a restaurant’s Black ownership rated restaurants 3.03 stars on average. Those who did not acknowledge Black ownership gave a rating of 3.78 stars on average. The tag seems to have caused the average rating to drop by attracting more reviewers who were aware of Black ownership.

    Why it matters

    Technology companies often introduce new features and tools to influence user behavior and make their platforms more usable.

    Although Yelp intended to support Black communities with the Black-owned tag, the design intervention was harmful to Black restaurant owners in Detroit because Yelp failed to consider platform and community-based factors that significantly shape user interactions.

    Yelp’s user base is predominantly white, educated and affluent. Making Detroit’s Black-owned restaurants more visible to Yelp users may have amplified cross-cultural interactions and frictions. For example, non-Black users sometimes mentioned “slower” and “rude” service as justifications for lower ratings. Close readings of these reviews hinted at intercultural and communicative clashes.

    Even if Black-owned businesses didn’t select the tag, they appeared in searches for “Black-owned restaurants” in 2022, when we conducted the study, and as recently as 2025. Businesses can remove the “Black-owned” tag, but Yelp doesn’t provide a way for them to opt out of search results.

    How we did our work

    To examine the local impacts of Yelp’s Black-owned tag, we collected over 250,000 Yelp reviews of Black- and non-Black-owned restaurants in Detroit and Los Angeles.

    We identified Black-owned restaurants through community-sourced lists for Detroit and Los Angeles and then generated a random sample for the non-Black-owned restaurants.

    We then identified reviews that explicitly noted “Black ownership” for closer analysis.

    Detroit’s Black-owned businesses saw a greater loss in business compared with “ownership-unreported” restaurants during the COVID-19 pandemic. This means they also potentially had more to gain from the new tag.

    We found the awareness of Black ownership on Yelp significantly increased following Yelp’s addition of the Black-owned tag in June 2020. A year after the tag was added, reviews in Detroit mentioned Black ownership 4.3% more often than a year before it was rolled out.

    Detroit Black-owned restaurants also saw a small temporary spike in their number of reviews, largely around the time Yelp added the Black-owned tag. At the same time, the restaurants’ average star ratings dropped from 3.91 to 3.88. In contrast, non-Black-owned restaurants’ ratings stayed relatively steady at 3.90.

    This metric is an aggregate of all Detroit restaurants’ Yelp reviews over their entire existence, so a 0.03-star rating change is small but significant.
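    A rough, hypothetical calculation shows why a lifetime aggregate moves so little even when new reviews are notably lower. The review counts below are invented for illustration; only the 3.91 and 3.03 averages come from the study.

    ```python
    # A restaurant with a long review history gains a small burst of new reviews
    # from tag-aware reviewers, who rated 3.03 stars on average in the study.
    # The 1,000 and 25 review counts are invented, not taken from the data.
    old_count, old_avg = 1_000, 3.91
    new_count, new_avg = 25, 3.03

    combined = (old_count * old_avg + new_count * new_avg) / (old_count + new_count)
    print(round(combined, 2))  # 3.89: a markedly lower batch barely dents the aggregate
    ```

    Because the aggregate is weighted by every past review, even a sustained run of lower ratings shifts the headline number by only a few hundredths of a star.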

    Even minor changes to star ratings affect the number of diners restaurants attract, their earning potential and the likelihood they will sell out of food.

    Adding obstacles in digital platforms serves to reproduce and amplify inequalities these businesses already face, rather than alleviate them. For example, Black-owned businesses have a harder time getting loans and are relatively underrepresented in Michigan as a whole.

    These findings may seem surprising given that Detroit is a majority Black city. However, Black users on Yelp are a minority. Keeping in mind the skewed user base of Yelp, we hypothesize the lower reviews for businesses featuring a Black-owned tag reflect existing racial and digital divides in the city.

    Generally, our study provides additional evidence that digital interventions are not “one-size-fits-all,” nor is digital visibility inherently positive for all businesses.

    The Research Brief is a short take on interesting academic work.

    This article was updated to clarify how labels are added to profiles.

    This research was supported by a research grant from the Ewing Marion Kauffman Foundation.

    Matthew Bui does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    Cameron Moy does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Detroit restaurants identified as ‘Black-owned’ on Yelp saw a slight drop in business ratings – https://theconversation.com/detroit-restaurants-identified-as-black-owned-on-yelp-saw-a-slight-drop-in-business-ratings-256306

    MIL OSI Analysis

  • MIL-OSI Analysis: Self determination theory: how to use it to boost wellbeing

    Source: The Conversation – UK – By Mark Fabian, Reader of Public Policy, University of Warwick

    Self-determination theory (SDT) is one of the most well established and powerful approaches to wellbeing in psychological research literature. Yet it doesn’t seem to have broken through into popular discussions about wellbeing, happiness and self-help. That’s a shame, because it has so much to contribute.

    A foundational idea in self-determination theory is that we have three basic psychological needs: for autonomy, competence and relatedness.

    Autonomy is the need to be in control of your own life rather than being controlled by others. Competence is the need to feel skilful at the tasks one values or needs to thrive. Relatedness refers to feeling loved and cared for, and a sense of belonging to a group that provides social support.

    If our basic psychological needs are met, then we are more likely to experience wellbeing. Symptoms include emotions such as joy, vitality and excitement because we’re doing the things we love, for example. We’ll probably have a sense of meaning and purpose because we live within a community whose culture we value.


    Conversely, when our basic needs are thwarted we should see symptoms of illbeing. Anger, frustration and boredom grow when our behaviour is controlled by parents, bureaucrats, bosses or other forces that press our energies towards their ends instead of ours.

    Depression looms when our competence is overwhelmed by failure. And anxiety is often a social emotion that arises when we’re worried about whether our group cares for us.

    So we should cultivate our basic psychological needs – but how? You need to discover what you want to do with your life, what skills to become competent in, who to relate to and what communities to contribute to.

    Using motivation to find your way

    Here’s where the second foundational idea in SDT can be super helpful, as I explain in my new book, Beyond Happy: How to rethink happiness and find fulfilment. SDT proposes a motivational spectrum running from extrinsic at one end to intrinsic at the other. Finding out where you are on the spectrum for a certain activity or task can help you work out how to be happier.

    The more extrinsically motivated something is, the more self-regulation it requires. For example, when refugees flee their homes due to encroaching war, there is often a large part of them that wants to stay. Willpower is required to act. In contrast, intrinsically motivated behaviour springs spontaneously from us. You don’t need willpower to get stuck into your hobbies.

    Each type of motivation comes with different emotional signals and deciphering them can help us find what values, behaviour and groups suit us.

    The spectrum of motivation according to self-determination theory.
    CC BY-NC

    “Identified” motivation, for example, sits between extrinsic and intrinsic motivation. It occurs when we value an activity but don’t inherently enjoy it. That’s why success in identified behaviour is usually met with a feeling of accomplishment or the warm and fuzzy feeling you get when you do the right thing, like going a bit out of your way to put your rubbish in a bin.

    In contrast, “introjected” motivation is where you value something contingent on the behaviour rather than the behaviour itself. Many of us loathe the gym, for example, but we want to be healthy. A child might not want to practise the cello, but they do want their parent’s approval.

    Because introjection is relatively extrinsic, it requires willpower, and probably a bit more of it than for identified behaviour. Completion of an introjected activity is often met with relief rather than accomplishment and little desire to keep going.

    Sometimes things that are dependent on introjected behaviour can make us unhappy. In teen dramas, for example, the protagonist often does something because they want to be popular, but when they win the approval of the cool kids they realise those kids are mean and lame.

    Why money, power and status won’t make you happy

    If that’s how you feel, you’ve found something inauthentic to you. Then there’s very little chance the introjected activity will lead to your wellbeing. In fact, SDT has identified some common extrinsic values. You’ll recognise them immediately: popularity, fame, status, power, wealth and success.

    They’re extrinsic because they’re not peculiar to you. If you get rich doing the thing you love, that’s great, but many of us never even think about what we love because we’re too busy thinking about how to get rich.

    Extrinsic pursuits are ultimately bad for our wellbeing because they’re all poor substitutes for basic psychological needs. When our autonomy is thwarted by strict parents or disciplinarian teachers, we crave power. When we don’t know what sort of life to build and thus what skills we need competence in, we adopt other people’s notions of success instead.

    Extrinsic pursuits often emerge from a wounded place and a defensive reaction. When we’re lonely or feel unloved for who we are, for example, we might compensate by seeking fame or popularity. We’ll start talking about our accomplishments on LinkedIn, for example.

    The problem is that the people this attracts don’t value you specifically, only your power, status or money. You sense that if you ever lost those things, you would lose these people too.

    SDT can help you learn to listen to your emotions and interpret your motivations instead, and use them to guide you towards the values, activities and people that are right for you.

    For example, if you feel joyful and fulfilled when you solve a complex puzzle, perhaps consider a career that involves that activity, such as law or engineering. If such puzzles feel like torture, that’s a signal too. Perhaps something more relational or intuitive, like social work, would work better.

    When you pursue things that are authentic to you it will nourish your sense of autonomy. You’ll build competence in those activities because they’re intrinsically motivated. And you’ll form deep relationships with the people you encounter because you genuinely like each other. Wellbeing will follow.

    Mark Fabian does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Self determination theory: how to use it to boost wellbeing – https://theconversation.com/self-determination-theory-how-to-use-it-to-boost-wellbeing-259829

    MIL OSI Analysis

  • MIL-OSI Analysis: Dune director Denis Villeneuve will helm the next Bond – but what will his 007 be like?

    Source: The Conversation – UK – By William Proctor, Associate Professor in Popular Culture, Bournemouth University

    Wiki Commons/Canva, CC BY-SA

    The James Bond franchise has lain dormant for four years, since Daniel Craig’s swansong as 007, No Time to Die. A legal quarrel between Bond’s producers, Michael G. Wilson and Barbara Broccoli, and Amazon Studios resulted in a stalemate, and production on a new Bond film has remained in limbo.

    Nevertheless, speculation has been rife about which actor will next play Ian Fleming’s super-spy (the latest actor to be associated with the role is former Spider-Man Tom Holland).

    When news surfaced in February 2025 that Amazon MGM (Amazon purchased MGM in 2021) had effectively become Bond’s new custodians, critics and audiences alike expressed concern – to put it lightly. Many feared that Jeff Bezos was more interested in stimulating Amazon Prime membership by driving multiple content streams through spin-offs and merchandising than protecting Fleming’s legacy.

    However, last week’s announcement that Denis Villeneuve has been appointed as the director of the 26th Bond film is a savvy move. It’s a declaration of intent that seeks to promote and market Amazon MGM as safe harbour for the Bond franchise.



    The announcement positions the next era of Bond as a prestigious exercise helmed by “a cinematic master”, not a journeyman director. Villeneuve was previously offered the opportunity to direct No Time to Die, but turned the role down because of his commitment to the Dune films.

    By appointing Villeneuve, Amazon has managed to radically shift the public debate. Villeneuve is “much more than a technical director”, wrote Guardian film critic Peter Bradshaw. “He is an alpha-grade auteur in the same league as Christopher Nolan.”

    Other critics have pointed to his rare ability to “combine blockbuster momentum (and ticket sales) with the finer, more nuanced sensibilities of a filmmaker always concerned with slowing down, honing in on character and theme”.

    Although Sam Mendes, director of Skyfall (2012) and Spectre (2015), came with artistic status, Villeneuve is something different – a marquee name frequently described as an auteur.

    Villeneuve talks about his love for Bond.

    Since his transition from making mostly low-key independent films in his native Canada to his arrival in Hollywood with Prisoners, starring Hugh Jackman and Jake Gyllenhaal (2013), Villeneuve has amassed an impressively eclectic filmography.

    He has proven that he is as comfortable shooting realistic crime thrillers (Sicario, 2015) and surrealist cinema that David Lynch would be proud of (Enemy, 2013), as he is with science fiction (Arrival, 2016, Blade Runner 2049, 2017, and the Dune films, 2021 and 2024).

    Villeneuve’s Bond

    Although Sicario may be the closest in terms of genre to the Bond films, establishing Villeneuve as a director who can expertly shoot action sequences, it is nevertheless difficult at this stage to conceptualise what a Villeneuve Bond film might be like.

    Some critics have suggested that the director’s cinematic resume, eclectic as it is, might not bode well for Bond. The Hollywood Reporter’s film critic Benjamin Svetkey, for instance, worries that Villeneuve’s “lugubrious, meditative filmmaking” is sorely lacking in humour – which could be fatal for 007. “A certain amount of wit and winking is critical to the character,” he claims.

    It is early days for Amazon MGM and Villeneuve. As yet, there is reportedly no treatment, no script, no writer and – more pointedly – no actor appointed to the role. Whatever happens, the 26th Bond film is likely to be a hard reboot that wipes the slate clean (again) after the fate of 007 in No Time to Die.

    Villeneuve’s choice for Bond is unlikely to be as cartoonish as Pierce Brosnan’s iteration.

    Although Villeneuve has said that he intends to honour tradition and that Bond is “sacred territory” for him, Bond’s capacity for revision and regeneration has been key to the franchise’s longevity.

    As sociologists Tony Bennett and Janet Woollacott argue in their seminal study, Bond and Beyond, the figure of Bond has over the past six decades “been differently constructed at different moments,” with “different sets of ideological and cultural concerns”.

    So what kind of Bond film Villeneuve ends up directing largely depends on the story and whichever actor is anointed as the next James Bond. It is doubtful that audiences will expect a campy pantomime Bond like Roger Moore, or a Bond with an invisible car, like Pierce Brosnan in the cartoonish Die Another Day (2002). Villeneuve’s choice of Casino Royale as his favourite 007 may provide a clue. But it is also unlikely that the director will be satisfied with slavishly repeating the past.

    William Proctor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Dune director Denis Villeneuve will helm the next Bond – but what will his 007 be like? – https://theconversation.com/dune-director-denis-villeneuve-will-helm-the-next-bond-but-what-will-his-007-be-like-260140

    MIL OSI Analysis

  • MIL-OSI Analysis: The NHS plan to genetically test all newborns sounds smart – until it creates patients who aren’t sick

    Source: The Conversation – UK – By Luca Stroppa, Postdoctoral fellow (“borsista di ricerca”) at the University of Turin, former Postdoctoral Fellow on the project “Early Diagnosis – Handling Knowing”, University of St Andrews

    The current heel-prick test checks for nine rare genetic conditions. antibydni/Shutterstock

    By 2030, every baby born in England could have their entire genome sequenced under a new NHS initiative to “predict and prevent illness”. This would dramatically expand the current heel-prick test, which checks for nine rare genetic conditions, into a far more extensive screen of hundreds of potential risks.

    On the surface, the idea sounds like an obvious win for public health: spot problems early, intervene sooner and save lives. But genetic testing on this scale carries real risks, especially if the results are misunderstood or poorly communicated.

    The new plan builds on a recent NHS pilot study that sequenced the genomes of 100,000 newborns in England to identify more than 200 genetic conditions. However, these tests don’t provide clear-cut answers. They don’t offer a diagnosis or certainty, just an estimate of risk.

    A genetic result might suggest a child has a higher (or lower) probability of developing a certain disease later in life. But risk is not prediction. If parents, or even clinicians, misinterpret that nuance, the consequences could be serious.

    Some families may come to see a child flagged as “at risk” as a patient-in-waiting. In extreme cases, they may treat a probability as a certainty; assuming, for instance, that a child “has the gene” and will inevitably become ill. That assumption could reshape how children are raised, how they’re treated and how they see themselves.

    Alarming language

    This isn’t speculation. Research shows that while some people understand risk scores accurately, many struggle with statistical information. Words like “high risk” or “likely” are interpreted differently by different people and often more seriously than intended. Even trained doctors can overestimate what a positive test result means. When it comes to genomics, the line between “you might get sick” and “you will get sick” can blur quickly.

    Policymakers haven’t helped this confusion. Government messaging refers to “diagnosis before symptoms even occur” and “leapfrogging disease.” But this language overpromises what genomic data can do and downplays its uncertainty.

    When testing is indiscriminate and communication unclear, the fallout can be wide ranging. Children identified as “high risk” may undergo years of monitoring, unnecessary medical appointments, or even treatment for diseases they never develop. In some cases, this leads to physical harms, from unnecessary medications to procedures with side effects. In others, the damage is psychological: shaping a child’s identity around an anticipated future of illness. These psychological effects can be lasting. Being told you’re likely to develop a condition like dementia may influence how a person plans their life, even if that illness never materialises.

    False positives

    There are also broader issues with applying this kind of screening to everyone. Risk-based testing works best when it’s targeted; for example, among those with symptoms or a strong family history. But in the general population, where most people are healthy, false positives can far outnumber accurate results. Even well-designed tests can produce misleading outcomes when applied at scale.

    This is a well-known statistical effect, discussed during the COVID pandemic. In populations where a disease is rare, even highly accurate tests produce more false positives than true ones. If DNA screening is rolled out universally, many families will be told their child is at risk when they are not. These false positives can lead to a cascade of further tests, stress and unnecessary clinical interventions; all of which consume time and resources and may cause real harm.
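    The base-rate effect can be made concrete with a rough calculation. The prevalence, sensitivity and specificity below are invented round numbers for illustration, not figures from the NHS pilot.

    ```python
    # Screening 100,000 babies for a rare condition with a hypothetical test
    # that is 95% sensitive and 95% specific.
    population = 100_000
    prevalence = 0.005        # 0.5% of babies actually have the condition
    sensitivity = 0.95        # share of affected babies the test flags
    specificity = 0.95        # share of healthy babies the test clears

    affected = population * prevalence              # 500 babies
    healthy = population - affected                 # 99,500 babies

    true_positives = affected * sensitivity         # 475 correct flags
    false_positives = healthy * (1 - specificity)   # 4,975 healthy babies flagged

    share_false = false_positives / (true_positives + false_positives)
    print(round(share_false, 2))  # 0.91: roughly nine in ten positive results are false
    ```

    Even with an accuracy that sounds reassuring, the sheer number of healthy babies means false alarms dominate; this is the same arithmetic that was widely discussed during the COVID pandemic.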

    This issue already affects adult testing. For example, Alzheimer’s tests that measure early changes in the brain work well in memory clinics, where patients already show symptoms. But when these same tests are used on the general population, where most people are healthy, they produce false positives in up to two-thirds of cases. If genetic screening in newborns is rolled out in the same way, it could lead to similar problems: mislabelling healthy children as sick, and causing unnecessary worry and follow-up tests.

    So what’s the solution? It’s not to abandon genetic testing altogether – far from it. When used carefully, genomic data can offer real benefits, particularly for patients with symptoms or in research settings. But if we’re going to roll this out to every newborn, the surrounding infrastructure needs to be robust.

    That includes:

    • Clear, consistent communication: Risk scores must be explained in ways that emphasise uncertainty, not oversold as definitive predictions.

    • Support for parents: For consent to be truly informed, parents need help understanding that a genetic flag is not a diagnosis – and that many people with elevated risk never go on to develop the condition.

    • Training for clinicians: Many doctors still lack the tools to interpret and explain genetic information accurately and responsibly.

    • A national network of genetic counsellors: Genetic counsellors are essential for supporting families through testing and interpretation. But current numbers in England fall far short of what universal newborn screening would require.

    Genomic data holds great promise. But using it as a blanket tool for all newborns demands caution, clarity, and investment in communication and care. Without these safeguards, we risk turning healthy babies into patients-in-waiting.

    Correction: An earlier version of this article incorrectly stated that every baby born in the UK could have their genome sequenced under a new NHS initiative. In fact, the initiative applies to England only.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. The NHS plan to genetically test all newborns sounds smart – until it creates patients who aren’t sick – https://theconversation.com/the-nhs-plan-to-genetically-test-all-newborns-sounds-smart-until-it-creates-patients-who-arent-sick-259816

    MIL OSI Analysis

  • MIL-OSI Analysis: Air quality isn’t just bad in cities – here’s why and how we’re tracking pollution from upland fires

    Source: The Conversation – UK – By Rebecca Brownlow, Senior Lecturer in Environmental Science, Sheffield Hallam University

    Peatland burns over the reservoir in Langsett, a village in South Yorkshire. Wendy Birks, CC BY-NC-ND

    Early one October afternoon in 2023, thick grey smoke drifted across Sheffield’s western skyline. As much of the city became blanketed, residents turned to social media to complain about “bonfire smoke”, while others were forced to leave the city due to breathing difficulties.

    However, this smoke did not originate within the city. It was drifting in from the Peak District, more than nine miles away, where controlled heather burning was taking place on the moorlands. For around six hours, levels of fine particulate matter (PM2.5), tiny airborne pollutants that harm human health, exceeded 40 micrograms per cubic metre of air (µg/m³) and peaked at 70µg/m³, well above the guidelines recommended by the World Health Organization.

    This single incident points to the wider and largely invisible problem of the routine burning of the UK’s uplands. This can be a serious source of air pollution, but because most official air pollution monitoring concentrates on urban areas, the effects are overlooked. This is why we have started monitoring upland fires and the pollution they cause.

    Prescribed burning is a longstanding land management practice often used to control vegetation for grouse shooting or livestock grazing. It happens across a range of upland landscapes. Many of the areas being burned sit on deep peat, an organic-rich soil made from layers of slowly decomposed plant material formed over thousands of years in waterlogged conditions.

    Peatlands are incredibly important. They are one of the most carbon-rich ecosystems on the planet. In the UK, they cover around 12% of the land area and store an estimated 3.2 billion tonnes of carbon. This is equivalent to all the forests of Germany, France and the UK combined. Most of the UK’s peat is found in Scotland, but notable areas in England include the Peak District and North York Moors. However, their value goes well beyond carbon.

    Around 70% of Britain’s drinking water comes from upland areas that are largely peatland, and healthy peatlands help reduce flooding by slowing the flow of water from hills to towns and cities. They also provide vital habitats for birds, insects and rare plants, forming the UK’s largest area of semi-natural habitat.






    Despite their ecological importance, more than 80% of English peatlands are classified as degraded, often through historic air pollution, draining, overgrazing and, importantly, repeated burning.

    One hidden consequence of that burning is air pollution. These burns are often viewed as isolated rural events, but their effect on regional air quality can be substantial. On that day in Sheffield, pollution levels briefly rivalled those seen across the city during bonfire night, a well-known peak in urban air pollution.

    In response to that October event, our research team launched a new pilot monitoring network across part of the Peak District national park. This FireUp project combines air quality sensors, satellite data and community observations to detect and measure pollution from upland fires.

    Planned burning event in the Peak District captured via Copernicus Sentinel-2 data (2024), retrieved from Copernicus SciHub and processed by European Space Agency.
    CC BY

    By using a mix of technologies and local reporting, we have documented spikes in PM2.5 pollution that would have otherwise been missed. Our system offers a clearer picture of when and where fires occur, and how far their smoke spreads, opening the door for better planning and stronger protections for public health. But the problem is not just a lack of data; it is also a failure of regulation. England’s current upland burning regulations are limited on four fronts.

    Heather and grass burning regulations introduced in 2021 prohibit burning only on peat deeper than 40cm inside designated sites. That means 60% of upland peat is excluded from these protections.

    With more than 95% of PM2.5 monitors located in urban areas, smoke from moorland fires in remote rural locations is rarely registered on official networks.

    The resources for organisations responsible for enforcing regulations have shrunk over the last decade. Natural England, one of the government’s statutory bodies responsible for environmental protection, has experienced a 4% decrease in funding for 2024-25 compared to the previous year.

    Prosecutions for illegal burning are exceptionally rare, with satellite analyses pointing to a higher level of unlicensed activity than official records suggest.

    In short, narrow legal scope, limited monitoring coverage and under-resourced enforcement leave many prescribed burns undetected and unaccounted for, along with the health and environmental risks they carry.

    Our FireUp system improves fire detections and helps quantify the effects of air pollution from these burns. As the UK government reviews regulations as part of the 2025 heather and grass burning consultation for England, and as upland fire risk increases, this kind of evidence is essential, not just to track what is happening, but to help shape a healthier and better future for the UK’s uplands.

    Our next step is to develop a citizen science app that makes it easier for people to report peatland fire incidents and upland burning to help improve regulation and log the effects of changes in air quality.




    James is a member of the Welsh Government Clean Air Advisory Panel, and Promoting Awareness of Air Quality Delivery Group. James also sits on the Scottish Government’s Air Quality Advisory Group.

    Maria Val Martin receives funding from UKRI and is a member of the DEFRA Air Quality Expert Group.

    Rebecca Brownlow does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Air quality isn’t just bad in cities – here’s why and how we’re tracking pollution from upland fires – https://theconversation.com/air-quality-isnt-just-bad-in-cities-heres-why-and-how-were-tracking-pollution-from-upland-fires-258034

    MIL OSI Analysis

  • MIL-OSI Analysis: With fresh songs and a spectacular set, Disney’s Hercules musical goes the distance

    Source: The Conversation – UK – By Emma Stafford, Professor of Greek Culture, University of Leeds

    “Whose daring deeds are great theatre? Hercules!” So sing the Muses, as they close act one of Disney’s Hercules, which opened at London’s Theatre Royal, Drury Lane last week.

    The 1997 Disney animation this new show is based on is, of course, already a successful musical film. The hit song Go the Distance was nominated for a Golden Globe and an Academy Award. The new West End version includes all the film’s familiar musical numbers, notably The Gospel Truth (which is reprised as many as six times), but also I Won’t Say (I’m In Love), Zero to Hero and A Star is Born.

    There are plenty of new original songs, too, by the composer Alan Menken and lyricist David Zippel.

    Some of the changes to the film’s story, however, are puzzling. In place of adoptive mortal parents Amphitryon and Alcmene, Hercules is born to a single mother, who is given a new (modern Greek) name and her own song: Despina’s Lullaby.




    More understandable is the skipping over of Hercules’ childhood, allowing Luke Brady’s engaging Hercules to emerge fully grown not too long into the show.

    Likewise, Meg (Mae Ann Jorolan) is made even feistier than her 1990s incarnation. Instead of being in the clutches of the centaur Nessus when Hercules first meets her, she has two Hydra-venom traders in a headlock, and she sings “let me tell you a little something about saving women who don’t need to be saved” in the great new duet Forget About It.

    Fans of the film may be disappointed that Pegasus – Hercules’s trusty flying steed – has been written out, though he is nicely referenced through a topiary cameo. There was, however, effective use of puppetry for a suitably dramatic Hydra – the monster who grows two more heads for every one Hercules cuts off.

    Other highlights of stage-trickery include the contributions of air sculptor Daniel Wurtzel. The spirits of the dead are represented by light material floating in a stream of air, and statues of Zeus and Hera appear to come to life – I really don’t know how they did it.

    In another controversial change, the shape-shifting comedy sidekicks Pain and Panic have been downgraded to the humans Bob (Craig Gallivan) and Charles (Lee Zarrett). They are an endearing pair nonetheless, who get their own new song Getting Even.

    Indeed, there’s more of an emphasis on both humanity and community throughout the show. In place of Danny DeVito’s satyr Philoctetes, with his hero-training facility based on a remote island, Phil (Trevor Dion Nicholas) operates out of his local pub – Medusa’s bar – with the help of a whole bunch of neighbours from Hercules’ hometown of Thebes.

    Also toned down is Hades, at least compared to James Woods’ flamboyant character in the animated film. Stephen Carlisle (previously seen as Scar in The Lion King) plays Hades more in the tradition of the upper-class British villain we all love to boo. At the end of the show, however, he becomes literally larger than life as a giant puppet. The animation’s battle of the gods against the Titans is turned into a highly stylised confrontation between this turbo-charged Hades and everyone else.

    The trailer for Hercules.

    The show’s visuals, masterminded by Dane Laffrey, are undeniably impressive. Even before the curtain goes up, the theatre’s usual proscenium arch has been transformed into a monumental Greek temple facade. Thereafter the sets are dominated by four massive pairs of Doric columns, which glide smoothly into different formations. The backdrop to the gods’ home on Olympus is a giant gold sunburst motif, and everything to do with the gods is golden.

    Video-projected backgrounds (by George Reeve) feature further temples and a mosaic texture – really a Roman touch. But a more properly Greek element is the use of vases in the Attic black-figure style. These are seen especially in the early “young Hercules” scene in the market-place and again to go with the Zero to Hero line “they slapped his face on every vase”.

    And finally, the real stars of the show are the five Muses (played by Sharlene Hector, Brianna Ogunbawo, Robyn Rose-Li, Kamilla Fernandes and Kimmy Edwards the evening I attended).

    Their role – as a cross between the chorus of a Greek tragedy and a gospel choir – is even bigger here than in the animation, in which they were such an innovative feature. They must spend the whole evening on costume changes, appearing in a series of fabulous frocks (designed by Gregg Barnes and Sky Switser), each more spectacular than the last.

    Some early reviews have been critical of the show as lacking in emotional depth, and it’s true that the more serious theme of “finding where I belong” is subservient to the high-octane razzmatazz – but I suspect this won’t matter to the majority of West End audiences. Disney’s Hercules is indeed great (musical) theatre.

    Emma Stafford has received funding from the AHRC for the Hercules Project (https://herculesproject.leeds.ac.uk/).

    ref. With fresh songs and a spectacular set, Disney’s Hercules musical goes the distance – https://theconversation.com/with-fresh-songs-and-a-spectacular-set-disneys-hercules-musical-goes-the-distance-260024


  • MIL-OSI Analysis: From sore muscles to smartwatches and stubborn belly fat: answers to six of the most common fitness questions

    Source: The Conversation – UK – By Paul Hough, Lecturer in Sport & Exercise Physiology, University of Westminster

    PeopleImages.com – Yuri A/Shutterstock

    In a world flooded with fitness fads and “quick-fix” workout plans, solid evidence can often get drowned out. Yet the science is clear: jogging for just five to ten minutes a day can lower your risk of dying from heart disease and even reduce your overall risk of dying from any cause. This kind of research rarely gets the attention it deserves.

    As a sport and exercise scientist, I’ve been asked hundreds of fitness questions over the years by athletes, clients and on social media. Many of these questions are rooted in persistent myths or internet misinformation. Here are six of the most common ones, starting with one of the most popular:

    1. What exercise is best for fat loss?

    No specific exercise can reduce fat in one area, despite what ads or fitness influencers might promise.

    Instead, losing body fat comes down to maintaining a caloric deficit over time: burning more calories than you consume. If you eat more than you burn, even the most intense workouts won’t shift body fat.

    That said, exercise plays a key role in fat loss. Combining a healthy diet with physical activity is the most effective strategy for fat loss and long-term weight maintenance. Exercise helps by burning calories, improving sleep regulation, increasing confidence, and promoting metabolic adaptations like improved insulin sensitivity.

    Resistance training is especially important. It helps preserve muscle during calorie restriction, meaning the weight you lose is more likely to come from fat rather than lean tissue.




    Read more:
    Weight loss: why you don’t just lose fat when you’re on a diet


    2. Does fasting before exercise help you burn more fat?

    Fasted exercise (working out on an empty stomach, typically in the morning) increases fat oxidation – the metabolic process in which fatty acids are broken down to produce energy – because blood glucose and insulin levels are low and cortisol is elevated.

    But does it lead to greater fat loss overall? Not really. Studies comparing fasted versus fed exercise show no significant differences in long-term fat loss when total calories are matched. In short: fasted workouts might burn more fat during the session, but this doesn’t translate into greater weight loss over time.

    3. Why do my muscles feel sore two days after training?

    That ache you feel 24 to 48 hours after an intense or unfamiliar workout is called delayed onset muscle soreness (Doms). The delay in soreness is caused by inflammation, which takes time to fully develop. The inflammation is beneficial because it signals your body to rebuild stronger tissue by breaking down damaged proteins and building new ones. In response to the inflammation, the muscle and connective tissues release “protein messengers” that sensitise pain receptors, which can make even basic movements feel uncomfortable.

    Doms often peaks two days after exercise. But the good news? Your body adapts quickly. Doms is a normal part of muscle adaptation that enables you to experience less soreness when you next perform the same activity.

    4. Should I train if my muscles are sore?

    If your muscles feel sore after exercise, they are temporarily weakened and it’s best to avoid high-intensity exercise.

    Mild Doms? Low-intensity, low impact activities like swimming or cycling can help improve blood flow and reduce stiffness, easing the sensation of soreness. However, light activity won’t necessarily speed up the recovery process. Another option is to train different muscle groups, such as the upper body if your legs are sore.

    5. Is running bad for your knees?

    This myth is surprisingly persistent but the evidence says otherwise. A 2023 study found no higher rates of knee osteoarthritis among runners compared to non-runners. In fact, running may even strengthen cartilage by stimulating collagen production.

    That said, certain risk factors, such as previous knee injury, excess body weight, or ramping up mileage too quickly, can raise your risk of knee pain or injury. But with smart training, including resistance work and gradual progression, running can be safe and beneficial for your knees.

    6. Do smartwatches accurately track calories burned?

    Not quite. While wearables can give a rough estimate of your energy expenditure, they’re not precise enough to rely on for dietary or fitness planning.

    A 2022 study found that smartwatches significantly miscalculated calories burned across different activities like walking, cycling and resistance training. These findings align with a wider systematic review that concluded most fitness trackers are inaccurate for energy expenditure.

    These devices can still be helpful for tracking heart rate trends, daily step counts and staying motivated but if you’re planning your diet or workouts around the calorie numbers they give you, it’s time to think again.




    Read more:
    Wearable fitness trackers can make you seven times more likely to stick to your workouts – new research


    When it comes to exercise and fat loss, there’s no one-size-fits-all solution – and no shortcut. The basics still matter: eat well, move often and listen to your body. And when in doubt, stick with exercise and nutrition advice supported by science – not what’s trending online.

    Paul Hough does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. From sore muscles to smartwatches and stubborn belly fat: answers to six of the most common fitness questions – https://theconversation.com/from-sore-muscles-to-smartwatches-and-stubborn-belly-fat-answers-to-six-of-the-most-common-fitness-questions-259305


  • MIL-OSI Analysis: Radical listening: two big ideas and six core skills that could help you connect more deeply with others

    Source: The Conversation – UK – By Christian van Nieuwerburgh, Professor of Coaching and Positive Psychology, RCSI University of Medicine and Health Sciences

    brizmaker/Shutterstock

    Even though we live in a constantly connected world, more people feel lonely than ever before. According to public polling company Gallup, nearly a quarter of the world’s population reports feeling lonely.

    At the same time, we’re overwhelmed by distractions: 80% of desk-based workers admit to losing concentration during meetings. And with just a scroll through our newsfeeds, we see growing polarisation and political division on a global scale.

    In such uncertain times, the practice of radical listening – listening with greater intention – offers a way to reconnect and to foster a deeper sense of empathy, engagement and hope.

    In our book, Radical listening: the art of true connection, which I co-authored with positive psychology expert Dr Robert Biswas-Diener, we explore how radical listening can improve motivation, wellbeing and meaningful connection. To become a radical listener, you’ll need to embrace two core ideas and develop six essential skills.

    The first idea is about clarifying your intention when listening. At the heart of radical listening is the belief that we always listen with a purpose — even if we’re not fully aware of it. For example, we might listen to a podcast with the intention of learning something, or attend a comedy show with the goal of being entertained.

    When we set a clear intention, we become more attuned to what matters. If your aim is to show appreciation during a conversation, you’ll naturally tune in to the qualities you value in the other person — a thoughtful comment, a kind gesture. If you want to elevate your listening, enter conversations with a positive, deliberate intention.

    The second idea is about matching your listening intention to what will be most helpful for your conversation partner. This is grounded in the principle of optimal matching of social support. Biswas-Diener explains it well here: meaningful conversations happen when there’s alignment between what the speaker needs and what the listener offers.

    This may sound obvious, but we often miss the mark. Say your partner has had a tough day. Should you offer advice? Reassure them with a personal story? Just listen and empathise? Change the subject to distract them? The most effective response might be asking: “What do you need from me right now?” When you get the match right, you’ll feel the connection.

    Six core skills

    We all have our own listening styles: empathetic, animated, quiet, curious. The good news is that everyone can improve their listening by practising these six core skills:

    1. Noticing: This means scanning for subtle but relevant cues: body language, facial expressions, changes in tone, or unusual word choices. Noticing shows you’re fully present. For example: “I noticed you lit up when you talked about your previous job.”

    2. Quieting: Managing distractions, both external and internal. Great listeners reduce interruptions by putting away their phones or turning off notifications – but also by calming their internal chatter. Being rested and mentally present makes quieting possible.

    3. Accepting: Respecting others’ right to their views – even when you disagree. Acceptance doesn’t mean agreement. It means acknowledging that others have a valid perspective. Try practising this by listening to someone whose views challenge your own.

    4. Acknowledging: Validating your conversation partner’s experiences and contributions. Look for opportunities to highlight their strengths, reflect their feelings and show empathy through both your words and expressions.

    5. Questioning: Curiosity is a cornerstone of radical listening. Ask questions that express genuine interest and invite deeper sharing. Try: “What was it about that moment that made it so special for you?”

    6. Interjecting: Jump in briefly with minimal encouragers – short verbal or nonverbal cues that show you’re engaged without interrupting or taking over – then jump back out. They’re a key skill in radical listening because they let the speaker know you’re present and responsive while keeping the focus on them. Think of it as offering small bursts of energy, like “That’s amazing!” or “Wow, I didn’t know that.”

    Radical listening is a hyper-intentional, purposeful and proactive approach to connection. It’s about helping others feel seen, valued and heard. The benefits for your conversation partner are clear — but there are also real advantages for you. You’ll build deeper relationships, experience more satisfying interactions, and be able to create trust quickly.

    In a world of loneliness, distraction, and division, radical listening isn’t just a nice idea – it’s a powerful tool for human connection.


    This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

    Christian van Nieuwerburgh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Radical listening: two big ideas and six core skills that could help you connect more deeply with others – https://theconversation.com/radical-listening-two-big-ideas-and-six-core-skills-that-could-help-you-connect-more-deeply-with-others-256289


  • MIL-OSI Analysis: What happens to your brain when you watch videos online at faster speeds than normal

    Source: The Conversation – UK – By Marcus Pearce, Reader in Cognitive Science, Queen Mary University of London

    ‘Hare speed, please.’ Pressmaster

    Many of us have got into the habit of listening to podcasts, audiobooks and other online content at increased playback speeds. For younger people, it might even be the norm. One survey of students in California, for instance, showed that 89% changed the playback speed of online lectures, while there have been numerous articles in the media about how common speedy viewing has become.

    It is easy to think of some advantages to watching things more quickly. It can let you consume more content in the same amount of time, or go through the same piece of content a couple of times to get the most out of it.

    This could be particularly useful in an educational context, where it might free up time for consolidating knowledge, doing practice tests and so forth. Watching quickly is also potentially a good way of making sure you sustain your attention and engagement for the entire duration to avoid the mind wandering.

    But what about the disadvantages? It turns out that there are one or two of those as well.

    When a person is exposed to spoken information, researchers distinguish three phases of memory: encoding the information, storing it and subsequently retrieving it. At the encoding phase, it takes the brain some time to process and comprehend the incoming speech stream. Words must be extracted and their contextual meaning retrieved from memory in real time.

    People generally speak at a rate of about 150 words per minute, though doubling the rate to 300 or even tripling it to 450 words per minute is still within the range of what we can find intelligible. The question is more about the quality and longevity of the memories that we form.

    Incoming information is stored temporarily in a memory system called working memory. This allows chunks of information to be transformed, combined and manipulated into a form that is ready for transfer to the long-term memory. Because our working memory has a limited capacity, if too much information arrives too quickly it can be exceeded. This leads to cognitive overload and loss of information.

    Speedy viewing and information recall

    A recent meta-analysis in this area examined 24 studies of learning from lecture videos. The studies varied in their design but generally involved playing a video lecture to one group at original speed (1x) and playing the same video lecture to another group at a faster speed (1.25x, 1.5x, 2x and 2.5x).

    Just like in a randomised controlled trial used to test medical treatments, participants were randomly assigned to each of the two groups. Both groups then completed an identical test after watching the video to assess their knowledge of the material. The tests either asked them to recall information freely, used multiple-choice questions to assess their recall, or both.

    Faster playback may not help with study.
    V.Studio

    The meta-analysis showed that increasing playback speed had increasingly negative effects on test performance. At speeds of up to 1.5x, the cost was very small. But at 2x and above, the negative effect was moderate to large.

    To put this in context, if the average score for a cohort of students was 75% with a typical variation of 20 percentage points in either direction, then increasing the playback speed to 1.5x would bring down the average person’s result by 2 percentage points. And increasing the playback speed to 2.5x would lead to an average loss of 17 percentage points.

    Older people

    Interestingly, one of the studies included in the meta-analysis also investigated older adults (aged 61-94) and found that they were more affected by watching content at faster speeds than younger adults (aged 18-36). This may reflect a weakening of memory capacity in otherwise healthy people, suggesting that older adults should watch at normal speed or even slower playback speeds to compensate.

    However, we don’t yet know whether using fast playback regularly reduces its negative effects. It could be that younger adults simply have more experience of fast playback and are therefore better able to cope with the increased cognitive load – but whether deliberately practising faster playback would actually help people retain more information remains an open question.

    Another unknown is whether there are any long-term effects on mental function and brain activity from watching videos at increased playback speeds. In theory, such effects could be positive, such as a better ability to handle increased cognitive load. Or they could be negative, such as greater mental fatigue resulting from increased cognitive load, but we currently lack the scientific evidence to answer this question.

    A final observation is that even if playing back content at, say, 1.5 times the normal speed doesn’t affect memory performance, there is evidence to suggest the experience is less enjoyable. That may dampen people’s motivation to learn, making them more likely to find excuses not to. On the other hand, faster playback has become popular, so perhaps it matters less once people are used to it – hopefully we’ll understand these processes better in the years to come.

    Marcus Pearce does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What happens to your brain when you watch videos online at faster speeds than normal – https://theconversation.com/what-happens-to-your-brain-when-you-watch-videos-online-at-faster-speeds-than-normal-259930



  • MIL-OSI Analysis: Row over damage to Iran’s nuclear programme raises questions about intelligence

    Source: The Conversation – UK – By Robert Dover, Professor of Intelligence and National Security & Dean of Faculty, University of Hull

    The ongoing debate over whether Iranian nuclear sites were “obliterated”, as the US president and his team insist, or merely “damaged”, as much of the intelligence suggests, should make us pause and think about the nature and purpose of intelligence.

    As Donald Rumsfeld famously said: “if it was a fact, it wouldn’t be called intelligence”.

    The recorded fate of the Iranian nuclear sites will be decided by the collection and assessment of difficult-to-reach raw intelligence feeds. These will include imagery, technical, communications and human intelligence, among many secret techniques.

    The classified conclusions of these efforts are unlikely to make their way into the public realm, unless there is a Congressional or Senate inquiry, like the one held after 9/11.

    So, why does it matter?

    There has been strong public interest in intelligence assessments since 9/11 and the 2003 invasion of Iraq. Intelligence is often only seen in public when something has gone wrong – either because something was missed or because the public has been misled. Inquiries into 9/11 criticised intelligence agencies for not piecing together individual strands of intelligence into a whole picture that would have revealed the plot and the impending attack.

    Inquiries into the approach to the 2003 Iraq war suggested intelligence agencies had allowed their assessments to become shaped by political need, or had failed to adequately caution about what they did not know.

    Successful intelligence operations nearly always mean that something damaging to the country or the public has been prevented. If agencies celebrated these successes loudly they might reveal something about their techniques and reach that is useful to our adversaries. So, our understanding of intelligence tends to be framed by popular culture – or by the inquiries around intelligence failures.

    From these two sources, intelligence is simultaneously all-seeing and deeply flawed. Add in narratives around the “deep state” – a shorthand that accuses unnamed and publicly unaccountable government officials of frustrating the will of the people – and it should be no surprise that the public and politicians are sometimes confused about security intelligence and published assessments.

    In the case of the Iranian nuclear facilities, the importance of the intelligence picture is focused around politics, diplomacy and security. Donald Trump would obviously prefer an official narrative that his decision and orders have put back the Iranian nuclear programme by years. This is why he talks about the sites being obliterated. And it’s why his director of national intelligence, Tulsi Gabbard, has affirmed that her intelligence-led assessment agrees. That said, she has opted not to give testimony to the Senate.

    When it comes to diplomacy, the judgement of intelligence officials could do one of two things. It could either place Iran in a poorer negotiating position with no nuclear programme to provide it with the ultimate security. Or it could allow Tehran to present the country as an emerging nuclear power, with the added muscle that implies. This judgement will have an impact on Israel’s need to preemptively contain Iran. And in security terms, the classified judgement will also help to shape the next steps of the US president, his diplomats and his armed forces.

    Tulsi Gabbard, the US director of national intelligence, delivers the annual threat assessment. She testifies that Iran is not actively building a nuclear weapon.

    The assessment given to the public may well be different from the one held within the administration. While uncomfortable for us outside of government circles, this is often a perfectly reasonable choice for a government to make. Security diplomacy is best done behind closed doors. Or at least, this used to be the case. Now Trump appears to be remaking the art of statecraft in public with his TruthSocial posts and his earthy and authentic language in press conferences.

    Misinformation and public mistrust

    Having a large gap between the secret intelligence assessment and the publicly acknowledged position can have stark consequences for a government. The 1971 Pentagon Papers are a good example of this.

    These were prepared for the government about the progress of the Vietnam war and leaked to the press. The leaks highlighted the inaccuracy in government reporting to the American public about the progress of the war. The fallout included a number of official inquiries that shone a negative light on intelligence agencies. They also resulted in a strengthening of media freedoms.

    Similarly, the 2003 Iraq war damaged the credibility of the US intelligence community. It became clear that the unequivocal statements about Iraqi possession of weapons of mass destruction were overstated and under-evidenced. The loss of trust, limitations on the executive use of intelligence and the losses to the US in blood and treasure in the Iraq campaign are still being felt in American politics.

    Last, the Snowden leaks of 2013 highlighted the mismatch between what was understood about intelligence intrusion into private communications data, including internet browsing activities, and what was happening in the National Security Agency through programmes such as Prism.

    The Snowden leaks had an impact on America’s standing with its allies and resulted in the USA Freedom Act in 2015. This imposed some limits on the data that US intelligence agencies can collect on American citizens and also clarified the use of wiretaps and tracking “lone wolf” terrorists.

    The Snowden affair also fuelled a growing narrative about unaccountable deep state activity that has foregrounded online phenomena such as the conspiracy site QAnon. It has also boosted some populist politics that point to, and feed off, public suspicion of mass surveillance and hidden government activities.

    The lessons for the current debate are clear. The first is that using intelligence assessments to justify military actions contains enduring hazards for governments, given the propensity among public servants for leaking.

    From that, it naturally follows that when published intelligence is shown to be incorrect, the unintended consequence for governments is a loss of trust and having fewer freedoms to make use of intelligence to protect the nation state.

    Robert Dover has previously received research funding from the AHRC to examine lessons that can be drawn from intelligence and he and Michael Goodman published an edited collection from this project.

    ref. Row over damage to Iran’s nuclear programme raises questions about intelligence – https://theconversation.com/row-over-damage-to-irans-nuclear-programme-raises-questions-about-intelligence-260021

    MIL OSI Analysis

  • MIL-OSI Analysis: How tennis takes a toll: the leg and foot injuries players need to watch out for

    Source: The Conversation – UK – By Craig Gwynne, Senior Lecturer in Podiatry, Cardiff Metropolitan University

    When Novak Djokovic limped out of the 2024 French Open with a torn meniscus in his knee, all eyes turned to whether he’d be fit for Wimbledon. And when Nick Kyrgios pulled out of Wimbledon for the third year running earlier this month due to a knee injury, fans were disappointed, but medical experts may not have been surprised.

    These weren’t freak accidents. They were reminders of just how much stress elite tennis puts on the legs and feet. But the same risks apply to anyone picking up a racket this summer. From Centre Court to local parks, tennis takes a toll on the body that many players don’t appreciate.

    Tennis demands explosive movement like lunges, pivots, sprints and sudden stops. Every serve starts with a push from the toes. Every rally shifts weight between the heel and forefoot. Unlike sports with linear movement, like sprinting, tennis places constant multi-directional stress on the feet and ankles – two of the most frequently injured body parts in the game.

    Grass courts like Wimbledon’s are notoriously slick, even when dry. They offer less traction than hard courts and can increase the risk of slipping and twisting injuries. Ankle sprains and midfoot stress injuries are more common on these surfaces, particularly for players not wearing surface-appropriate shoes.

    But problems aren’t limited to grass. Hard courts often trigger repetitive strain in the heel or forefoot. And while clay is more forgiving, it still demands relentless lateral movement. No matter the surface, tennis puts pressure on the small joints and bones of the foot.

    Consequently, even the world’s best aren’t immune. Nick Kyrgios’s long-running foot issues have disrupted multiple seasons for him. Rafael Nadal has battled Mueller-Weiss syndrome, which is a rare condition that damages the navicular bone in the foot and requires specialist treatment and custom shoe-inserts.

    In April 2024, French player Arthur Cazaux rolled his ankle at the Barcelona Open, posting a viral image of the swelling that underscored how brutal the sport can be.

    What science says about foot injuries in tennis

    Foot and ankle injuries in tennis often don’t result from one big moment — they build slowly over time. Stress fractures in the navicular and metatarsals (small bones in the midfoot) are especially common in players who train and play often. These bones are repeatedly loaded during sprints, pivots and push-offs, and can become damaged without any obvious trauma.

    Sprained ankles are another common problem. The ligaments on the outside of the ankle (known as the lateral ligaments) are particularly at risk during sudden changes in direction, especially on slippery surfaces. This is a major feature of tennis movement and makes ankle injuries hard to avoid without good support or strength.

    Foot mechanics, which is the way the foot absorbs, transfers and responds to forces during movement, also play a key role in injury risk. Research shows that players shift their body weight across different areas of the foot depending on the shot. Over time, repeated pressure on the forefoot or heel can lead to tendon strain or bone stress injuries.

    Ankle flexibility and lower limb strength also matter. Studies show that players with poor ankle mobility or control are not only more likely to lose power in their shots, they’re also more prone to overloading the foot and ankle during play.

    Despite this, foot and ankle injuries still get overlooked in many tennis injury prevention plans. Most focus on the knees, hips or shoulders, leaving one of the most injury-prone parts of the body without enough attention or support.

    The Wimbledon effect

    Wimbledon inspires thousands to pick up a racket every summer. But this seasonal spike in participation is often matched by a rise in injuries, particularly among casual players.

    Studies show that leg and foot injuries are prevalent among amateur tennis players. Ankle sprains, Achilles tendon issues and plantar fasciitis (pain in the bottom of the foot) are among the most common complaints.

    Footwear is one of the main reasons for this. Professionals wear tennis-specific shoes tailored to surface type. Grass-court shoes, for example, have shallow pimples for traction without damaging the turf. But many recreational players hit the court in running shoes, which are designed for straight-line motion, not side-to-side movement. This increases the risk of slips, ankle rolls and stress to the plantar fascia.

    Others ignore foot pain, assuming it’s normal or age-related. But aching arches, bruised heels or soreness across the midfoot may signal deeper issues like tendon overload, early stress fractures or plantar tissue damage.

    How to protect your feet

    So if you’re heading out to play tennis this summer, whether at a club or on the local court, a few small changes can help protect your feet:

    1. Wear tennis shoes designed for the surface. Don’t rely on general trainers or running shoes.

    2. Warm up properly. Include ankle rolls, calf raises and lateral drills (side-to-side movements).

    3. Strengthen your feet between matches with balance work or resistance-band exercises. You can also do towel curls, which involve placing a towel on the floor and gripping it towards your arch with your toes.

    4. Listen to pain. Discomfort in the heel, arch or midfoot isn’t “just tiredness”. It may be a warning sign.

    5. Replace worn shoes regularly, especially if you play on grass where grip is crucial.

    If you do sustain a minor ankle sprain, apply the “POLICE” principle:

    Protection = Avoid activities that aggravate pain and further injury.

    Optimal loading = Gentle, controlled movement and weight-bearing as tolerated, aiming to promote tissue healing and prevent stiffness.

    Ice = Apply ice to reduce swelling and pain, typically for 15-20 minutes every few hours.

    Compression = Use an elastic bandage to help reduce swelling, but be mindful of circulation.

    Elevation = Keep the injured ankle elevated to minimise swelling.

    If pain doesn’t ease after 48 hours, or worsens during activity, speak to a podiatrist or physiotherapist. Stress fractures in particular can worsen without rest.

    Wimbledon is a celebration of tennis at its most graceful and exciting. But it’s also a high-impact sport that places a lot of strain on the body.

    Whether you’re serving aces at your club or just hitting a couple of balls with friends, your feet are your secret weapon and your first line of defence. Take care of them, and you’ll stay in the match for longer.

    Craig Gwynne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How tennis takes a toll: the leg and foot injuries players need to watch out for – https://theconversation.com/how-tennis-takes-a-toll-the-leg-and-foot-injuries-players-need-to-watch-out-for-258872

    MIL OSI Analysis

  • MIL-OSI Analysis: New special tribunal for Ukraine will pave the way for holding Russian leaders to account for the invasion

    Source: The Conversation – UK – By Andrew Forde, Assistant Professor – European Human Rights Law, Dublin City University

    A special tribunal has been established by the international human rights organisation the Council of Europe (CoE) and the Ukrainian government to try crimes of aggression against Ukraine, which could be used to hold Vladimir Putin and others to account for the February 2022 invasion and war crimes committed since.

    The Ukrainian president, Volodymyr Zelensky, signed an agreement with CoE secretary general, Alain Berset, on June 25, setting up the special tribunal. Subject to it securing the necessary political backing and budget, the tribunal will be established within the framework of the CoE (which is not part of the European Union).

    Work on the first phase of the court could progress in 2026. In his speech to the Council of Europe parliamentary assembly in Strasbourg, Zelensky was cautious in his optimism but stressed that the agreement was “just the beginning”.

    “It will take strong political and legal cooperation to make sure every Russian war criminal faces justice – including Putin,” he said. He knows, through years of hard experience as he travelled the world seeking help from Ukraine’s allies, that political support can be fleeting.

    A new Nuremberg?

    Inspired by ad hoc courts established after major conflicts, such as the Nuremberg tribunal after the second world war or, more recently, the International Criminal Tribunal for the former Yugoslavia (ICTY) in the 1990s, the Ukraine special tribunal has been established with the aim of holding to account the perpetrators of the first full-scale armed conflict in Europe in the 21st century.

    The prohibition against the crime of aggression is a basic principle of international law, and a key part of the UN charter.

    In principle, the crime of aggression should be prosecuted by the International Criminal Court (ICC). But as Russia is not a party to the Rome Statute which underpins the court, that option was ruled out. Similarly, Russia’s veto on the UN security council meant that it would be impossible in practice to set up a court under the mandate of the UN – as the ICTY was in 1993.

    The Ukraine special tribunal, which was developed by a Core Group, made up of states plus the EU and the Council of Europe, seeks to fill an obvious accountability gap. If the illegal invasion is left unpunished, it would set a dangerous precedent.

    Such impunity would embolden Russia and inspire others with revanchist ambitions, undermining an already shaky international order. The US, which was instrumental in setting up the Core Group under the presidency of Joe Biden, withdrew in March 2025 when Donald Trump took office.

    The statute of the special tribunal sets out that the court will be based on Ukrainian law and will have a strong link to the country’s legal system. Ukraine’s prosecutor-general will play a key role in the proceedings, referring evidence for further investigation by the tribunal. But it will be internationally funded with international judges and prosecutors, and strong cooperation with the International Criminal Court. It is likely to be based in the Hague – although this has yet to be confirmed.

    The need for accountability for the illegal invasion of Ukraine was stressed in a resolution of the UN general assembly in February 2023 as the war headed into its second year. The resolution, which calls for “appropriate, fair and independent investigations and prosecutions at the national or international level” to “ensure justice for all victims and the prevention of future crimes” was approved by an overwhelming majority of 141 states. Any country in the world can join this core group to support its establishment.

    Holding leaders accountable

    Unlike those of previous international courts, the tribunal’s caseload is likely to be extremely narrow. There are likely to be dozens of charges rather than hundreds or thousands, which is perhaps reassuring in terms of managing costs.

    The tribunal will focus on those “most responsible” including the so-called “troika”: the president Vladimir Putin, prime minister Mikhail Mishustin and the minister for foreign affairs Sergey Lavrov. Charges may also be levelled against the leadership of Belarus and North Korea for their role in aiding, abetting and actively participating in the war of aggression. But don’t expect Kim Jong-un or Alexander Lukashenko in the dock anytime soon.

    The Court has opted for a novel approach to a longstanding customary rule by noting that heads of state are not functionally immune from prosecution. But it adds that indictments won’t be confirmed until such time as the suspect is no longer in office.

    Trials can take place in absentia if the accused fails to attend and all reasonable steps taken to apprehend them have failed. But, like the ICC, the court will still rely on states to apprehend and physically transfer indicted individuals in due course. This will inevitably limit the chances of seeing any of the key individuals actually in a court, something that has also dogged the ICC.

    The fact that a tribunal has now been set up is a major development in international criminal justice. But it is now in a sort of purgatory, existing and not existing at the same time. To become operational, another treaty known as an enlarged partial agreement must be signed by interested states. This will have to be ratified by many national parliaments, depending on their constitutions. This process could take years.

    But simply by creating the framework for the tribunal, the Council of Europe has demonstrated its commitment to ensuring accountability. In a further development, the European Court of Human Rights delivers its long-awaited judgment in the case of Ukraine and the Netherlands v Russia on July 9.

    This concerns “complaints about the conflict in eastern Ukraine involving pro-Russian separatists which began in 2014, including the downing of Malaysia Airlines flight MH17, and the Russian military operations in Ukraine since 2022”. The judgement will add further momentum to these accountability efforts.

    Symbolic as it may seem, this week’s agreement creates a real opportunity for the international community to send a message that impunity for international aggression is intolerable – not just for the victims, but for all who believe in the rule of law.

    Andrew Forde is affiliated with Dublin City University (Assistant Professor, European Human Rights Law). He is also, separately, affiliated with the Irish Human Rights and Equality Commission (Commissioner).

    ref. New special tribunal for Ukraine will pave the way for holding Russian leaders to account for the invasion – https://theconversation.com/new-special-tribunal-for-ukraine-will-pave-the-way-for-holding-russian-leaders-to-account-for-the-invasion-260022

    MIL OSI Analysis

  • MIL-OSI Analysis: Class and masculinity are connected – when industry changes, so does what it means to ‘be a man’

    Source: The Conversation – UK – By Sophie Lively, PhD Candidate in Human Geography, Newcastle University

    Tero Vesalainen/Shutterstock

    On July 3, I’ll be discussing Youth, Masculinity and the Political Divide at an event with The Conversation and Cumberland Lodge at Newcastle University (get your tickets here).

    Young people involved in the panel have brought up class and the decline of industry as topics for discussion. This is particularly fitting, given my ongoing PhD research exploring masculinity and the contemporary lives of working-class men in Tyneside.

    Tyneside is an area in north-east England which was once a major centre of Britain’s Industrial Revolution. Its coal mining, shipbuilding and heavy engineering industries were seen as the backbone of the region, upheld by a large industrial skilled working class.

    As with many northern towns, widespread deindustrialisation, predominantly around the 1970s and 1980s, dramatically changed the area. At its peak, Swan Hunter – a globally recognised shipyard and significant employer in Wallsend (North Tyneside) and the surrounding area – employed up to 12,000 people. By 2005, the year before its closure, only 357 direct workers were employed.

    The process of deindustrialisation affected not just the type of work that was done, but how men in the region saw themselves. As I am currently researching, the effects of this ring true today.



    Boys and girls are together facing an uncertain world. But research shows they are diverging when it comes to attitudes about masculinity, feminism and gender equality.

    Social media, politics, and identity all play a role. But what’s really going on with boys and girls? Join The Conversation UK and Cumberland Lodge’s Youth and Democracy project at Newcastle University for a discussion of these issues with young people and academic experts. Tickets available here.


    Like other regions in Britain, Tyneside shifted from mostly masculine manual labour to a largely “feminised” service sector. Informal work, subcontracting and part-time work proliferated while rates of trade unionism declined.

    Changes in industry and understandings of social class have a surprising amount to do with how we think about masculinity. Paul Willis’ 1977 seminal study Learning to Labour explores how the links between social class and masculinity are forged early in life.

    Our ideas about masculinity are produced, reinforced and upheld through institutions such as schools, the workplace and media. There is no singular “form” of masculinity – men perform it in many different ways. There is, however, hegemonic masculinity. This is the most dominant form of masculinity in a society at any given time, valued above other forms of gender identities that do not match up to the dominant ideal.

    “Traditional” views of masculinity were particularly prevalent during the height of industry in the area. These views centred around ideas of men as providers and ideas of toughness. Value was placed on a willingness (or need) to do physical and often hazardous labour.

    The demise of “masculine” labour in areas such as Tyneside disrupted not only economic stability but also male identity and pride. As broader socioeconomic shifts unfolded across England, many working-class men found themselves outside of those traditional masculine ideals around labour.

    This has been well documented, particularly in ethnographic work such as Anoop Nayak’s 2006 study Displaced Masculinities. This key text explored how working-class boys navigate “what it is to be a ‘man’ beyond the world of industrial paid employment”.

    Class and identity in a changing world

    Early findings from my research suggest that today, class (and working-class identity) is not as salient in men’s everyday lives. Participants in my study have spoken about class, but it does not overtly feature in how they make sense of their identities. As one man put it: “Class means you have to use yourself to earn money. Your labour, that’s what I understand by it, but I’ve never thought about class much.”

    The quayside in Newcastle-upon-Tyne.
    Philip Mowbray/Shutterstock

    What happens to men when an area’s strong working-class identity declines, but there is no narrative to replace it? There is a risk that harmful ideas about masculinity step in to fill a gap left by declining industry and continued economic inequality. We have seen this in extensive research in the US about masculinity, class and the appeal of the far right.

    This is why class must be part of the discussion around the rise of the “manosphere” – online communities and influencers sharing content about masculinity that can veer into misogyny. Class politics also presents a positive and unifying alternative.

    It is imperative that working-class areas and the people within them aren’t portrayed as somehow inherently susceptible to, or represented by, the narratives of the manosphere. Indeed, the men I have spoken to have not been particularly pulled in by the manosphere. However they do recognise the feeling of being overlooked and not measuring up to idealised “standards” about masculinity.

    The “manosphere” preys on this, tapping into boys’ and young men’s fears around masculinity and their (perceived) social status. Narrow portrayals of what success looks like put immense pressure on young people to live up to unattainable standards.

    As I have written before, manosphere content often relies on messages around hyper-individualism that ignore the broader effects of class, the economy and political views.

    Manosphere messaging that “most men are invisible” and that the system is now “rigged against men” fits neatly with young boys’ and men’s anxieties about not having the same place or opportunities in society that previous generations of men might have had.

    Without honest discussion about working-class communities and the effects of deindustrialisation on identity, this messaging may become alluring in postindustrial towns.

    Sophie Lively receives funding from the Economic and Social Research Council as part of the Northern Ireland and North East Doctoral Training Partnership.

    ref. Class and masculinity are connected – when industry changes, so does what it means to ‘be a man’ – https://theconversation.com/class-and-masculinity-are-connected-when-industry-changes-so-does-what-it-means-to-be-a-man-258857

    MIL OSI Analysis

  • MIL-OSI Submissions: Haiti on the brink: Gangs fill power vacuum as current solutions fail a nation in crisis

    Source: The Conversation – Canada – By Greg Beckett, Associate Professor of Anthropology, Western University

    Haiti is facing a multifaceted crisis unlike any in the country’s modern history.

    Haiti recently marked the one-year anniversary of its Presidential Transitional Council (CPT) — an internationally backed effort to restore governance in the country after Prime Minister Ariel Henry was ousted by gangs.

    But rather than charting a path to stability, the CPT remains mired in dysfunction as Haiti’s crisis deepens with no end in sight.

    Criminal gangs have taken control of most of the capital city of Port-au-Prince and significant parts of the country. Since 2021, gangs have killed more than 15,000 people and forcibly displaced over a million people.

    Beyond the security situation, there is a dire humanitarian emergency as more than half the country faces severe food insecurity.

    The United Nations says the country may be reaching a point of no return and risks falling into “total chaos.”

    Haitian friends tell me their whole country feels as blocked as the barricaded streets and choke points used by the gangs to control the capital.

    A security crisis paralyzing everything

    The impasse is undoubtedly shaped by entrenched gang violence. Armed groups have been used by political players for political ends in Haiti for decades.

    But now, new, well-organized armed gangs have emerged as political entities in their own right.

    For example, the G9 Alliance — the most notorious of the gangs, actually a federation of gangs — is led by former police officer Jimmy “Barbecue” Chérizier.

    Chérizier presents himself on social media as a revolutionary figure fighting the elites, but in the streets of Port-au-Prince most see him as a violent criminal.

    Last year, the G9 merged with rivals to form a coalition called Viv Ansamn (Live Together). Led by Chérizier and others, the group forced Prime Minister Ariel Henry from power. Henry had become prime minister after the assassination of Haiti’s last elected head of state, President Jovenel Moïse, in July 2021, despite being implicated in the killing himself.

    Both Henry and Moïse were accused of paying gangs to maintain control.

    Viv Ansamn’s takeover of the capital confirms gangs have become an autonomous political force. They have since expanded their power through their control over fuel supplies, critical infrastructure and key choke points.

    It’s telling that the gangs have become so powerful despite the presence of a UN-approved, Kenya-led Multinational Security Support (MSS) mission. The mission has been in Haiti since shortly after Henry was forced out of power.

    But with limited scope and funding from donor countries, including the United States, Canada and Ecuador, the mission has failed to achieve any major successes. Indeed, by the UN’s own estimates, gang violence continues to have a “devastating impact” on the population, despite the presence of the mission.

    Last month, the U.S. government designated Viv Ansamn and Gran Grif, Haiti’s two most powerful armed gangs, as terrorist organizations. Canada and others have also imposed sanctions on politicians and gang leaders, and perhaps this could lead to more sanctions against those who most directly benefit from the crisis. But for residents of Port-au-Prince, little has changed on the ground, where many feel the gangs are holding the country hostage.

    Democratic vacuum with no clear path forward

    A common saying in Haiti goes like this: peyi’m pa gen leta, my country has no state. Once a criticism of a particular government, it now feels literal. Haiti has no elected national officials.

    The CPT was established by the Organization of American States after Henry’s ousting, but has done little to restore democracy. Elections are impossible under the current security conditions.

    Instead, the CPT has become another obstacle to resolution. Mired in internal conflict, some members have been accused of bribery. With no framework for political compromise, the council reflects a system where some key players actually benefit from the political impasse.

    Governing structures that can’t govern

    Haiti is now in uncharted territory. The CPT operates in a legal vacuum, making decisions without a clear mandate or authority.

    Still, the council is moving forward with a controversial plan to rewrite the Haitian constitution. The proposed changes would fundamentally alter Haiti’s government structure, including abolishing the senate and the post of prime minister, allowing presidents to hold consecutive terms, changing election procedures and allowing dual citizens and Haitians living abroad to run for office.

    This constitutional reform highlights the paradox at the heart of Haiti’s crisis: an institution with questionable legitimacy is attempting to redesign the very framework that would determine its own authority.

    These aren’t just procedural problems: they represent fundamental questions about who has the authority to govern and how decisions get made in a country where democratic institutions have always been fragile.

    International responses miss the mark

    International groups, including the UN, the Organization of American States and the Core Group that includes the United States, Canada and France, have overseen Haiti’s politics for decades. But their influence has often backfired. Many in Haiti see the international community as directly responsible for the current crisis.

    Whatever internal problems have given rise to the current crisis, the role played by the international community in Haiti has undoubtedly contributed to the impasse.

    The MSS mission is a stopgap at best and a liability at worst. It is insufficient for the scale of the crisis.

    Some observers have called for a full UN peacekeeping mission, but there is little support for it and such a mission would likely face resistance within Haiti given the country’s fraught history with international interventions.

    Can the international community undo the damage it has already done? And can Haiti make it through the impasse without the international community?

    Beyond the impasse: What needs to change

    There are no easy solutions. Addressing gang violence without legitimate governing institutions won’t create lasting stability. Yet the path to a legitimate government remains unclear as organizing elections without basic security is unrealistic.

    The international community must stop treating Haiti as a series of separate crises requiring separate responses. The current piecemeal approach treats symptoms while ignoring the underlying causes that block political resolutions.

    For Haitians, the stakes could not be higher. The question isn’t whether change is needed, but whether the international community and Haitian leaders can move beyond the impasse before the situation deteriorates even further.

    Greg Beckett receives funding from the Social Sciences and Humanities Research Council of Canada.

    ref. Haiti on the brink: Gangs fill power vacuum as current solutions fail a nation in crisis – https://theconversation.com/haiti-on-the-brink-gangs-fill-power-vacuum-as-current-solutions-fail-a-nation-in-crisis-257948

    MIL OSI

  • MIL-OSI Submissions: How pterosaurs learned to fly: scientists have been looking in the wrong place to solve this mystery

    Source: The Conversation – Global Perspectives – By Davide Foffa, Research Fellow in Palaeobiology, University of Birmingham

    Ever since the first fragments of pterosaur bone surfaced nearly 250 years ago, palaeontologists have puzzled over one question: how did these close cousins of land-bound dinosaurs take to the air and evolve powered flight? The first flying vertebrates seemed to appear on the geological stage fully formed, leaving almost no trace of their first tentative steps into the air.

    Taken at face value, the fossil record implies that pterosaurs suddenly originated in the later part of the Triassic period (around 215 million years ago), close to the equator on the northern super-continent Pangaea. They then spread quickly between the Triassic and the Jurassic periods, about 10 million years later, in the wake of a mass extinction that was most likely caused by massive volcanic activity.

    Most of the handful of Triassic specimens come from narrow seams of dark shale in Italy and Austria, with other fragments discovered in Greenland, Argentina and the southwestern US. These skeletons appear fully adapted for flight, with a hyper-elongated fourth finger supporting membrane-wings. Yet older rocks show no trace of intermediate gliders or other transitional forms that you might expect as evidence of pterosaurs’ evolution over time.

    There are two classic competing explanations for this. The literal reading says pterosaurs evolved elsewhere and did not reach those regions where most have been discovered until very late in the Triassic period, by which time they were already adept flyers. The sceptical reading notes that pterosaurs’ wafer-thin, hollow bones could easily vanish from the fossil record, dissolve, get crushed or simply be overlooked, creating this false gap.

    This Eudimorphodon ranzii fossil, discovered near Bergamo in 1973, is one of many pterosaur discoveries from southern Europe.
    Wikimedia, CC BY-SA

    For decades, the debate stalled as a result of too few fossils or too many missing rocks. This impasse began to change in 2020, when scientists identified the closest relatives of pterosaurs in a group of smallish upright reptiles called lagerpetids.

    From comparing many anatomical traits across different species, the researchers established that pterosaurs and lagerpetids shared many similarities including their skulls, skeletons and inner ears. While this discovery did not bring any “missing link” to the table, it showed what the ancestor of pterosaurs would have looked like: a rat-to-dog-sized creature that lived on land and in trees.

    This brought new evidence about when pterosaurs may have originated. Pterosaurs and lagerpetids like Scleromochlus, a small land-dwelling reptile, diverged at some point after the end-Permian mass extinction. That extinction occurred some 250 million years ago, 35 million years before the first pterosaur appearance in the fossil record.

    Scleromochlus is one of the lagerpetids, the closest known relatives to the pterosaurs.
    Gabriel Ugueto

    Pterosaurs and their closest kin did not share the same habitats, however. Our new study, featuring new fossil maps, shows that soon after lagerpetids appeared (in southern Pangaea), they spread across wide areas, including harsh deserts, that many other groups were unable to get past. Lagerpetids lived both in these deserts and in humid floodplains.

    They tolerated hotter, drier settings better than any early pterosaur, implying that they had evolved to cope with extreme temperatures. Pterosaurs, by contrast, were more restricted. Their earliest fossils cluster in the river and lake beds of the Chinle and Dockum basins (southwest US) and in moist coastal belts fringing the northern arm of the Tethys Sea, in an area that today forms the Alps.

    Scientists have inferred from analysing a combination of fossil distributions, rock features and climate simulations that pterosaurs lived in areas that were warm but not scorching. The rainfall would have been comparable to today’s tropical forests rather than inland deserts.

    This suggests that the earliest pterosaurs may have lived in tree canopies, using foliage both for take-off and to protect themselves from predators and heat. As a result of this confined habitat, the distances that they flew may have been quite limited.

    Changing climates

    We were then able to add a fresh dimension to the story using a method called ecological niche modelling. This is routinely used in modern conservation to project where endangered animals and plants might live as the climate gets hotter. By applying this approach to later Triassic temperatures, rainfall and coastlines, we asked where early pterosaurs could have lived, regardless of whether they’ve shown up there in the fossil record.

    Many celebrated fossil sites in Europe emerge as poor pterosaur habitat until very late in the Triassic period: they were simply too hot, too dry or otherwise inhospitable before the Carnian age, around 235 million years ago. The fact that no specimens more than about 215 million years old have been discovered there may be because the climate conditions were still unsuitable, or simply because we don’t have the right type of rocks of that age preserved.

    In contrast, parts of the south-western US, Morocco, India, Brazil, Tanzania and southern China seem to have offered welcoming environments several million years earlier than the age of our oldest discoveries. This rewrites the search map. If pterosaurs could have thrived in those regions much more than 215 million years ago, but we have not found them there, the problem may again lie not with biology but with geology: the right rocks have not been explored, or they preserve fragile fossils only under exceptional conditions.

    Our study flags a dozen geological formations, from rivers with fine sediment deposits to lake beds, as potential prime targets for the next breakthrough discovery. They include the Timezgadiouine beds of Morocco, the Guanling Formation of south-west China and, in South America, several layers of rock from the Carnian age, such as the Santa Maria Formation, Chañares Formation and Ischigualasto Formation.

    Pterosaurs were initially confined to tropical treetops near the equator. When global climates shifted and forested corridors opened, pterosaurs’ wings catapulted them into every corner of the planet and ultimately carried them through one of Earth’s greatest extinctions. What began as a tale of missing fossils has become a textbook example of how climate, ecology and evolutionary science have come together to illuminate a fragmentary history that has intrigued palaeontologists for over two centuries.

    Davide Foffa is funded by Marie Skłodowska-Curie Actions: Individual (Global) Fellowship (H2020-MSCA-IF-2020; No.101022550), and by the Royal Commission for the Exhibition of 1851–Science Fellowship

    Alfio Alessandro Chiarenza receives funding from The Royal Society (Newton International Fellowship NIFR1231802)

    Emma Dunne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How pterosaurs learned to fly: scientists have been looking in the wrong place to solve this mystery – https://theconversation.com/how-pterosaurs-learned-to-fly-scientists-have-been-looking-in-the-wrong-place-to-solve-this-mystery-259063

    MIL OSI

  • MIL-OSI Submissions: ‘Making decisions closer to the wharf’ can ensure the sustainability of Canada’s fisheries and oceans

    Source: The Conversation – Canada – By Matthew Robertson, Research Scientist, Fisheries and Marine Institute, Memorial University of Newfoundland

    The harbour in Bonavista, Newfoundland. Major reforms could fundamentally reshape fisheries science and management in Canada (Sally LeDrew/Wikimedia commons), CC BY-SA

    During the federal election campaign, Canadian Prime Minister Mark Carney announced that if elected, he would look into restructuring Fisheries and Oceans Canada (DFO). Carney stated that he understood the importance of DFO and of “making decisions closer to the wharf.”

    Carney’s statement was made in response to protesting fish harvesters in Newfoundland and Labrador who decried recent DFO decision-making for multiple fisheries, including Northern cod and snow crab.

    Although addressing industry concerns is important, any change to DFO decision-making must serve the broader public interest, which includes commitments to reconciliation and conserving biodiversity.

    Major reforms could fundamentally reshape fisheries science and management in Canada, yet most Canadians are unaware of how DFO’s science-management process works, or why change might be needed.

    The DFO’s dual mandate

    DFO has long been criticized for its dual mandate, which involves both supporting economic growth and conserving the environment.

    For organizations like DFO to be trusted by the public, they need to produce information and policies that are credible, relevant and legitimate.

    However, the two halves of DFO’s dual mandate have been viewed as antithetical, and at the least they create a perceived conflict of interest. The issue at stake is how science advice from DFO can be considered independent if it is also supposed to serve commercial interests.

    One solution to this problem would be to shift control over the economic viability of fisheries to provinces. This is not a radical idea by any means, as most of the economic value of the fishery arises after fish are brought to harbour.

    Fishing boats in the town of Clarke’s Harbour, located on Cape Sable Island, Nova Scotia in July 2011.
    (Dennis G. Jarvis/Wikimedia commons), CC BY-SA

    For example, licences to process groundfish like cod, haddock and halibut — which Nova Scotia has just announced will be opened for new entrants following decades of a moratorium — as well as policies governing the purchase of seafood already fall to provinces.

    In 2024, all 13 ministers from the Canadian Council of Fisheries and Aquaculture Ministers indicated a desire for “joint management” between provinces and DFO.

    This was driven by a concern that the department has not focused enough on provincial and territorial fisheries issues. This shouldn’t be seen as a criticism of DFO, but rather an opportunity to embrace differentiated responsibility.

    DFO could maintain regulatory control for fisheries, like enforcing the Fisheries Act, defining licence conditions and performing long-term monitoring and assessments. As included in the modernized Fisheries Act, it could still consider the social and economic objectives in decision-making.

    Regional decision-making

    DFO is structured into regions with their own science and management branches, but many decisions end up being made by staff at DFO headquarters in Ottawa. In addition, the federal fisheries minister retains ministerial discretion for almost every decision, something that has been criticized as being inequitable.

    During an interview with researchers looking into fisheries management policy, a regional manager stated that they no longer make decisions:

    “Because of…risk aversion, much more of the decision-making has now been bumped up to higher levels. So I like to facetiously state that I am no longer a manager, I am a recommender.”

    Centralized decision-making can limit communication between regional scientists and managers and federal government policymakers.

    This communication gap can make it difficult for managers to use the latest science and adjust policies quickly, and it can also lead to recommended policies that are challenging to implement at the local level.

    Handing management decision-making power to regional fisheries managers could therefore benefit science and policy, and contribute to decisions that are deemed more equitable by those impacted.

    A map representing DFO’s regional structure.
    (Fisheries and Oceans Canada)

    Other countries use a regional management approach. In the United States, marine fisheries are managed by eight regional fishery management councils that use scientific advice from the National Marine Fisheries Service. Although not without their flaws, the successful rebuilding of overfished stocks in the U.S. has been attributed, in part, to the regional council system.

    Governance systems that have multiple but connected centres of decision-making are generally expected to be more participatory, flexible to respond to changes and have improved spatial fit between knowledge and policy actions.

    This type of approach could shift the focus of Ottawa-based managers and the fisheries minister to ensuring national consistency.

    Local stakeholder involvement

    Canada’s current methods for inclusion of social and economic considerations are limited and have produced scientific advice that is not fully separable from rights holder and stakeholder input.

    Most of DFO’s scientific peer-review process is focused on ecological science conducted by DFO scientists. The peer-review process often also involves rights holders and stakeholders. While Indigenous rights holders and community stakeholders may not be trained in the presented analyses, they often contribute to these meetings by describing their knowledge and experiences.

    However, because the meetings are focused on DFO ecological science, they are not designed to formally consider stakeholder and rights holder knowledge. This can lead to two key issues. First, it may blur the line between peer-reviewed science and rights holder and stakeholder input, reducing the credibility of the scientific advice.

    Second, the valuable information provided by rights holders and stakeholders may be overlooked since it is not shared in a setting designed to incorporate it.

    The lack of review of alternative Indigenous knowledge sources and social and economic science during peer-review processes inherently limits the advice that can be provided. It suggests that the government is not benefiting from the opportunity to incorporate diverse knowledge bases.

    These problems could be addressed by developing procedures through which stakeholders and rights holders contribute their local and traditional knowledge to better inform ecological and socio-economic considerations.

    By increasing the number of peer-review platforms, rights holder and stakeholder input could be reviewed similarly to ecological science. This change would likely increase the credibility, legitimacy and salience of information used to inform fishery managers.

    Regardless of how rights holders’ and stakeholders’ perspectives are included, the process should be clearly structured and documented.

    By reconsidering DFO’s mandate, decentralizing management decision-making and improving the scientific consideration of varied forms of knowledge, DFO could make decisions that are closer to the wharf.

    Matthew Robertson receives funding from the Canadian Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant and the Fisheries & Oceans Canada (DFO) Atlantic Fisheries Fund (AFF).

    Megan Bailey receives research funding from multiple sources, including NSERC, SSHRC, CIRNAC, Genome Atlantic, Nippon Foundation Ocean Nexus Centre, Ocean Frontier Institute (through a Canada First Research Excellence Fund), and the Canada Research Chairs program.

    Tyler Eddy receives funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, Fisheries & Oceans Canada (DFO) Atlantic Fisheries Fund (AFF) and Sustainable Fisheries Science Fund (SFSF), the Canada First Research Excellence Fund (CFREF), and the Crown Indigenous Relations and Northern Affairs Canada (CIRNAC) Indigenous Community-Based Climate Monitoring (ICBCM) Program.

    ref. ‘Making decisions closer to the wharf’ can ensure the sustainability of Canada’s fisheries and oceans – https://theconversation.com/making-decisions-closer-to-the-wharf-can-ensure-the-sustainability-of-canadas-fisheries-and-oceans-254874

    MIL OSI

  • MIL-OSI Submissions: Nineteen Eighty-Four might have been inspired by George Orwell’s fear of drowning

    Source: The Conversation – Global Perspectives – By Nathan Waddell, Associate Professor in Twentieth-Century Literature, University of Birmingham

    George Orwell had a traumatic relationship with the sea. In August 1947, while he was writing Nineteen Eighty-Four (1949) on the island of Jura in the Scottish Hebrides, he went on a fishing trip with his young son, nephew and niece.

    Having misread the tidal schedules, on the way back Orwell mistakenly piloted the boat into rough swells. He was pulled into the fringe of the Corryvreckan whirlpool off the coasts of Jura and Scarba. The boat capsized and Orwell and his relatives were thrown overboard.

    It was a close call – a fact recorded with characteristic detachment by Orwell in his diary that same evening: “On return journey today ran into the whirlpool & were all nearly drowned.” Though he seems to have taken the experience in his stride, this may have been a trauma response: detachment ensures the ability to persist after a near-death experience.

    We don’t know for sure if Nineteen Eighty-Four was influenced by the Corryvreckan incident. But it’s clear that the novel was written by a man fixated on water’s terrifying power.


    This article is part of Rethinking the Classics. The stories in this series offer insightful new ways to think about and interpret classic books and artworks. This is the canon – with a twist.


    Nineteen Eighty-Four isn’t typically associated with fear of death by water. Yet it’s filled with references to sinking ships, drowning people and the dread of oceanic engulfment. Fear of drowning is a torment that social dissidents might face in Room 101, the torture chamber to which all revolutionaries are sent in the appropriately named totalitarian state of Oceania.

    An early sequence in the novel describes a helicopter attack on a ship full of refugees, who are bombed as they fall into the sea. The novel’s protagonist, Winston Smith, has a recurring nightmare in which he dreams of his long-lost mother and sister trapped “in the saloon of a sinking ship, looking up at him through the darkening water”.

    George Orwell in 1943.
    National Union of Journalists

    The sight of them “drowning deeper every minute” takes Winston back to a culminating moment in his childhood when he stole chocolate from his mother’s hand, possibly condemning his sister to starvation. These watery graves imply that Winston is drowning in guilt.

    The “wateriness” of Nineteen Eighty-Four may have another interesting historical source. In his essay My Country Right or Left (1940), Orwell recalls that when he had just become a teenager he read about the “atrocity stories” of the first world war.

    Orwell states in this same essay that “nothing in the whole war moved [him] so deeply as the loss of the Titanic had done a few years earlier”, in 1912. What upset Orwell most about the Titanic disaster was that in its final moments it “suddenly up-ended and sank bow foremost, so that the people clinging to the stern were lifted no less than 300 feet into the air before they plunged into the abyss”.

    Sinking ships and dying civilisations

    Orwell never forgot this image. Something similar to it appears in his novel Keep the Aspidistra Flying (1936) where the idea of a sinking passenger liner evokes the collapse of modern civilisation, just as the Titanic disaster evoked the end of Edwardian industrial confidence two decades beforehand.

    The Titanic disaster had a profound impact on Orwell.
    Wiki Commons

    References to sinking ships and drowning people appear at key moments in many other works by Orwell, too. But did the full impact of the Titanic surface in Nineteen Eighty-Four?

    Sinking ships were part of Orwell’s descriptive toolkit. In Nineteen Eighty-Four, a novel driven by memories of unsympathetic water, they convey nightmares. Filled with references to water and liquidity, it’s one of the most aqueous novels Orwell produced, relying for many of its most shocking episodes on imagery of desperate people drowning or facing imminent death on sinking sea craft.

    The thought of trapped passengers descending into the depths survives in Winston’s traumatic memories of his mother and sister, who, in the logic of his dreams, are alive inside a sinking ship’s saloon.


    Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


    There’s no way to prove that Nineteen Eighty-Four is “about” the Titanic disaster, but in the novel, and indeed in Orwell’s wider body of work, there are too many tantalising hints to let the matter rest.

    Thinking about fear of death by water takes us into Orwell’s terrors just as it takes us into Winston’s, allowing readers to see the frightened boy inside the adult man and, indeed, inside the author who dreamed up one of the 20th century’s most famous nightmares.

    Beyond the canon

    As part of the Rethinking the Classics series, we’re asking our experts to recommend a book or artwork that tackles similar themes to the canonical work in question, but isn’t (yet) considered a classic itself. Here is Nathan Waddell’s suggestion:

    As soon as the news broke of the Titanic’s sinking, literary works of all shapes and sizes started to appear in tribute to the disaster and its victims. As the century went on, and as research into the tragedy developed (particularly after the ship’s wreckage was discovered in 1985), more nuanced literary responses to the sinking became possible.

    One such response is Beryl Bainbridge’s Whitbread-prize-winning novel Every Man for Himself (1996). It reimagines the disaster from the first-person perspective of an imaginary character, Morgan, the fictional nephew of the historically real financier J. P. Morgan (who was due to sail on the Titanic but changed his plans before departure).

    This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

    Nathan Waddell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Nineteen Eighty-Four might have been inspired by George Orwell’s fear of drowning – https://theconversation.com/nineteen-eighty-four-might-have-been-inspired-by-george-orwells-fear-of-drowning-251289

    MIL OSI

  • MIL-OSI Submissions: Air India crash in Ahmedabad sends reverberations to Canadian families of Air India Flight 182

    Source: The Conversation – Canada – By Chandrima Chakraborty, Professor, English and Cultural Studies; Director, Centre for Global Peace, Justice and Health, McMaster University

    The June 12 Air India crash in Ahmedabad, Gujarat, India, with 230 passengers and 12 crew members aboard, is sending deep reverberations through a group of Canadians who know all too well the shock, grief and horror of losing loved ones in hauntingly similar circumstances.

    They are the families of those killed in the bombing of Air India Flight 182 en route from Canada to India 40 years ago this month.

    I work closely with these families as a researcher and advocate. I began interviewing them in 2014 and have witnessed firsthand their pain, their advocacy and the emotional turmoil of living in the shadow of a historic event.

    As reports of the Ahmedabad crash came in, the WhatsApp group of the Air India Flight 182 families was immediately flooded with expressions of shock, concern, sympathy and memories triggered by the latest incident.

    On June 23, 1985, Flight 182 was brought down by terrorist bombs created and planted on Canadian soil. The devastating mid-air explosion occurred over the Atlantic Ocean near Ireland. It killed all 329 passengers and crew, including 268 Canadians. The crew and most of the passengers were of Indian origin.

    Investigations are still underway into the causes of the crash of Air India Flight 171, which went down shortly after take-off en route to London’s Gatwick airport. At least 279 people died in the crash, which also affected people on the ground.

    Acknowledging losses as significant

    A recent public conference at McMaster University commemorated the 40th anniversary of Flight 182, bringing together Indian and Canadian families, researchers, creative artists and community members.

    Book cover for ‘Remembering Air India: The Art of Public Mourning,’ edited by Chandrima Chakraborty, Amber Dean and Angela Failler.
    University of Alberta Press

    The conference dealt with critical themes, including the challenge of Flight 182 families recovering from their losses within a climate of broad indifference among their fellow Canadians.

    Regardless of what may have caused the more recent crash in western India, these Canadian families know the shock and loss that a new set of victims’ families are facing, and how important it is to support them.

    Hopefully, the home countries of last week’s crash victims — most of them Indian and British citizens, with at least one Canadian reported to have been aboard — will regard their deaths as significant losses. If so, this would be unlike what the 1985 victims’ families experienced in Canada.

    A little-mourned Canadian tragedy

    In Canada, June 23 is a national day of remembrance for the victims of the 1985 bombing, which has been called a Canadian tragedy in a public inquiry report.

    Yet according to a 2023 Angus Reid poll, “nine out of 10 Canadians say they have little or no knowledge of the worst single instance of the mass killing of their fellow citizens.” That essentially means the bombing has yet to penetrate the consciousness of everyday Canadians or evoke shared grief or public mourning.

    The families continue to carry the torch of remembrance as they organize annual memorial vigils every June 23. Few others attend. Many victims’ relatives have died since 1985. Some spouses, siblings or parents are now in their 80s, wondering why the bombing is still not widely discussed in schools or in public discourse.

    The grinding and unsatisfying criminal proceedings, the belated public inquiry and the welcome but lukewarm apology by the Canadian government 25 years after the fact have all contributed to the failure of this tragedy to take firmer hold in the Canadian consciousness. In fact, many continue to deny the Canadian significance of Flight 182 and view the bombing as a foreign event.

    A torch of remembrance

    At last month’s conference, my research team launched the Air India Flight 182 archive to counter this collective amnesia and lack of acknowledgement.

    Canadian archival consultant and writer Laura Millar has said that archives act as “touchstones to memory” and can aid the process of transforming individual memories into collective remembering. Adopting NYU professor Carol Gilligan’s ethics of care for the archive, we have been consulting with families to find ways to share their grief with the public.

    The Flight 182 memory archive — both physical and digital — serves as a repository for artefacts, first-person narratives, memorabilia and creative works related to the tragedy produced by family members. Family donations of artefacts such as dance videos and pilot wings redirect notions of archives away from a mere documentary deposit. Hopefully, they can move the public to learn about and care about the impacts of the Flight 182 bombing.

    The archive is a publicly accessible record of the tragedy, where scholars and everyday citizens can learn about the victims and their families.

    Since the past involves both the present and the future, the archive will enable a meaningful recognition of marginalized voices and histories. It can offer a form of memory justice for those who would otherwise be forgotten by sustaining memory from generation to generation.

    While the archive articulates the demand from families that the bombing of Flight 182 and its aftermath be incorporated into Canadian national consciousness, establishing this archive alone will not be enough to elevate the memory of Flight 182 to the place it deserves.

    But at least it establishes a rich, permanent academic and personal legacy for the community of mourners, and for the Canadian and global public to find it, use it and learn from its many lessons.

    Families of those on board the 1985 flight are preparing to commemorate the 40th anniversary of the terror bombing of Flight 182 that has devastated their lives.

    As we learn more about the tragic Air India Flight 171 crash on June 12, the lessons of Flight 182 will hopefully prevent a new set of families from feeling the pain of indifference on top of the unimaginable agony of loss they’re already experiencing.

    Chandrima Chakraborty receives funding from the Social Sciences and Humanities Research Council of Canada.

    ref. Air India crash in Ahmedabad sends reverberations to Canadian families of Air India Flight 182 – https://theconversation.com/air-india-crash-in-ahmedabad-sends-reverberations-to-canadian-families-of-air-india-flight-182-258991

    MIL OSI

  • MIL-OSI Submissions: Is Sabrina Carpenter’s Man’s Best Friend album cover satire or self-degradation? A psychology expert explores our reactions

    Source: The Conversation – Global Perspectives – By Katrina Muller-Townsend, Lecturer in Psychology, Edith Cowan University

    Island Records

    Sabrina Carpenter’s Man’s Best Friend album cover has fans divided.

    Carpenter poses on all fours, her glossy blond hair grasped by a male figure cropped from the frame. Her wide-eyed expression intensifies an ambiguous performance of subservience, tapping into a visual language tied to female objectification, from classic pin-up imagery to contemporary pop culture.

    The emotionally loaded image plays on her hyper-feminine, tongue-in-cheek pop star persona, forcing us to question where irony ends and objectification begins.

    Is it satire, or self-degradation?

    Up for debate

    At first glance, the cover seems like just another stylised, provocative pop image. It delivers what we’ve come to expect: a bold, ironic twist on the exaggerated Juno-style pose she reinvents on stage.

    To some fans, it’s clever satire: a pop star reclaiming and amplifying her image to mock industry norms. Satire uses exaggeration, irony, or humour to critique power structures – and Carpenter’s pose walks that tightrope.

    To others it crosses a line, reinforcing regressive attitudes about women’s sexuality and drawing criticism from domestic violence advocates.

    The debate reflects our unresolved discomfort about gender, power and control. There is a tension between Carpenter’s ironic persona and the submissive pose, creating uncertainty for the viewer.

    We can use psychology to better understand this dichotomy.

    The schema violation

    This mismatch between expectation and perception is a schema violation.

    A schema is a mental shortcut: a template built from experience and unspoken rules that helps us make sense of the world and predict what to expect. When something breaks that pattern, it’s called a schema violation.

    Carpenter’s brand is cheeky, self-aware irony – so when she adopts a pose steeped in submission and hyper-femininity as in this album image, it feels off.

    That can trigger cognitive dissonance: the mental tension we feel when two ideas (here, empowerment and obedience) don’t align.

    To resolve the conflict, some fans reinterpret the image as feminist sarcasm. Others reject it, fearing it panders to outdated, dangerous norms.

    Both reactions reflect our emotional and ideological investments in who Carpenter is or should be.

    Exploring confirmation bias

    Part of this conflicted reaction is driven by confirmation bias: our tendency to filter information to support what we already believe.

    Fans who see Carpenter as witty and empowered interpret the image as intentionally ironic. Others – more sceptical of the industry’s history of exploiting female sexuality – view it as a throwback to damaging norms.

    Either way, our interpretations often reflect more about ourselves than about Carpenter’s intent.

    When her image contradicts both her public persona and our social values, it creates a gap between what we think is right and what we want to be right. So, we try to explain it away, by either defending the image or criticising it.

    Satire and scandal

    Carpenter’s cover follows a long tradition of female artists whose work straddles satire and scandal, complicating public reception.

    Madonna’s Like a Prayer drew outrage for mixing religion with sexual imagery. Yet it positioned her as a provocateur – a woman resisting the lack of agency that so often defines sexualised media.

    Miley Cyrus’ Bangerz era shocked fans with a bold shift from Hannah Montana innocence to hypersexualised rebellion, challenging the narrow roles women in pop culture are confined to.

    Doja Cat’s shift from glam pop princess to glitch villainess unsettled audiences. Was it satire, rebellion, or just chaos?

    These women, like Carpenter, force us to confront our own discomfort with women who won’t stay in one lane.

    Performer and provocateur

    Audience reaction is also shaped by emotional investment in Carpenter’s persona. Through carefully curated social media, interviews and lyrics, fans build intimate narratives forming parasocial relationships – one-sided emotional bonds with celebrities.

    When an image contradicts that imagined persona, it can feel jarring, even like betrayal.

    Audiences often expect idols to be empowering but not polarising, sexy but safe, to challenge norms – but only in ways that affirm our own values.

    Carpenter’s image breaks that implicit contract, which creates discomfort for some viewers.

    Carpenter’s cover raises uncomfortable but necessary questions about how much freedom female artists have to be both critical and complicit. Can they play with society and play along, to be both performer and provocateur?

    This highlights the double bind many women face in media and popular culture. Female artists are expected to both subvert and satisfy: to entertain without offending, to empower without alienating. The burden to be palatable and provocative is one male artists rarely face.

    It’s what we make of it

    Is Carpenter undermining herself or subverting the system? Perhaps both. Or perhaps the image isn’t the message: our reaction is.

    The image forces us to confront not only our perception of Sabrina Carpenter but also our cultural discomfort with women who defy neat categorisation. Satire demands interpretation, especially when it comes from women addressing sex or power.

    More than provocation, Carpenter’s cover mirrors our cultural struggle to accept women who defy simple labels of satire or submission. The image can reflect broader social ideals and tensions projected onto public figures.

    What we see says more about our assumptions than her intent. Understanding those reactions doesn’t kill the fun – it deepens it.

    Katrina Muller-Townsend does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Is Sabrina Carpenter’s Man’s Best Friend album cover satire or self-degradation? A psychology expert explores our reactions – https://theconversation.com/is-sabrina-carpenters-mans-best-friend-album-cover-satire-or-self-degradation-a-psychology-expert-explores-our-reactions-259043

    MIL OSI

  • MIL-OSI Submissions: Plastics threaten ecosystems and human health, but evidence-based solutions are under political fire

    Source: The Conversation – Canada – By Tony Robert Walker, Professor, School for Resource and Environmental Studies, Dalhousie University

    Negotiations toward a global, legally binding plastics treaty are set to resume this summer, with the United Nations Environment Programme announcing that the Intergovernmental Negotiating Committee on plastic pollution will reconvene in August.

    The committee was established to develop an international legally binding instrument — known as the plastics treaty — to end plastic pollution, one of the fastest-growing environmental threats.




    Read more:
    Here’s how the new global treaty on plastic pollution can help solve this crisis


    Globally, 40 per cent of plastics production goes into single-use plastic packaging, which is the single largest source of plastic waste and is a threat to wildlife and human health. Without meaningful action, global plastic waste is projected to nearly triple by 2060, reaching an estimated 1.2 billion tonnes.

    As the world prepares for another round of talks, Canada’s own plastic problem reveals what’s at stake, and what’s possible for the future.

    Canada’s plastic problem

    Canada is no exception to the global plastic crisis. Nearly half (47 per cent) of all plastic waste in Canada comes from the food and drink sector, and Canadians discard about 3,268 kilotonnes (3.3 million tonnes) of plastic waste in total each year. Canadians use 15 billion plastic bags annually and nearly 57 million straws daily, yet only nine per cent of plastics are recycled — a figure that is not expected to improve.

    Most of Canada’s plastics — except for plastic bottles made of PET (polyethylene terephthalate) — are uneconomical or difficult to recycle because of the complexity of mixed plastics used in our economy. As a result, 2.8 million tonnes of plastic waste — equivalent to the weight of 24 CN Towers — end up in landfills every year.

    This is not a trivial problem, as Ontario is projected to run out of landfill space by 2035. Plastic pollution poses growing risks to both urban and rural infrastructure.

    In addition to landfill overflow, around one per cent of Canada’s plastic waste leaks into the environment. In 2016, this was 29,000 tonnes of plastic pollution. Once in the environment, plastics disintegrate into tiny particles, called microplastics (small pieces of plastic less than five millimetres long).

    We drink those tiny microplastic particles in our tap water, and eat them in our fish dinners. Some are even making their way into farmland.

    Plastics are everywhere, including inside us

    More than 93 per cent of Canadians have expressed concerns over single-use plastics used in food packaging and have supported government bans. There is a good reason for concern over the mounting levels of plastics in the environment, in our food and in us.

    Growing evidence indicates that plastics can cause harmful health effects in humans and animals. Microplastics and smaller nanoplastics (less than one micron in length) have been found in humans, including infants and breast milk. They can cause metabolic disorders, interfere with our immune and reproductive systems and cause behavioural problems.

    These health problems may be caused by chemicals added to plastics, including single-use plastics, of which 4,200 chemicals have been identified as posing a hazard to human and ecosystem health.

    It is for these reasons that the Canadian government introduced a ban on single-use plastics in 2022 as part of a plan to reach zero plastic waste in Canada by 2030.

    The decision was based on extensive public and industry consultation, as well as decades of data on plastic pollution gathered from the Great Canadian Shoreline Cleanup. This data shows the most common plastic litter items found in the environment across Canada, known as the “dirty dozen” list.

    Six of these items were included in the federal ban. Three eastern Canadian provinces had already implemented single-use plastic bag bans before the federal government, with little to no public or industry opposition. Prince Edward Island was the first Canadian province to implement a province-wide plastic bag ban in July 2019, closely followed by Newfoundland and Labrador and Nova Scotia in October 2020.

    The politics of plastic

    Despite overwhelming scientific consensus, debates around plastic pollution are becoming increasingly politicized.

    In February in the United States, President Donald Trump signed an executive order directing the U.S. government to “stop purchasing paper straws and ensure they are no longer provided within federal buildings.”

    Trump told reporters at the White House: “I don’t think plastic is going to affect a shark very much, as they’re munching their way through the ocean.” Almost 2,000 peer-reviewed studies have reported, however, that more than 4,000 species have ingested plastic litter or become entangled in it.

    In Canada, plastic has also become a political flashpoint. During the recent federal election, Conservative Leader Pierre Poilievre said he would scrap the federal government’s ban on single-use plastics and bring back plastic straws and grocery bags. He argued the government’s ban was about “symbolism” rather than “science,” saying, “the Liberals’ plastics ban is not about the environment, it’s about cost and control.”

    His promise would have harmed Canadians by dismissing the overwhelming scientific evidence showing that plastics in our bodies are linked to health impacts. Legislation to ban single-use plastics can be highly effective, ranging from 33 to 96 per cent reductions in plastic waste and pollution in the environment, depending on the policy and jurisdiction.

    Canada’s single-use plastics ban is a great example of evidence-based policymaking. The latest data from the conservation group Ocean Wise shows there was a 32 per cent drop in plastic straws found on Canadian shorelines in 2024 compared to the previous year.

    Science-based policies are needed

    It is indisputable that growing plastic production is directly related to plastic pollution in the environment and in human beings. Increasing plastic pollution is a global threat to human and ecosystem health, regardless of borders and political affiliation.

    As negotiators gear up for another round of talks to finalize a Global Plastics Treaty to end plastic pollution, the need for policies that are supported by scientific evidence is more urgent than ever.

    Future generations deserve a healthy and sustainable planet. Getting there requires supporting action based on scientific evidence, not misinforming people with catchy phrases and political rhetoric.

    Tony Robert Walker receives funding from the Natural Sciences and Engineering Research Council of Canada, Canada Foundation for Innovation, and Research Nova Scotia. He is also a non-remunerated member of the Scientists’ Coalition for an Effective Plastics Treaty.

    Miriam L Diamond receives funding from Natural Sciences and Engineering Research Council, Ontario Ministry of Environment, Conservation and Parks, Future Earth, and Environment and Climate Change Canada. She is affiliated with the University of Toronto, serves as a paid expert for the Scientific and Technical Advisory Panel of the Global Environment Facility, and has non-remunerated positions with the International Panel on Chemical Pollution (Vice-Chair), is a member of the Scientist Coalition for an Effective Plastics Treaty, and sits on the board of the Canadian Environmental Law Association.

    ref. Plastics threaten ecosystems and human health, but evidence-based solutions are under political fire – https://theconversation.com/plastics-threaten-ecosystems-and-human-health-but-evidence-based-solutions-are-under-political-fire-256764

    MIL OSI

  • MIL-OSI Submissions: The 28 Days Later franchise redefined zombie films. But the undead have an old, rich and varied history

    Source: The Conversation – Global Perspectives – By Christopher White, Historian, The University of Queensland

    The history of the dead – or, more precisely, the history of the living’s fascination with the dead – is an intriguing one.

    As a researcher of the supernatural, I’m often pulled aside at conferences or at the school gate, and told in furtive whispers about people’s encounters with the dead.

    The dead haunt our imagination in a number of different forms, whether as “cold spots” or as the walking dead popularised in zombie franchises such as 28 Days Later.

    The franchise’s latest release, 28 Years Later, brings back the Hollywood zombie in all its glory – but these archetypal creatures have a much wider and varied history.

    Zombis, revenants and the returning dead

    A zombie is typically a reanimated corpse: a category of the returning dead. Scholars refer to them as “revenants”, and continue to argue over their exact characteristics.

    In the Haitian Vodou religion, the zombi is not the same as the Hollywood zombie. Instead, zombi are people who, as a religious punishment, are drugged, buried alive, then dug out and forced into slavery.

    The Hollywood zombie, however, draws more from medieval European stories about the returning dead than from Vodou.

    A perfect setting for a ‘zombie’ film

    In 28 Years Later, the latest entry in Danny Boyle’s blockbuster horror franchise, the monsters technically aren’t zombies because they aren’t dead. Instead, they are infected by a “rage virus”, accidentally released by a group of animal rights activists at the beginning of the first film.

    This third film focuses on events almost three decades after the first film. The British Isles are quarantined, and the young protagonist Spike (Alfie Williams) and his family live in a village on the island of Lindisfarne. This island, one of the most important sites in early medieval British Christianity, is connected to the mainland only by a tidal causeway, which keeps it isolated and protected.

    Aaron Taylor-Johnson and Alfie Williams star in the new film, out in Australian cinemas today.
    Sony Pictures

    The film leans heavily on how we imagine the medieval world, with scenes showing silhouetted fletchers at work making arrows, children training with bows, towering ossuaries and various memento mori. There’s also footage from earlier depictions of medieval warfare. And at one point, the characters seek sanctuary in the ruins of Fountains Abbey, in Yorkshire, which was built in 1132.

    The medieval locations and imagery of 28 Years Later evoke the long history of revenants, and the returned dead who once roved medieval England.

    Early accounts of the medieval dead

    In the medieval world, or at least the parts that wrote in Latin, the returning dead were usually called spiritus (“spirit”), but they weren’t limited to the non-corporeal like today’s ghosts are.

    Medieval Latin Christians from as early as the 3rd century saw the dead as part of a parallel society that mirrored the world of the living, where each group relied on the other to aid them through the afterlife.

    Depiction of the undead from a medieval manuscript.
    British Library, Yates Thompson MS 13

    While some medieval ghosts would warn the living about what awaited sinners in the afterlife, or lead their relatives to treasure, or prophesy the future, some also returned to terrorise the living.

    And like the “zombies” affected by the rage virus in 28 Years Later, these revenants could go into a frenzy in the presence of the living.

    Thietmar, the Prince-Bishop of Merseburg, Germany, wrote the Chronicon Thietmari (Thietmar’s Chronicle) between 1012 and 1018, and included a number of ghost stories that featured revenants.

    Although not all of them framed the dead as terrifying, they certainly didn’t paint them as friendly, either. In one story, a congregation of the dead at a church set the priest upon the altar, before burning him to ashes – intended to be read as a mirror of pagan sacrifice.

    These dead were physical beings, capable of seizing a man and sacrificing him in his own church.

    A threat to be dealt with

    The English monastic historian William of Newburgh (1136–98) wrote that revenants were so common in his day that recording them all would be exhausting. According to him, the returned dead were frequently seen in 12th-century England.

    So, instead of providing an exhaustive list, he offered some choice examples which, like most medieval ghost stories, had a good Christian moral attached to them.

    William’s revenants mostly killed the people of the towns where they had lived, returning to the grave between their escapades. But the medieval English had a method for dealing with these monsters: they dug them up, tore out the heart and then burned the body.

    Other revenants were dealt with less harshly, William explained. In one case, all it took was the Bishop of Lincoln writing a letter of absolution to stop a dead man returning to his widow’s bed.

    These medieval dead were also thought to spread disease – much like those infected with the rage virus – and were capable of physically killing someone.

    Depiction of the undead from a medieval manuscript.
    British Library, Arundel MS 83.

    The undead, further north

    In medieval Scandinavia and Iceland, the undead draugr were extremely strong, hideous to look at and stank of decomposition. Some were immune to human weapons and often killed animals near their tombs before building up to killing humans. Like their English counterparts, they also spread disease.

    But according to the Eyrbyggja saga, an anonymous 13th or 14th century text written in Iceland, all it took was a type of community court and the threat of legal action to drive off these returned dead.

    It’s a method the survivors in 28 Years Later didn’t try.

    The dead live on

    The first-hand zombie stories that were common during the medieval period started to dwindle in the 16th century with the Protestant Reformation, which focused more on individuals’ behaviours and salvation.

    Nonetheless, their influence can still be felt in Catholic ritual practices today, such as in prayers offered for the dead, and the lighting of votive candles.

    We still tell ghost stories, and we still worry about things that go bump in the night. And of course, we continue to explore the undead in all its forms on the big screen.

    Christopher White does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The 28 Days Later franchise redefined zombie films. But the undead have an old, rich and varied history – https://theconversation.com/the-28-days-later-franchise-redefined-zombie-films-but-the-undead-have-an-old-rich-and-varied-history-247900

    MIL OSI

  • MIL-OSI Submissions: Decolonizing history and social studies curricula has a long way to go in Canada

    Source: The Conversation – Canada – By Sara Karn, Postdoctoral Fellow, Department of History, McMaster University

    In June 2015, 10 years ago, the Truth and Reconciliation Commission of Canada (TRC) called for curriculum on Indigenous histories and contemporary contributions to Canada to foster intercultural understanding, empathy and respect. This was the focus of calls to action Nos. 62 to 65.

    As education scholars, we are part of a project supported by the Social Sciences and Humanities Research Council called Thinking Historically for Canada’s Future. This project involves researchers, educators and partner organizations from across Canada, including Indigenous and non-Indigenous team members.

    As part of this work, we examined Canadian history and social studies curricula in elementary, middle and secondary schools with the aim of understanding how they address — and may better address in future — the need for decolonization.

    We found that although steps have been made towards decolonizing history curricula in Canada, there is still a long way to go. These curricula must do far more to challenge dominant narratives, prompt students to critically reflect on their identities and value Indigenous world views.




    Read more:
    Looking for Indigenous history? ‘Shekon Neechie’ website recentres Indigenous perspectives


    Reimagining curriculum

    As white settler scholars and educators, we acknowledge our responsibility to unlearn colonial ways of being and learn how to further decolonization in Canada.

    In approaching this study, we began by listening to Indigenous scholars, such as Cree scholar Dwayne Donald. Donald and other scholars call for reimagining curriculum through unlearning colonialism and renewing relationships.




    Read more:
    Leaked Alberta school curriculum in urgent need of guidance from Indigenous wisdom teachings


    The late Arapaho education scholar Michael Marker suggested that in history education, renewing relations involves learning from Indigenous understandings of the past, situated within local meanings of time and place.

    History, social studies curricula

    Curricula across Canada have been updated in the last 10 years to include teaching about treaties, Indian Residential Schools and the cultures, perspectives and experiences of Indigenous Peoples over time.

    Thanks primarily to the work of Indigenous scholars and educators, including Donald, Marker, Mi’kmaw educator Marie Battiste, Anishinaabe scholar Nicole Bell and others, some public school educators are attentive to land-based learning and the importance of oral history.

    But these teachings are, for the most part, ad hoc and not supported by provincial curriculum mandates.

    Our study revealed that most provincial history curricula are still focused on colonial narratives that centre settler histories and emphasize “progress” over time. Curricula are largely inattentive to critical understandings of white settler power and to Indigenous ways of knowing and being.

    Notably, we do not include the three territories in this statement. Most of the territorial history curricula have been co-created with local Indigenous communities, and stand out with regard to decolonization.

    For example, in Nunavut’s Grade 5 curriculum, the importance of local knowledge tied to the land is highlighted throughout. There are learning expectations related to survival skills and ecological knowledge.

    Members of our broader research team are dedicated to analyzing curricula in Nunavut, the Northwest Territories and the Yukon. Their work may offer approaches to be adapted for other educational contexts.

    Dominant narratives

    In contrast, we found that provincial curricula often reinforce dominant historical narratives, especially surrounding colonialism. Some documents use the term “the history,” implying a singular history of Canada (for example, Manitoba’s Grade 6 curriculum).

    Historical content, examples and guiding questions are predominantly written from a Euro-western perspective, while minimizing racialized identities and community histories. In particular, curricula often ignore illustrations of Indigenous agency and experience.




    Read more:
    Moving beyond Black history month towards inclusive histories in Québec secondary schools


    Most curricula primarily situate Indigenous Peoples in the past, without substantial consideration for present-day implications of settler colonialism, as well as Indigenous agency and experiences today.

    For example, in British Columbia’s Grade 4 curriculum, there are lengthy discussions of the harms of colonization in the past. Yet, there is no mention of the ongoing impacts of settler colonialism or the need to engage in decolonization today.

    To disrupt these dominant narratives, we recommend that history curricula should critically discuss the ongoing impacts of settler colonialism, while centring stories of Indigenous resistance and survival over time.

    Identity and privilege

    There are also missed opportunities within history curricula when it comes to critical discussions around identity, including systemic marginalization or privilege.

    Who we are informs how we understand history, but curricula largely do not prompt student reflection in these ways, including around treaty relationships.

    In Saskatchewan’s Grade 5 curriculum, students are expected to explain what treaties are and “affirm that all Saskatchewan residents are Treaty people.”

    However, there is no mention of students considering how their own backgrounds, identities, values and experiences shape their understandings of and responsibilities for treaties. Yet these discussions are essential for engaging students in considering the legacies of colonialism and how they may act to redress those legacies.

    A key learning outcome could involve students becoming more aware of how their own personal and community histories inform their historical understandings and reconciliation commitments.

    Indigenous ways of knowing and being

    History curricula generally ignore Indigenous ways of knowing and being. Most curricula are inattentive to Indigenous oral traditions, conceptions of time, local contexts and relationships with other species and the environment.

    Instead, these documents reflect Euro-western, settler colonial worldviews and educational values. For example, history curricula overwhelmingly ignore local meanings of time and place, while failing to encourage opportunities for land-based and experiential learning.

    In Prince Edward Island’s Grade 12 curriculum, the documents expect that students will “demonstrate an understanding of the interactions among people, places and the environment.” While this may seem promising, environmental histories in this curriculum and others uphold capitalist world views by focusing on resource extraction and economic progress.

    To disrupt settler colonial relationships with the land and empower youth as environmental stewards, we support reframing history curricula in ways that are attentive to Indigenous ways of knowing the past and relations with other people, beings and the land.

    Ways forward

    Schools have been, and continue to be, harmful spaces for many Indigenous communities, and various aspects of our schooling raise questions about how well-served both Indigenous and non-Indigenous students are in meeting current and future challenges.

    If, as a society, we accept the premise that the transformation of current curricular expectations is possible for schools, then more substantive engagement is required in working toward decolonization.

    Decolonizing curricula is a long-term, challenging process that requires consideration of many things: who sits on curriculum writing teams; the resources allocated to supporting curricular reform; broader school or board-wide policies; and ways of teaching that support reconciliation.

    We encourage history curriculum writing teams to take up these recommendations as part of a broader commitment to reconciliation.

    While not exhaustive, recommendations for curricular reform are a critical step in the future redesign of history curricula. The goal is a history education committed to listening to and learning from Indigenous communities to build more inclusive national stories, now and into the future.

    This is a corrected version of a story originally published June 17, 2025. The earlier story said Michael Marker was from the Lummi Nation instead of saying he was an Arapaho scholar.

    Sara Karn receives funding from the Social Sciences and Humanities Research Council (SSHRC).

    Kristina R. Llewellyn receives funding from the Social Sciences and Humanities Research Council (SSHRC).

    Penney Clark receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC).

    ref. Decolonizing history and social studies curricula has a long way to go in Canada – https://theconversation.com/decolonizing-history-and-social-studies-curricula-has-a-long-way-to-go-in-canada-253679

    MIL OSI

  • MIL-OSI Submissions: Digital government can benefit citizens: how South Africa can reduce the risks and get it right

    Source: The Conversation – Africa (2) – By Busani Ngcaweni, Visiting Adjunct Professor, Wits School of Governance, University of the Witwatersrand

    The digital revolution is reshaping governance worldwide. From the electronic filing of taxes to digital visa applications, technology is making government services more accessible, efficient and transparent.

    South Africa is making progress in its digital journey. In 2024 it climbed to 40th place out of 193 countries, from 65th place in 2022, in the United Nations e-Government Index. This improvement makes the country one of Africa’s digital leaders, surpassing Mauritius and Tunisia.

    South Africa has identified more than 255 government services for digitisation. Already, 134 are available on the National e-Government Portal. This achievement is remarkable. Nevertheless, the shift to digitisation comes with challenges and risks.

    Some countries have weakened the state’s role by rapidly outsourcing key government functions. But South Africa has the opportunity to build a model of digital transformation that strengthens public institutions rather than diminishes them.

    New technologies must bring tangible benefits for citizens. Digital transformation can improve public administration. But, if mismanaged, it could burden taxpayers with costs.

    Benefits

    Digital transformation comes at a cost. This is particularly true if the state fails to use its procurement power to negotiate reasonable prices. Infrastructure upgrades, cybersecurity measures, software licensing and system maintenance require substantial financial investment.

    The question is whether these expenses are a necessary step towards a more efficient and accessible government.

    Two South African examples illustrate that digital transformation can save money and enhance service delivery quality.

    The first is the South African Revenue Service. Its goal is to ensure that taxpayers and tax advisers can use the service from anywhere and at any time. The changes made more than a decade ago show that digital systems can yield substantial financial gains. After introducing e-filing in 2006, the revenue service streamlined tax processes, reduced inefficiencies and achieved higher compliance rates. Ultimately this led to improved revenue collection.

    Similarly, digitising social grant payments has had a number of positive effects. In a chapter of a recent edited volume on public governance, my colleagues and I wrote a case study about how the South African Social Security Agency used basic technologies and platforms like WhatsApp and email to process a grant during the COVID pandemic. The system allowed over 14 million people to apply and paid grants to over 6 million beneficiaries during the first phase of the project.

    South African Social Security Agency annual reports show that over 95% of grant beneficiaries receive their payouts electronically through debit cards, instead of going to cash points. This improves security and lets beneficiaries decide when to get and spend their money.

    There are fears that automation could result in massive job losses. But global experience has shown that digitalisation does not necessarily lead to large-scale retrenchments. Instead it can shift the nature of work to other responsibilities.

    The South African Social Security Agency provides a compelling case. Its transition to digital grant payments did not lead to job losses. Similarly, the expansion of e-filing at the revenue service has not resulted in workforce reductions. In both cases efficiencies improved.

    These cases highlight that digital transformation is reshaping roles rather than displacing employees. Public servants are moving into areas such as cybersecurity, data analysis and AI-driven decision-making.

    Shortcomings and pitfalls

    A number of inefficiencies are at play in government services.

    Firstly, most government digital operations still work with outdated paper-based systems. The lack of a uniform digital identity creates bureaucratic inefficiencies and delays.

    Secondly, fragmented procurement of equipment in government has led to duplicated efforts, increased costs and fruitless expenditure.

    Thirdly, different departments often use isolated and incompatible digital systems. This reduces the mutual benefits of digital transformation. The State IT Agency has been blamed for inefficiencies, procurement failures and questionable spending.

    Fourthly, South Africa’s public service remains fragmented. Citizens still struggle to access government services seamlessly. They often move between departments to complete what should be a single transaction.

    Without a centralised system, departments operate in isolation, duplicating efforts, increasing costs and eroding public trust.




    Read more:
    South Africa’s civil servants are missing skills, especially when it comes to technology – report


    Fifthly, there is a lack of skills. Increasing reliance on digital tools requires expertise in data analytics, cloud computing and automation. Many public servants lack the training to take on these new roles. The National Digital and Future Skills Strategy was introduced in September 2020 to bridge this gap, but its effectiveness depends on its implementation.

    Introducing it in 2020 at the height of the COVID-19 pandemic forced government to make digital leaps which otherwise might have taken longer. To sustain services, technology had to be rapidly adopted, including basic things like holding Cabinet meetings online, using a system rapidly developed by the State Information Technology Agency.

    Sixthly, security concerns complicate the transformation. As government systems become digital, they become vulnerable to cyberattacks. South Africa must put in place cybersecurity infrastructure to prevent identity theft, data breaches and service disruptions. A cyberattack on one department could affect the entire public sector.

    What needs to be done

    Government must streamline procurement, improve coordination and eliminate inefficiencies to ensure interdepartmental collaboration.

    A single, integrated e-government platform would:

    • cut red tape

    • reduce queues

    • increase efficiency.

    Government needs to upskill civil servants and improve their digital literacy.

    Government must create a seamless e-government system that connects services while protecting citizens’ personal information. The success of digitalisation depends on technological advancements as well as the level of trust citizens have in government systems. Without strong security measures, transparency and accountability, even the most sophisticated digital tools will fail to gain public confidence.

    South Africa has the chance to demonstrate that a strong, capable state can successfully integrate technology while safeguarding public interests. It should take full advantage of offers by Microsoft, Amazon and Huawei to support digital skills training in the public sector in a way that does not advantage one company’s technologies over others. Choices of technology must be user-centric, not based on preferences of accounting officers and chief information officers. Leaders of public institutions must be measured on their ability to digitally transform their organisations.

    Busani Ngcaweni is affiliated with the National School of Government, Wits and Johannesburg Universities.

    ref. Digital government can benefit citizens: how South Africa can reduce the risks and get it right – https://theconversation.com/digital-government-can-benefit-citizens-how-south-africa-can-reduce-the-risks-and-get-it-right-254089

    MIL OSI