Category: Global

  • MIL-OSI Global: The Ballad of Wallis Island is a masterpiece of the extraordinary made ordinary

    Source: The Conversation – UK – By Nicola Bishop, Academic Enhancement Lead, De Montfort University

    With The Ballad of Wallis Island, Tom Basden and Tim Key have written a poignant and comical exploration of music, loss, nostalgia and hope.

    The film has been compared to Once (2007) and Local Hero (1983), similarly low-key films that put music at the heart of quiet personal transformations. It also shares common ground with films like The Dig (2021): movingly situated, deliberately gentle in pace and panoramically shot.

    It was made in just 18 days on a tight budget in a typical Welsh summer. A doctor was on hand to stop the actors getting hypothermia when they filmed shots in the sea. Filmed in an eclectic mausoleum of an old manor house, with a charmingly decorated coat of arms in the hallway, leaky taps and socially awkward characters, it is easy to see why romcom giant Richard Curtis called it “one of the great British films of all time”.

    The film takes place on the fictional Wallis Island, home to millionaire Charles (Tim Key), an eccentric and almost obsessive fan of former folk-rock duo McGwyer Mortimer (Herb and Nell, played by Basden and Carey Mulligan). Invited to the island to play a private gig, Herb and Nell face their musical and romantic past, all under the gaze of an ecstatic Charles.




    Pared back and slow paced, the film downplays the complex emotions at its core and leaves the audience to connect their own dots. Instead of verbose dialogue or emotional clashes it uses everyday details to encourage the audience to be observant – a two-second shot that picks out a framed picture on a sideboard, the shadow that passes over a face, a simple gesture.

    Sitting comfortably alongside these big feelings – love, loss, grief, change, nostalgia – are all of the hallmarks of a British comedy classic. Victoria Wood-esque puns (watch out for Dame Judi “Drenched”), slapstick physical gags and pop culture references keep the audience laughing without unbalancing the pathos. It is reminiscent of Wood’s sitcom Dinnerladies (1998-2000) in its breadcrumb trail of slipped-in details that provide laughter in the moment but return to make the audience think twice.

    Basden’s brilliance

    Writer and star Tom Basden has form in the sitcom world. Like his sitcom Plebs (2013), his most recent television project, Here We Go (2022), shares many of the same subtle emotional touches and casually observed titbits of everyday life.

    Here We Go is a wonderful blend of quirky British antics and emotional depth, equally aided by a stellar script and cast. Purportedly filmed as part of a media project by the youngest member of the Jessop family, and sequenced into flashbacks and forwards across several days or weeks, the episodes drip-feed humdrum details that later gain significance. And like Dinnerladies, the funniest observations are those the audience earn by rewatching again and again, not those that are given away.

    The trailer for The Ballad of Wallis Island.

    While Here We Go uses disordered sequencing to reveal the meaning behind tiny details, The Ballad of Wallis Island uses objects that give hints about the past. Pictures of Charles and Marie at gigs, fridge magnets of the places they visited, the ticket stubs and magazine interviews of a super-fan collector. The extraordinariness of now is rooted in the everyday of Charles’s past. Even the source of his wealth rests on a single ordinary moment that has the potential to change all of their lives.

    Key and Basden turn the complex emotions of minutiae into a powerful narrative. A bar of well-used soap on the side of the bathtub, a plastic bag of 20-pence pieces, and a bowl of homemade soup become symbols of emotional connection to the story, while their everydayness stops them from feeling saccharine or soppy.

    This is, as others have called it, a nostalgic film, about loss and moving on. But it also records a present that is made up of tiny glimpses of everyday life, captured like Here We Go, against a backdrop of the familiar and the ordinary. The quietly hopeful takeaway from the film is that small gestures are as memorable as any stadium finale.

    Nicola Bishop does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The Ballad of Wallis Island is a masterpiece of the extraordinary made ordinary – https://theconversation.com/the-ballad-of-wallis-island-is-a-masterpiece-of-the-extraordinary-made-ordinary-259635

    MIL OSI – Global Reports

  • MIL-OSI Global: How your gut bacteria could help detect pancreatic cancer early

    Source: The Conversation – UK – By Falk Hildebrand, Researcher in Bioinformatics, Quadram Institute

    SewCreamStudio/Shutterstock

    Whether you had breakfast this morning or not, your pancreas is working quietly behind the scenes. This vital organ produces the enzymes that help digest your food and the hormones that regulate your metabolism. But when something goes wrong with your pancreas, the consequences can be devastating.

    Pancreatic cancer has earned the grim nickname “the silent killer” for good reason. By the time most patients experience symptoms, the disease has often progressed to an advanced stage where treatment options become severely limited. In the UK alone, over 10,700 new cases and 9,500 deaths from pancreatic cancer were recorded between 2017 and 2019, with incidence rates continuing to rise.

    The most common form, pancreatic ductal adenocarcinoma (PDAC), develops in the pancreatic duct – a tube connecting the pancreas to the small intestine. When tumours form here, they can block the flow of digestive enzymes, causing energy metabolism problems that leave patients feeling chronically tired and unwell. Yet these symptoms are often so subtle that they’re easily dismissed or attributed to other causes.




    Now researchers are turning to an unexpected source for early PDAC detection: faecal samples. While analysing poo might seem an unlikely approach to cancer diagnosis, scientists are discovering that our waste contains a treasure trove of information about our health.

    This is because your gut is home to trillions of bacteria – in fact, bacterial cells in your body outnumber human cells by roughly 40 trillion to 30 trillion. These microscopic residents form complex communities that can reflect the state of your health, including the presence of disease.

    Since PDAC typically develops in the part of the pancreas that connects to the gut, and most people have regular bowel movements, stool samples provide a practical, non-invasive window into what is happening inside the body.

    Pancreatic cancer explained.

    Global evidence builds

    This innovative approach has been validated in studies across several countries, including Japan, China and Spain. The latest breakthrough comes from a 2025 international study involving researchers in Finland and Iran, which set out to examine the relationship between gut bacteria and pancreatic cancer onset across different populations.

    The researchers collected stool samples and analysed bacterial DNA using a technique called 16S rRNA gene amplicon sequencing. Despite the complex name, the principle is straightforward: scientists sequence and compare a genetic region found in every bacterium’s genome, allowing them to both identify and count different bacterial species simultaneously.

    The findings from the Finnish-Iranian study were striking. Patients with PDAC exhibited reduced bacterial diversity in their gut, with certain species either enriched or depleted compared with healthy people. More importantly, the team developed an artificial intelligence model that could accurately distinguish between cancer patients and healthy people based solely on their gut bacterial profiles.
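    The “reduced bacterial diversity” the study reports is typically quantified with a simple measure such as the Shannon index over species abundances. As a rough illustration of the idea (using made-up counts for two hypothetical samples, not data from the study):

    ```python
    import numpy as np

    def shannon_diversity(counts):
        """Shannon index H = -sum(p * ln p) over observed species abundances."""
        counts = np.asarray(counts, dtype=float)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log(p)).sum())

    # Toy 16S-style species counts per sample.
    # These numbers are invented for illustration only.
    healthy_like = [120, 95, 80, 60, 45, 30, 20, 10]  # many species, evenly spread
    pdac_like    = [400, 25, 5, 2, 0, 0, 0, 0]        # fewer species, one dominant

    h_healthy = shannon_diversity(healthy_like)
    h_pdac = shannon_diversity(pdac_like)

    # A lower index reflects the kind of reduced diversity reported in PDAC patients.
    assert h_healthy > h_pdac
    ```

    In practice such per-sample diversity values, together with the full per-species abundance profile, become the input features for a classification model of the sort the study describes.
    
    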

    The field of microbiome research is evolving rapidly. While this study used amplicon sequencing, newer methods like “shotgun metagenomic sequencing” are providing even more detailed insights. This advanced technique captures the entire bacterial genome content rather than focusing on a single gene, offering an unprecedented resolution that can even detect whether bacteria have recently transferred between individuals.

    These technological advances are driving a fundamental shift in how we think about health and disease. We’re moving from a purely human-centred view to understanding ourselves as “human plus microbiome” – complex ecosystems where our bacterial partners play crucial roles in our wellbeing.

    Beyond pancreatic cancer

    The possibilities go well beyond pancreatic cancer. At Quadram, we’re applying similar methods to study colorectal cancer. We’ve already analysed over a thousand stool samples using advanced computational tools that piece together bacterial genomes and their functions from fragmented DNA. This ongoing work aims to reveal how gut microbes behave in colorectal cancer, much like other scientists have done for PDAC.

    The bidirectional interactions between cancer and bacteria are particularly fascinating – not only can certain bacterial profiles indicate disease presence, but the disease itself can alter the gut microbiome, as we previously showed in Parkinson’s disease, creating a complex web of cause and effect that researchers are still unravelling.

    Nonetheless, by understanding how our microbial partners respond to and influence disease, we’re gaining insights that could revolutionise both diagnosis and treatment. Our past research has shown this to be incredibly complex and sometimes difficult to understand, but developments in biotechnology and artificial intelligence are increasingly helping us to make sense of this microscopic world.

    For cancer patients and their families, this and other advancements in microbiome research offer hope for earlier detection. While we’re still in the early stages of translating these findings into clinical practice, the potential to catch this silent killer before it becomes deadly could transform outcomes for thousands of patients, but will require more careful and fundamental research.

    The microbial perspective on health is no longer a distant scientific curiosity – it’s rapidly becoming a practical reality that could save lives. As researchers continue to explore this inner frontier, we’re learning that the answer to some of our most challenging medical questions might be hiding in plain sight – in the waste we flush away each day.

    Falk Hildebrand receives funding from the UKRI, BBSRC, NERC and ERC.

    Daisuke Suzuki receives funding from Japan Society for the Promotion of Science.

    ref. How your gut bacteria could help detect pancreatic cancer early – https://theconversation.com/how-your-gut-bacteria-could-help-detect-pancreatic-cancer-early-259220


  • MIL-OSI Global: Some people are turning to nicotine gum and patches to treat long COVID brain fog

    Source: The Conversation – UK – By Dipa Kamdar, Senior Lecturer in Pharmacy Practice, Kingston University

    Andrey Popov/Shutterstock.com

    Some people with long COVID are turning to an unlikely remedy: nicotine gum and patches. Though typically used to quit smoking, nicotine is now being explored as a possible way to ease symptoms such as brain fog and fatigue.

    One such case, detailed in a recent article in Slate, describes a woman who found significant relief from debilitating brain fog after trying low-dose nicotine gum. Her experience, while anecdotal, aligns with findings from a small but interesting study from Germany.

    The study involved four participants suffering from symptoms related to long COVID. The researcher administered low-dose nicotine patches once daily and noticed marked improvements in the participants’ symptoms. Tiredness, weakness, shortness of breath and trouble with exercise rapidly improved – by day six at the latest.

    For those who had lost their sense of taste or smell, it took longer, but these senses came back fully within 16 days. Although it’s not possible to draw definitive conclusions on cause and effect from such a small study, the results could pave the way for larger studies.




    While some people slowly recover from COVID, others remain unwell for years, especially those who became sick before vaccines were available. Between 3% and 5% of people continue to experience symptoms months, and sometimes even years, after the initial infection. In the UK, long COVID affects around 2.8% of the population.

    Brain fog and other neurological symptoms of long COVID are thought to result from a combination of factors – including inflammation, reduced oxygen to the brain, vascular damage and disruption to the blood-brain barrier. Research continues as there is still a lot we don’t know about this condition.

    The researcher in the German study thinks that long COVID symptoms, such as fatigue, brain fog and mood changes, might partly be due to problems with a brain chemical called acetylcholine, a neurotransmitter. This chemical is important for many functions in the body, including memory, attention and regulating mood.

    Normally, acetylcholine works by attaching to special “docking sites” on cells called nicotinic acetylcholine receptors, which help send signals in the brain and nervous system. But the COVID virus may interfere with these receptors, either by blocking them or disrupting how they work. When this happens, the brain may not be able to send signals properly, which could contribute to the mental and physical symptoms seen in long COVID.

    So why would nicotine potentially be useful? Nicotine binds to the same receptors and might help restore normal signalling, but the idea that it displaces the virus directly is still speculative.

    Nicotine is available in different forms, such as patches, gum, lozenges and sprays. Using nicotine through the skin, for example, with a patch, keeps the amount in the blood steady without big spikes. Because of this, people in the study didn’t seem to develop a dependence on it.

    Chewing nicotine gum or using a lozenge delivers nicotine through the lining of the mouth and can cause spikes in blood nicotine levels. But unlike a patch, which delivers a steady dose, the user has more control over how much nicotine they take in when using gum or lozenges.

    There are mixed results on the effectiveness of nicotine on cognitive functions such as memory and concentration. But most studies agree that it can enhance attention. Larger studies are needed to gauge the effectiveness of nicotine specifically for long COVID symptoms.

    An estimated 2.8% of people in the UK have long COVID.
    Chaz Bharj/Shutterstock.com

    Not without risks

    Despite its benefits, nicotine is not without risks. Even in gum or patch form, it can cause side-effects like nausea, dizziness, increased heart rate and higher blood pressure.

    Some of these stimulant effects on heart rate may be useful for people with long COVID symptoms such as exercise intolerance. But this needs to be closely monitored. Long-term use may also affect heart health. For non-smokers, the risk of developing a nicotine dependency is a serious concern.

    So are there any other options to treat long COVID symptoms?

    There are some studies looking at guanfacine in combination with N-acetylcysteine, which have shown improvement in brain fog in small groups of people. There has been at least one clinical trial exploring nicotine for mild cognitive impairment in older adults, though not in the context of long COVID. Given that anecdotal reports and small studies continue to draw attention, it is likely that targeted trials are in development.

    The main recommendations by experts are to implement lifestyle measures. Slowly increasing exercise, having a healthy diet, avoiding alcohol, drugs and smoking, sleeping enough, practising mindfulness and doing things that stimulate the brain are all thought to help brain fog.

    For those grappling with long COVID or persistent brain fog, the idea of using nicotine patches or gum might be tempting. But experts caution against self-medicating with nicotine. The lack of standardised dosing and the potential for addiction and unknown long-term effects make it a risky experiment.

    While nicotine isn’t a cure and may carry real risks, its potential to ease long COVID symptoms warrants careful study. For now, those battling brain fog should approach it with caution – and always under medical supervision. What’s clear, though, is the urgent need for more research into safe, effective treatments for the lingering effects of COVID.

    Dipa Kamdar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Some people are turning to nicotine gum and patches to treat long COVID brain fog – https://theconversation.com/some-people-are-turning-to-nicotine-gum-and-patches-to-treat-long-covid-brain-fog-259093


  • MIL-OSI Global: Iran’s history has been blighted by interference from foreign powers

    Source: The Conversation – UK – By Simin Fadaee, Senior Lecturer in Sociology, University of Manchester

    Iranians commemorate the 1979 revolution in Qom, central Iran. Mostafameraji via Wikimedia Commons, CC BY-NC-SA

    Israel’s recent surprise attack on Iran was ostensibly aimed at neutralising Iran’s nuclear programme, but it didn’t just damage nuclear installations. It killed scientists, engineers and senior military personnel.

    Meanwhile, citizens with no ties to the government or military became “collateral damage”. For 11 days, Israel’s attacks intensified across Tehran and other major cities.

    When the US joined the attack, dropping its bunker-buster bombs on sites in central Iran on June 21, it threatened to push the region closer to large-scale conflict. Israel’s calls for regime change in Iran were joined by the US president, Donald Trump, who took to social media on June 22 with the message: “if the current Iranian Regime is unable to MAKE IRAN GREAT AGAIN, why wouldn’t there be a Regime change??? MIGA!!!”

    Trump’s remarks are reminders of past US interventions. The threat of regime change by the most powerful state in the world carries particular weight in Iran, where memories of foreign-imposed coups and covert operations remain vivid and painful.




    In the early 1890s, Iran was rocked by a popular uprising after the shah granted a British company exclusive rights to the country’s tobacco industry. The decision was greeted with anger and in 1891 the country’s senior cleric, Grand Ayatollah Mirza Shirazi, issued a fatwa against tobacco use.

    A mass boycott ensued – even the shah’s wives reportedly gave up the habit. When it became clear that the boycott was going to hold, the shah cancelled the concession in January 1892. It was a clear demonstration of people power.

    This event is thought to have played a significant role in the development of the movement that led to the Constitutional Revolution of 1905-1911 and the establishment of a constitution and parliament in Iran.

    Rise of the Pahlavis

    Reza Shah, who founded the Pahlavi dynasty – which would be overthrown in the 1979 revolution and replaced by the Islamic Republic – rose to power following a British-supported coup in 1921.

    Autocrat: Mohammad Reza Pahlavi.

    During the first world war, foreign interference weakened Iran and the ruling Qajar dynasty. In 1921, with British support, army officer Reza Khan and politician Seyyed Ziaeddin Tabatabaee led a coup in Tehran. Claiming to be acting to save the monarchy, they arrested key opponents. By 1923, Reza Khan had become prime minister.

    In 1925, Reza Khan unseated the Qajars and founded the Pahlavi dynasty, becoming Reza Shah Pahlavi. This was a turning point in Iran’s history, marking the start of British dominance. The shah’s authoritarian rule focused on centralisation, modernisation and secularisation. It set the stage for the factors that would eventually lead to the 1979 revolution.

    In 1941, concerned at the close relationship Pahlavi had developed with Nazi Germany, Britain and its allies once again intervened in Iranian politics, forcing Pahlavi to abdicate. He was exiled to South Africa and his 22-year-old son, Mohammad Reza, was named shah in his place.

    The 1953 coup

    Mohammad Mosaddegh became Iran’s first democratically elected prime minister in 1951. He quickly began to introduce reforms and challenge the authority of the shah. Despite a sustained campaign of destabilisation, Mosaddegh retained a high level of popular support, which he used to push through his radical programme. This included the nationalisation of Iran’s oil industry, which was effectively controlled by the Anglo-Persian Oil Company – later British Petroleum (BP).

    Mohammad Mosaddegh at his court martial, photographed by Ebrahim Golestan.
    Ebrahim Golestan via Wikimedia Commons

    In 1953, he was ousted in a CIA and MI6-backed coup and placed under house arrest. The shah, who had fled to Italy during the unrest, returned to power with western support.

    Within a short time, Mohammad Reza Shah Pahlavi established an authoritarian regime that governed through repression and intimidation. He outlawed all opposition parties, and numerous activists involved in the oil nationalisation movement were either imprisoned or forced into exile.






    The 1979 revolution: the oppression continues

    The shah’s rule became increasingly authoritarian and was also marked by the lavish lifestyles of the ruling elite and increasing poverty of the mass of the Iranian people. Pahlavi increasingly relied on his secret police, the Bureau for Intelligence and Security of the State.

    Meanwhile, a scholar and Islamic cleric named Ruhollah Khomeini had been rising in prominence, especially after 1963, when Pahlavi’s unpopular land reforms mobilised a large section of society against his rule. His growing prominence brought him into confrontation with the government and in 1964 he was sent into exile. He remained abroad, living in Turkey, Iraq and France.

    By 1964 cleric Ruhollah Khomeini had become the focus for some anti-government protests in Iran.
    emam.com via Wikimedia Commons

    By 1978 a diverse alliance primarily made up of urban working and middle-class citizens had paralysed the country. While united in their resistance to the monarchy, participants were driven by a variety of ideological beliefs, including socialism, communism, liberalism, secularism, Islamism and nationalism. The shah fled into exile on January 16 1979 and Khomeini returned to Iran, which in March became an Islamic Republic with Khomeini at its head.

    But the US was not finished in its attempts to destabilise Iran. In 1980, Washington backed Saddam Hussein in initiating a brutal eight-year war, which claimed hundreds of thousands of Iranian lives and severely disrupted the country’s efforts at political and economic reconstruction.

    Iran and the US have remained bitter foes. Over the years ordinary Iranians have suffered tremendously under rounds of US-imposed sanctions, which have all but destroyed the economy in recent years.

    This new wave of foreign aggression has arrived at a time of significant domestic unrest within Iran. Since the Woman, Life, Freedom protests, which began in September 2022 after the death of Mahsa Amini at the hands of the morality police, there has been a general groundswell of demand for social justice and democracy.

    But the convergence of external aggression and internal demands has brought national sovereignty and self-determination to the forefront, as it did during previous major struggles. While world powers gamble with Iran’s future, it is the Iranian people through their struggles and unwavering push for justice and democracy who must determine the country’s future.

    Simin Fadaee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Iran’s history has been blighted by interference from foreign powers – https://theconversation.com/irans-history-has-been-blighted-by-interference-from-foreign-powers-259700


  • MIL-OSI Global: Why there’s a growing backlash against plant-based diets

    Source: The Conversation – UK – By Jonathan Beacham, Research Fellow, University of Bristol Business School, University of Bristol

    Geinz Angelina/Shutterstock

    People in the UK are eating too much meat – especially processed meat – according to a recent report from the Food Foundation, a UK charity.

    The report recommends revisiting school food standards, which currently advise schools to serve meat three times a week. The consequence? Children often eat a higher proportion of processed meat than adults.

    The effects of meat-heavy diets are well documented. Some analyses estimate that overconsumption of meat, especially processed red meat, costs the global economy around £219 billion annually, in terms of harms to human health and the environment. At the same time, a growing body of evidence shows that a transition toward more plant-based diets is not just beneficial, but essential.

    And yet efforts to reduce meat consumption haven’t always been well received. In Paris, for instance, the mayor’s initiative to remove meat from municipal canteen menus twice a week triggered an angry backlash from unions and workers who called for the return of steak frites.

    A few years ago, meat consumption in the UK was falling, and interest in initiatives like Veganuary was surging. Venture capital flooded into plant-based startups, from cricket burgers to hemp milk.

    But enthusiasm, and investment, has since declined. Meanwhile, populism and “culture war” narratives have fuelled social media misinformation about food, diet and sustainability, hampering progress. So what has changed? And why is meat once again a flashpoint in the food debate?

    Working with the H3 Consortium, which explores pathways to food system transformation in the UK, our research has focused on why the backlash against plant-based diets is growing and what it means for people, animals and the planet.

    Part of the answer lies in coordinated messaging campaigns that frame meat and dairy not just as “normal” but as “natural” and essential to a balanced diet. One example is the Let’s Eat Balanced campaign, run by the Agriculture and Horticulture Development Board since 2021. It promotes meat and dairy as key sources of micronutrients such as Vitamin B12 and implicitly positions plant-based diets as nutritionally inadequate.

    But here’s the irony: many intensively farmed animals don’t get B12 from their diet naturally. Their feed is supplemented with vitamins and minerals, just as vegan diets are supplemented. So is meat really a more “natural” source of B12 than a pill?

    That raises a broader question: what could a fair and sustainable transition to plant-based protein look like – not just for consumers, but for farmers and rural communities? Some analyses warn that rapid shifts in land use toward arable farming could have serious unintended consequences, such as disrupting rural economies and threatening livelihoods.

    There are also legitimate questions about the healthiness of meat and dairy alternatives. Despite the early hype around alternative proteins, many products fall under the category of ultra-processed foods (UPFs) – a red flag for consumers wary of additives and artificial ingredients.

    The popularity of books like Chris van Tulleken’s Ultra-Processed People has stoked concerns about emulsifiers, ingredients used to bind veggie burgers or prevent vegan milk from curdling, and some headlines have asked whether they “destroy” our gut health.

    Still, it’s a leap to suggest that conventional red meat is the healthier alternative. The health risks of processed meat are well established, especially the carcinogenic effects of nitrites used to keep meat looking fresh in packaging.

    Some people suggest eating chicken instead of red meat because it produces fewer greenhouse gas emissions. But chicken production has problems of its own, including river pollution from manure and a heavy reliance on soy feed, which is vulnerable to political and trade disruption.

    There’s a strong case for reducing meat consumption, and the scientific evidence to support it is robust. But understanding the backlash against plant-based eating is essential if we want to make meaningful progress. For now, meat is not disappearing from our diets. In fact, the food fight may be just getting started.

    Jonathan Beacham receives funding from the UKRI Strategic Priorities Fund (grant ref: BB/V004719/1).

    David M. Evans receives funding from the UKRI Strategic Priorities Fund (grant ref: BB/V004719/1). He is affiliated with Defra (the Department of Environment, Food and Rural Affairs) as a member of their Social Science Expert Group.

    ref. Why there’s a growing backlash against plant-based diets – https://theconversation.com/why-theres-a-growing-backlash-against-plant-based-diets-259455


  • MIL-Evening Report: Macron invites all New Caledonia stakeholders for Paris talks

    By Patrick Decloitre, RNZ Pacific correspondent French Pacific desk

    French President Emmanuel Macron has sent a formal invitation to “all New Caledonia stakeholders” for talks in Paris on the French Pacific territory’s political and economic future to be held on July 2.

    The confirmation came on Thursday in the form of a letter, dated June 24, sent individually to an undisclosed list of recipients.

    The talks follow a series of roundtables fostered earlier this year by French Minister for Overseas Manuel Valls.

    But the latest talks, held in New Caledonia under a so-called “conclave” format, stalled on May 8.

    This was mainly because several main components of the pro-France (anti-independence) parties said the draft agreement proposed by Valls was tantamount to a form of independence, which they reject.

    The project implied that New Caledonia’s future political status vis-à-vis France could be an associated independence “within France”, with a transfer of key powers (justice, defence, law and order, foreign affairs, currency), a dual New Caledonia-France citizenship and an international standing.

    Instead, the pro-France Rassemblement-LR and Loyalistes suggested another project of “internal federalism” which would give more powers (including on tax matters) to each of the three provinces, a notion often criticised as a de facto partition of New Caledonia.

    Local elections issue
    In May 2024, deadly riots broke out in New Caledonia over the sensitive issue of voter eligibility at local elections, resulting in 14 deaths and more than 2 billion euros (NZ$3.8 billion) in damage.

    In his letter, Macron writes that although Valls “managed to restore dialogue…this did not allow reaching an agreement on (New Caledonia’s) institutional future”.

    “This is why I decided to host, under my presidency, a summit dedicated to New Caledonia, bringing together all of the territory’s stakeholders”.

    Macron also wrote that “beyond institutional topics, I wish that our exchanges can also touch on (New Caledonia’s) economic and societal issues”.

    Macron had made earlier announcements, including on 10 June 2025 on the margins of the UNOC Oceans Summit in Nice, France, when, in a speech to Pacific leaders attending a “Pacific-France” summit, he dedicated a significant part of his remarks to the situation in New Caledonia.

    “Our exchanges will last as long as it takes so that the heavy topics . . . can be dealt with with all the seriousness they deserve”.

    Macron also points out that after New Caledonia’s “crisis” broke out on 13 May 2024, “the tension was too high to allow for a dialogue between all the components of New Caledonia’s society”.

    Letter sent by French President Emmanuel Macron to New Caledonia’s stakeholders for Paris talks on 2 July 2025. Image: RNZ Pacific

    A new deal?
    The main political objective of the talks remains to reach a comprehensive agreement between all local political stakeholders – one that would define the French Pacific territory’s political future and status.

    This would then allow the replacement of the 27-year-old Nouméa Accord, signed in 1998.

    That pact put a heavy focus on the notions of “living together” and “common destiny” for New Caledonia’s indigenous Kanaks and all of the other components of its ethnically and culturally diverse society.

    It also envisaged an economic “rebalancing” between the Northern and Islands provinces and the more affluent Southern province, where the capital Nouméa is located.

    The Nouméa Accord also contained provisions to hold three referendums on self-determination.

    The three polls took place in 2018, 2020 and 2021, each resulting in a majority rejecting independence.

    But the last referendum, in December 2021, was largely boycotted by the pro-independence movement.

    ‘Examine the situation’
    According to the Nouméa Accord, after the referendums, political stakeholders were to “examine the situation thus created”, Macron recalled.

    But despite several attempts, including under previous governments, to promote political talks, the situation has remained deadlocked and increasingly polarised between the pro-independence and the pro-France camps.

    A few days after the May 2024 riots, Macron made a trip to New Caledonia, calling for the situation to be appeased so that talks could resume.

    In his June 10 speech to Pacific leaders, Macron also mentioned a “new project” and in relation to the past referendums process, pledged “not to make the same mistakes again”.

    He said he believed the referendum, as an instrument, was not necessarily adapted to Melanesian and Kanak cultures.

    In practice, the Paris “summit” would also involve French Minister for Overseas Manuel Valls.

    The list of invited participants would include all parties, pro-independence and pro-France, represented at New Caledonia’s Congress (the local parliament).

    But it would also include a number of economic stakeholders, a delegation of New Caledonia’s mayors, and representatives of civil society and NGOs.

    Talks could also come in several formats, with the political side being treated separately.

    The pro-independence platform FLNKS (Kanak and Socialist National Liberation Front) is to decide this weekend whether it will take part in the Paris talks.

    FLNKS leader Christian Téin . . . still facing charges over last year’s riots, but released from prison in France providing he does not return to New Caledonia and checks in with investigating judges. Image: Opinion International

    Will Christian Téin take part?
    During a whirlwind visit to New Caledonia in June 2024, Macron met Christian Téin, the leader of the pro-independence CCAT (Field Action Coordination Cell), created by Union Calédonienne (UC).

    Téin was arrested and jailed in mainland France.

    In August 2024, while in custody in the Mulhouse prison (northeastern France), he was elected in absentia as president of a UC-dominated FLNKS.

    Even though he still faces charges for allegedly being one of the masterminds of the May 2024 riots, Téin was released from jail on June 12 on condition that he does not travel to New Caledonia and reports regularly to French judges.

    On the pro-France side, Téin’s release triggered mixed reactions, some of them angry.

    Hard-line pro-France components said the Kanak leader’s participation in the Paris talks was simply “unthinkable”.

    Pro-independence MP Emmanuel Tjibaou said Téin’s release was “a sign of appeasement”, but that his participation was probably subject to “conditions”.

    “But I’m not the one who makes the invitations,” he told public broadcaster NC la 1ère on 15 June 2025.

    FLNKS spokesman Dominique Fochi said in a statement that Téin’s participation in the talks had earlier been declared a prerequisite.

    “Now our FLNKS president has been released. He’s the FLNKS boss and we are awaiting his instructions,” Fochi said.

    At the earlier roundtables this year, the FLNKS delegation was headed by Emmanuel Tjibaou, president of Union Calédonienne (UC), the main and dominant component of the FLNKS.

    ‘Concluding the decolonisation process’, says Valls
    In a press conference on Tuesday in Paris, Valls elaborated further on the upcoming Paris talks.

    “Obviously there will be a sequence of political negotiations which I will lead with all of New Caledonia’s players, that is all groups represented at the Congress. But there will also be an economic and social sequence with economic, social and societal players who will be invited”, Valls said.

    During question time at the French National Assembly in Paris on 3 June 2025, Valls said he remained confident that it was “still possible” to reach an agreement and to “reconcile” the “contradictory aspirations” of the pro-independence and pro-France camps.

    During the same sitting, pro-France New Caledonia MP Nicolas Metzdorf decried what he termed “France’s lack of ambition” and his camp’s feeling of being “let down”.

    The other MP for New Caledonia, pro-independence Emmanuel Tjibaou, also took the floor, calling on France to “close the colonial chapter” and to “take its part in the conclusion of the emancipation process” of New Caledonia.

    “With the President of the Republic and the Prime Minister, and the political forces, we will make offers, while concluding the decolonisation process, the self-determination process, while respecting New Caledonians’ words and at the same time not forgetting history, and the past that has led to the disaster of the 1980s and the catastrophe of May 2024,” he said.

    This article is republished under a community partnership agreement with RNZ.

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: Semen allergies may be surprisingly common – here’s what you need to know

    Source: The Conversation – UK – By Michael Carroll, Reader / Associate Professor in Reproductive Science, Manchester Metropolitan University

    Yuriy Maksymiv/Shutterstock

    Imagine itching, burning, swelling, or even struggling to breathe just moments after sex. For a small but growing number of women, that’s not an awkward anecdote – it’s a medical condition. It’s called seminal plasma hypersensitivity (SPH) – an allergy to semen.

    This rare but underdiagnosed allergy isn’t triggered by sperm cells, but by proteins in the seminal plasma – the fluid that carries sperm. First documented in 1967, when a woman was hospitalised after a “violent allergic reaction” to sex, SPH is now recognised as a type 1 hypersensitivity, the same category as hay fever, peanut allergy and cat dander.

    Symptoms range from mild to severe. Some women experience local reactions: burning, itching, redness and swelling of the vulva or vagina. Others develop full-body symptoms: hives, wheezing, dizziness, runny nose and even anaphylaxis, a potentially life-threatening immune response.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Until 1997, SPH was thought to affect fewer than 100 women globally. But a study led by allergist Jonathan Bernstein found that among women reporting postcoital symptoms, nearly 12% could be classified as having probable SPH.

    I conducted a small, unpublished survey in 2013 and found a similar 12% rate. The true figure may be higher still. Many cases go unreported, misdiagnosed, or dismissed as STIs, yeast infections, or general “sensitivity”. One revealing clue: symptoms disappear when condoms are used.

    A 2024 study reinforced this finding, suggesting that SPH is both more common and more commonly misdiagnosed than previously believed.

    The problem isn’t the sperm

    The main allergen appears to be prostate-specific antigen (PSA): a protein found in all seminal plasma, not just that of a particular partner. In other words, women can develop a reaction to any man’s semen, not just their regular partner’s.

    There’s also evidence of cross-reactivity. For example, Can f 5, a protein found in dog dander, is structurally similar to human PSA. So women allergic to dogs may find themselves reacting to semen too. In one unusual case, a woman with a Brazil nut allergy broke out in hives after sex, probably due to trace nut proteins in her partner’s semen.

    Diagnosis begins with a detailed sexual and medical history, often followed by skin prick testing with the partner’s semen or blood tests for PSA-specific antibodies (IgE).

    In my own research involving symptomatic women, we demonstrated that testing with washed spermatozoa, free from seminal plasma, can help confirm that the allergic trigger is not the sperm cells themselves, but proteins in the seminal fluid.

    And it’s not just women. It’s possible some men may be allergic to their own semen.

    This condition, known as post-orgasmic illness syndrome (POIS), causes flu-like symptoms, such as fatigue, brain fog and muscle aches, immediately after ejaculation. It’s believed to be an autoimmune or allergic reaction. Diagnosis is tricky, but skin testing with a man’s own semen can yield a positive reaction.

    What about fertility?

    Seminal plasma hypersensitivity doesn’t cause infertility directly, but it can complicate conception. Avoiding the allergen – usually the most effective treatment for allergies – isn’t feasible for couples trying to conceive.

    Treatments include prophylactic antihistamines (antihistamines taken before anticipated exposure to the allergen, to prevent or reduce the severity of allergic reactions), anti-inflammatories and desensitisation using diluted seminal plasma. In more severe cases, couples may choose IVF with washed sperm, bypassing the allergic trigger altogether.

    It’s important to note: SPH is not a form of infertility. Many women with SPH have conceived successfully – some naturally, others with medical support.

    So why don’t more people know about this?

    Because sex-related symptoms often go unspoken. Embarrassment, stigma and a lack of awareness among doctors mean that many women suffer in silence. In Bernstein’s 1997 study, almost half of the women who had symptoms after sex had never been checked for SPH, and many had spent years being misdiagnosed and getting the wrong treatment.

    If sex routinely leaves you itchy, sore or unwell – and condoms help – you might be allergic to semen.

    It’s time to bring this hidden condition out of the shadows and into the consultation room.

    Michael Carroll does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Semen allergies may be surprisingly common – here’s what you need to know – https://theconversation.com/semen-allergies-may-be-surprisingly-common-heres-what-you-need-to-know-259308

    MIL OSI – Global Reports

  • MIL-OSI Global: What’s the difference between an eating disorder and disordered eating?

    Source: The Conversation – Global Perspectives – By Gemma Sharp, Researcher in Body Image, Eating and Weight Disorders, Monash University

    PIKSEL/Getty

    Following a particular diet or exercising a great deal are common and even encouraged in our health and image-conscious culture. With increased awareness of food allergies and other dietary requirements, it’s also not uncommon for someone to restrict or eliminate certain foods.

    But these behaviours may also be the sign of an unhealthy relationship with food. You can have a problematic pattern of eating without being diagnosed with an eating disorder.

    So, where’s the line? What is disordered eating, and what is an eating disorder?

    What is disordered eating?

    Disordered eating describes negative attitudes and behaviours towards food and eating that can lead to a disturbed eating pattern.

    It can involve:

    • dieting

    • skipping meals

    • avoiding certain food groups

    • binge eating

    • misusing laxatives and weight-loss medications

    • inducing vomiting (sometimes known as purging)

    • exercising compulsively.

    Disordered eating is the term used when these behaviours are not frequent and/or severe enough to meet an eating disorder diagnosis.

    Not everyone who engages in these behaviours will develop an eating disorder. But disordered eating – particularly dieting – usually precedes an eating disorder.

    What is an eating disorder?

    Eating disorders are complex psychiatric illnesses that can negatively affect a person’s body, mind and social life. They’re characterised by persistent disturbances in how someone thinks, feels and behaves around eating and their bodies.

    To make a diagnosis, a qualified health professional will use a combination of standardised questionnaires, as well as more general questioning. These will determine how frequent and severe the behaviours are, and how they affect day-to-day functioning.

    Examples of clinical diagnoses include anorexia nervosa, bulimia nervosa, binge eating disorder and avoidant/restrictive food intake disorder.

    How common are eating disorders and disordered eating?

    The answer can vary quite radically depending on the study and how it defines disordered behaviours and attitudes.

    An estimated 8.4% of women and 2.2% of men will develop an eating disorder at some point in their lives. This is most common during adolescence.

    Disordered eating is also particularly common in young people, with 30% of girls and 17% of boys aged 6–18 years reporting engaging in these behaviours.

    Although the research is still emerging, it appears disordered eating and eating disorders are even more common in gender diverse people.

    Can we prevent eating disorders?

    There is some evidence eating disorder prevention programs that target risk factors – such as dieting and concerns about shape and weight – can be effective to some extent in the short term.

    The issue is most of these studies last only a few months. So we can’t determine whether the people involved went on to develop an eating disorder in the longer term.

    In addition, most studies have involved girls or women in late high school and university. By this age, eating disorders have usually already emerged. So, this research cannot tell us as much about eating disorder prevention and it also neglects the wide range of people at risk of eating disorders.

    Is orthorexia an eating disorder?

    In defining the line between eating disorders and disordered eating, orthorexia nervosa is a contentious issue.

    The name literally means “proper appetite”. The condition involves a pathological obsession with proper nutrition, characterised by a restrictive diet and rigid avoidance of foods believed to be “unhealthy” or “impure”.

    These disordered eating behaviours need to be taken seriously as they can lead to malnourishment, loss of relationships, and overall poor quality of life.

    However, orthorexia nervosa is not an official eating disorder in any diagnostic manual.

    Additionally, with the popularity of special diets (such as keto or paleo), time-restricted eating, and dietary requirements (for example, gluten-free) it can sometimes be hard to decipher when concerns about diet have become disordered, or may even be an eating disorder.

    For example, around 6% of people have a food allergy. Emerging evidence suggests they are also more likely to have restrictive types of eating disorders, such as anorexia nervosa and avoidant/restrictive food intake disorder.

    However, following a special diet such as veganism, or having a food allergy, does not automatically lead to disordered eating or an eating disorder.

    It is important to recognise people’s different motivations for eating or avoiding certain foods. For example, a vegan may restrict certain food groups due to animal rights concerns, rather than disordered eating symptoms.

    What to look out for

    If you’re concerned about your own relationship with food or that of a loved one, here are some signs to look out for:

    • preoccupation with food and food preparation

    • cutting out food groups or skipping meals entirely

    • obsession with body weight or shape

    • large fluctuations in weight

    • compulsive exercise

    • mood changes and social withdrawal.

    It’s always best to seek help early. But it is never too late to seek help.


    In Australia, if you are experiencing difficulties in your relationships with food and your body, you can contact the Butterfly Foundation’s national helpline on 1800 33 4673 (or via their online chat).

    For parents concerned their child might be developing concerning relationships with food, weight and body image, Feed Your Instinct highlights common warning signs, provides useful information about help seeking and can generate a personalised report to take to a health professional.

    Gemma Sharp receives funding from an NHMRC Investigator Grant. She is a Professor and the Founding Director and Member of the Consortium for Research in Eating Disorders, a registered charity.

    ref. What’s the difference between an eating disorder and disordered eating? – https://theconversation.com/whats-the-difference-between-an-eating-disorder-and-disordered-eating-256787

    MIL OSI – Global Reports

  • MIL-OSI Global: How Britain’s new political divide delivers voters to Reform and the Greens

    Source: The Conversation – UK – By John Curtice, Professor of Politics, University of Strathclyde and Senior Research Fellow, National Centre for Social Research

    The outcome of last year’s general election left an important question hanging in the air. Could the UK’s traditional system of two-party politics continue to survive?

    True, power did change hands in a familiar fashion. A majority Conservative government was replaced by a majority Labour one. Indeed, the new administration won an overall majority of no less than 174.

    However, the new government was elected with a lower share of the vote than that secured by any previous majority government. At the same time, the Conservatives won by far their lowest share of the vote ever. For the first time since 1922, when Labour replaced the then Liberal party as the Conservatives’ principal competitor, Labour and the Conservatives together won fewer than three in five of all votes cast.

    Over the past 12 months, the foundations of Britain’s two-party system have come to look even shakier. Nigel Farage’s Reform UK party tops the polls. Only just over two in five of those who express a party preference say they would vote Labour or Conservative – a record low.

    New analysis of last year’s election published by the National Centre for Social Research as part of the British Social Attitudes report confirms that Britain’s two-party system is in poor health.


    Want more politics coverage from academic experts? Every week, we bring you informed analysis of developments in government and fact check the claims being made.

    Sign up for our weekly politics newsletter, delivered every Friday.


    The traditional anchor of Conservative and Labour support – social class – has been cast adrift. The ideological underpinning of the battle between them, the division between left and right, has been replaced by a division between social conservatives and social liberals. This second division draws people towards Reform and the Greens. At the same time, low levels of trust and confidence in how they are being governed are also encouraging voters to back these two challenger parties.

    From class divide to identity politics

    Historically, middle-class voters voted Conservative, while their working-class counterparts were more likely to support Labour. In decline ever since the advent of New Labour, that pattern disappeared entirely in 2019 in the wake of a Brexit debate that drew pro-Leave working-class voters towards the Conservatives and pro-Remain middle-class supporters towards Labour.




    Read more:
    Know your place: what happened to class in British politics – a podcast series from The Conversation Documentaries


    Although Brexit was no longer in the news, the traditional link between social class and voting Conservative or Labour did not reappear in 2024. Labour won the support of just 30% of those in routine and semi-routine occupations, compared with 42% of those in professional and managerial jobs. At 17% and 21% respectively, the equivalent figures for Conservative support are also little different from each other.

    As in the EU referendum, what now shapes how people vote is their age and education, not the job they do. Younger voters and graduates are more likely to vote Labour, while older people and those with fewer educational qualifications are more inclined to vote Conservative.

    The problem is that the two parties now face competition for these demographic groups from the Greens and Reform. Last year the Greens won as much as 21% of the vote among under-25s. Reform secured 25% among those who do not have an A-level or its equivalent, nearly matching the Tories.

    Green party co-leader Carla Denyer speaks in the House of Commons.
    Flickr/UK Parliament, CC BY-NC-ND

    Equally, Brexit was not a divide between “left” and “right” – that is, between those who think the government should do more to reduce inequality and those who are more concerned about growing the whole economic pie. It was a battle between social liberals and social conservatives – between those who value living in a diverse society and those who believe that too much diversity undermines social cohesion.

    That second divide has now come to matter as much as the left-right divide in shaping how people vote – and thereby helps draw support away from the Conservatives and Labour.

    While the Conservatives are more popular among social conservatives, so also are Reform. Indeed, the competition between the two parties for these voters has intensified since the election. By this spring, Reform, on 37%, was winning the battle for their support, with the Conservatives supported by only 26%. Equally, although Labour are relatively popular among social liberals, both the Greens and the Liberal Democrats find them relatively fertile territory too. Three in ten (31%) social liberals backed the Liberal Democrats or the Greens last year, a figure that now stands at 37%.

    Meanwhile, trust and confidence in government remain at a low ebb. For example, nearly half (46%) say they “almost never” trust governments of any party to put the interests of the country above those of their own parties. This perception is seemingly accompanied by a reluctance to vote for the parties of government too. Nearly one in four (24%) of those who almost never trust governments backed Reform last year, while one in ten (10%) supported the Greens.

    This, of course, is not the first time that Britain’s two-party system has been under challenge. In the early 1980s the Liberal/SDP Alliance threatened to “break the mould of British politics”. In spring 2019, at the height of the Brexit impasse, the Brexit Party and the Liberal Democrats appeared poised to upset the traditional order. This time, however, the challenge to the Conservative/Labour duopoly seems more profound.

    John Curtice currently receives funding from the Economic and Social Research Council.

    ref. How Britain’s new political divide delivers voters to Reform and the Greens – https://theconversation.com/how-britains-new-political-divide-delivers-voters-to-reform-and-the-greens-259613

    MIL OSI – Global Reports

  • MIL-OSI Global: Drone footage captured orcas crafting tools out of kelp – and using them for grooming

    Source: The Conversation – Global Perspectives – By Vanessa Pirotta, Postdoctoral Researcher and Wildlife Scientist, Macquarie University

    Sara Jenkins/500px/Getty

    The more we learn about orcas, the more remarkable they are. These giant dolphins are the ocean’s true apex predator, preying even on great white sharks.

    They’re very intelligent and highly social. Their clans are matrilineal, centred on an older matriarch who teaches her clan her own vocalisations. The species is also one of only six known to experience menopause, pointing to the social importance of older females beyond their reproductive years. Different orca groups even have fashion trends, such as one pod that returned to wearing salmon as hats, decades after the practice went out of vogue.

    But for all their intelligence, one thing has been less clear. Can orcas actually make tools, as humans, chimps and other primates do? In research published today by United States and British researchers, we have an answer: yes.

    Using drones, researchers watched as resident pods in the Salish Sea broke off the ends of bull kelp stalks and rolled them between their bodies. This, the researchers say, is likely to be a grooming practice – the first tool-assisted grooming seen in marine animals.

    This video shows whales using kelp tools in what appears to be social grooming behaviour. Credit: Center for Whale Research.

    Self kelp: why would orcas make tools?

    Tool use and tool making have been well documented in land-based species. But it’s less common among marine species. This could be partly due to the challenge of observing them.

    This field of research expands what we know these animals are capable of. Not only are orcas spending time making kelp into a grooming tool, but they’re doing it socially – two orcas have to work together to rub the kelp against their bodies.

    To make the tool, the orcas use their teeth to grab a stalk of kelp by its “stipe” – the long, narrow part near the seaweed’s holdfast, where it tethers to the rock. They use their teeth, motion of their body and the drag of the kelp to break off a piece of this narrow stipe.

    Next, they approach a social partner, flip the length of the kelp onto their rostrum (their snout-like projection) and press their head and the kelp against their partner’s flank. The two orcas use their fins and flukes to trap the kelp while rolling it between their bodies. During this contact, the orcas would roll and twist their bodies – often in an exaggerated S-shaped posture. A similar posture has been seen among orcas in other groups, who adopt it when rubbing themselves on sand or pebbles.

    Why do it? The researchers suggest this practice may be social skin-maintenance. Bottlenose dolphin mothers are known to remove dead skin from their calves using their flippers, and tool-assisted grooming of a partner has been seen in primates, though infrequently and usually in captivity.

    Orcas across different social groups, ages and genders were seen doing this. But they were more likely to groom close relatives or those of similar age. There was some evidence suggesting whales with skin conditions were more likely to do the kelp-based grooming.

    Humpback whales are known to wear kelp in a practice known as “kelping”. But this study covers a different behaviour, which the authors dub “allokelping” (kelping others).

    A surprise from well-studied pods

    Interestingly, this new discovery comes from some of the most well-studied and famous orcas in the world – a group known as the southern resident killer whales. If you were a child of the 90s, you would have seen them in the opening scene of Free Willy, the movie which set me on my path to study cetaceans.

    These orcas consist of three pods, known as J, K and L pods. All live in the Salish Sea in the Pacific Northwest, on the border of Canada and the US.

    Researchers fly drones over these resident pods most days and have access to almost 50 years of observations. But this is the first time the tool-making behaviour has been seen.

    Unfortunately, these pods are critically endangered. They’re threatened by sound pollution from shipping, polluted water, vessel strike and loss of their main food source – Chinook salmon.

    A pod of killer whales off Vancouver, Canada.
    Vanessa Pirotta, CC BY-NC-ND

    Orcas are smart

    In one sense, the findings are not a surprise, given the intelligence of these animals.

    In the Antarctic, orcas catch seals by making waves to wash them off ice floes. Before European colonisation, orcas and First Nations groups near Eden hunted whales together.

    They can mimic human speech, while different groups have their own dialects. These animals are awe-inspiring – and sometimes baffling, as when a pod began biting or attacking boats off the Iberian peninsula.

    While orcas are often called “killer whales”, they’re not whales. They’re the biggest species of dolphin, growing up to nine metres long. They’re found across all the world’s oceans.

    Within the species, there’s a surprising amount of diversity. Scientists group orcas into different ecotypes – populations adapted to local conditions. Different orca groups can differ substantially, from size to prey to habits. For instance, transient orcas cover huge distances seeking larger prey, while resident orcas stick close to areas with lots of fish.

    Not just a fluke

    Because orcas differ so much, we don’t know whether other pods have discovered or taught these behaviours.

    But what this research does point to is that tool making may be more common among marine mammals than we expected. No hands – no problem.

    Vanessa Pirotta does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Drone footage captured orcas crafting tools out of kelp – and using them for grooming – https://theconversation.com/drone-footage-captured-orcas-crafting-tools-out-of-kelp-and-using-them-for-grooming-259372

    MIL OSI – Global Reports

  • MIL-OSI Global: Sharks freeze when you turn them upside down – and there’s no good reason why

    Source: The Conversation – Global Perspectives – By Jodie L. Rummer, Professor of Marine Biology, James Cook University

    Rachel Moore

    Imagine watching your favourite nature documentary. The predator lunges rapidly from its hiding place, jaws wide open, and the prey … suddenly goes limp. It looks dead.

    For some animals, this freeze response – called “tonic immobility” – can be a lifesaver. Possums famously “play dead” to avoid predators. So do rabbits, lizards, snakes, and even some insects.

    But what happens when a shark does it?

    In our recent study, we explored this strange behaviour in sharks, rays and their relatives. In this group, tonic immobility is triggered when the animal is turned upside down – it stops moving, its muscles relax, and it enters a trance-like state. Some scientists even use tonic immobility as a technique to safely handle certain shark species.

    But why does it happen? And does it actually help these marine predators survive?

    The mystery of the ‘frozen shark’

    Despite being well documented across the animal kingdom, the reasons behind tonic immobility remain murky – especially in the ocean. It is generally thought of as an anti-predator defence. But there is no evidence to support this idea in sharks, and alternative hypotheses exist.

    We tested 13 species of sharks, rays, and a chimaera — a shark relative commonly referred to as a ghost shark — to see whether they entered tonic immobility when gently turned upside down underwater.

    Seven species did, but six did not. We then analysed these findings using evolutionary tools to map the behaviour across hundreds of millions of years of shark family history.

    So, why do some sharks freeze?

    Tonic immobility is triggered in sharks when they are turned upside down.
    Rachel Moore

    Three main hypotheses

    There are three main hypotheses to explain tonic immobility in sharks:

    1. Anti-predator strategy – “playing dead” to avoid being eaten
    2. Reproductive role – some male sharks invert females during mating, so perhaps tonic immobility helps reduce struggle
    3. Sensory overload response – a kind of shutdown during extreme stimulation.

    Our results don’t support any of these explanations.

    There’s no strong evidence sharks benefit from freezing when attacked. In fact, modern predators such as orcas can use this response against sharks, flipping them over to immobilise them before removing their nutrient-rich livers – a deadly exploit.

    The reproductive hypothesis also falls short. Tonic immobility doesn’t differ between sexes, and remaining immobile could make females vulnerable to harmful or forced mating events.

    And the sensory overload idea? Untested and unverified. So, we offer a simpler explanation. Tonic immobility in sharks is likely an evolutionary relic.

    A case of evolutionary baggage

    Our evolutionary analysis suggests tonic immobility is “plesiomorphic” – an ancestral trait that was likely present in ancient sharks, rays and chimaeras. But as species evolved, many lost the behaviour.

    In fact, we found that tonic immobility was lost independently at least five times across different groups. Which raises the question: why?

    In some environments, freezing might actually be a bad idea. Small reef sharks and bottom-dwelling rays often squeeze through tight crevices in complex coral habitats when feeding or resting. Going limp in such settings could get them stuck – or worse. That means losing this behaviour might have actually been advantageous in these lineages.

    So, what does this all mean?

    Rather than a clever survival tactic, tonic immobility might just be “evolutionary baggage” – a behaviour that once served a purpose, but now persists in some species simply because it doesn’t do enough harm to be selected against.

    It’s a good reminder that not every trait in nature is adaptive. Some are just historical quirks.

    Our work helps challenge long-held assumptions about shark behaviour, and sheds light on the hidden evolutionary stories still unfolding in the ocean’s depths. Next time you hear about a shark “playing dead”, remember – it might just be muscle memory from a very, very long time ago.

    Jodie L. Rummer receives funding from the Australian Research Council. She is President of the Australian Coral Reef Society.

    Joel Gayford receives funding from the Northcote Trust.

    ref. Sharks freeze when you turn them upside down – and there’s no good reason why – https://theconversation.com/sharks-freeze-when-you-turn-them-upside-down-and-theres-no-good-reason-why-259448

  • MIL-OSI Global: Iran’s internet blackout left people in the dark. How does a country shut down the internet?

    Source: The Conversation – Global Perspectives – By Mohiuddin Ahmed, Senior Lecturer of Computing and Security, Edith Cowan University

    Dylan Carr/Unsplash

    In recent days, Iranians experienced a near-complete internet blackout, with local service providers – including mobile services – repeatedly going offline. Iran’s government has cited cyber security concerns for ordering the shutdown.

    Shutting off the internet within an entire country is a serious action. It severely limits people’s ability to freely communicate and to find reliable information during times of conflict.

    In countries that have privatised mobile and internet providers, control is often exercised through legislation or through government directives – such as age restrictions on adult content. By contrast, Iran has spent years developing the capacity to directly control its telecommunications infrastructure.

    So how can a country have broad control over internet access, and could this happen anywhere in the world?

    How does ‘blocking the internet’ work?

    The “internet” is a broad term. It covers many types of applications, services and, of course, the websites we’re familiar with.

    There’s a range of ways to control access to internet services, but broadly speaking, there are two “simple” methods a nation could use to block citizens’ internet access.

    Hardware

    A nation may opt to physically disconnect the incoming internet connectivity at the point of entry to the country (imagine pulling the plug on a telephone exchange).

    This allows for easy recovery of service when the government is ready, but the impact will be far-reaching. Nobody in the country, including the government itself, will be able to connect to the internet – unless the government has its own additional, covert connectivity to the rest of the world.




    Read more:
    Undersea cables are the unseen backbone of the global internet


    Software and configuration

    This is where it gets more technical. Every internet-connected endpoint – laptop, computer, mobile phone – has an IP (internet protocol) address. They’re strings of numbers; for example, 77.237.87.95 is an address assigned to one of the internet service providers in Iran.

    IP addresses identify the device on the public internet. However, since strings of numbers are not easy to remember, humans use domain names to connect to services – theconversation.com is an example of a domain name.

    That connection between the IP address and the domain is controlled by the domain name system or DNS. It’s possible for a government to control access to key internet services by modifying the DNS – this manipulates the connection between domain names and their underlying numeric addresses.
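
    To illustrate, here is a minimal Python sketch of how a name-to-address lookup works, and why removing or rewriting an entry blocks access by name. The table is hypothetical – real DNS is a distributed, hierarchical system, and the address shown is a placeholder from a documentation range, not the site’s actual IP.

```python
# Toy model of DNS: a lookup table mapping domain names to IP addresses.
# Entries are placeholders for illustration, not real DNS records.
dns_table = {
    "theconversation.com": "198.51.100.7",  # hypothetical address (documentation range)
}

def resolve(domain, table):
    """Return the IP address for a domain, or None if the name cannot be resolved."""
    return table.get(domain)

print(resolve("theconversation.com", dns_table))  # prints the mapped address

# A state-ordered block: the resolver's entry is removed (or rewritten),
# so the name no longer leads anywhere, even though the server itself is still up.
censored_table = {k: v for k, v in dns_table.items() if k != "theconversation.com"}
print(resolve("theconversation.com", censored_table))  # None
```

    Note that this kind of block only breaks the name lookup: a user who already knows the numeric IP address could still reach the server directly, which is why DNS manipulation is often combined with other controls.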

    An additional way to control the internet involves manipulating the traffic flow. IP addresses allow devices to send and receive data across networks controlled by internet service providers. Those networks, in turn, rely on the border gateway protocol (BGP) – think of it as a series of traffic signs that direct internet traffic, allowing data to move around the world.

    Governments could force local internet service providers to remove their BGP routes from the internet. As a result, the devices they service wouldn’t be able to connect to the internet. In the same manner, the rest of the world would no longer be able to “see” into the country.
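
    A similarly simplified sketch shows the effect of a forced BGP withdrawal: once a provider’s prefixes disappear from the global routing table, the rest of the world has no path to its addresses. The table and prefixes below are illustrative only (the first is derived from the example ISP address mentioned above); real BGP exchanges far richer routing information between routers.

```python
# Toy model of BGP: providers "announce" the blocks of IP addresses (prefixes)
# they can deliver traffic to. Illustrative only - not the real protocol.
announced_routes = {
    "77.237.87.0/24": "local-isp",   # prefix containing the article's example address
    "203.0.113.0/24": "other-isp",   # hypothetical second provider
}

def has_route(prefix, routes):
    """Can the rest of the internet reach this prefix?"""
    return prefix in routes

print(has_route("77.237.87.0/24", announced_routes))  # True: traffic can flow

# A government orders the local ISP to withdraw its announcement:
announced_routes.pop("77.237.87.0/24")
print(has_route("77.237.87.0/24", announced_routes))  # False: unreachable from outside
```

    This is why a BGP withdrawal makes a country effectively “vanish” from the internet: nothing is physically unplugged, but no router elsewhere in the world knows a path to its addresses any more.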




    Read more:
    Internet shutdowns: here’s how governments do it


    How common is this?

    In dozens of countries around the world, the internet is either routinely controlled or has been shut down in response to major incidents.

    A recent example is a wide-scale internet blackout in Bangladesh in July 2024 during student-led protests against government job quotas.

    In 2023, Senegal limited internet access to handle violent protests that erupted over the sentencing of a political leader. In 2020, India imposed a lengthy internet blackout on the disputed Himalayan region of Kashmir. In 2011, the Egyptian government withdrew BGP routes to address civil unrest.

    These events clearly show that if a government anywhere in the world wants to turn off the internet, it really can. A country’s democratic health – not its technical capability – is the most significant influence on whether it is willing to take such action.

    However, in today’s world, being disconnected from the internet will heavily impact people’s lives, jobs and the economy. It’s not an action to be taken lightly.

    How can people evade internet controls?

    Virtual private networks or VPNs have long been used to hide communications in countries with strict internet controls, and continue to be an effective internet access method for many people. (However, there are indications Iran has clamped down on VPN use in recent times.)

    However, VPNs won’t help when the internet is physically disconnected. Depending on configuration, if BGP routes are blocked, this may also prevent any VPN traffic from reaching the target.

    This is where independent satellite internet services open up the most reliable alternative. Satellite internet is great for remote and rural areas where traditional internet service providers have yet to establish their cabling infrastructure – or can’t do so.

    Even if traditional wired or wireless internet connections are unavailable, services such as Starlink, Viasat, Hughesnet and others can provide internet access through satellites orbiting Earth.

    To use satellite internet, users rely on antenna kits supplied by providers. In Iran, Elon Musk’s Starlink was activated during the blackout, and independent reports suggest there are thousands of Starlink receivers secretly operating in the country.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Iran’s internet blackout left people in the dark. How does a country shut down the internet? – https://theconversation.com/irans-internet-blackout-left-people-in-the-dark-how-does-a-country-shut-down-the-internet-259546

  • MIL-OSI Global: Hauntingly familiar? Why comparing the US strikes on Iran to Iraq in 2003 is off target

    Source: The Conversation – Global Perspectives – By Benjamin Isakhan, Professor of International Politics, Deakin University

    On June 21, the United States launched airstrikes on three Iranian nuclear facilities – Fordow, Natanz and Isfahan – pounding deeply buried centrifuge sites with bunker-busting bombs.

    Conducted jointly with Israel, the operation took place without formal congressional authorisation, drawing sharp criticism from lawmakers that it was unconstitutional and “unlawful”.




    Read more:
    Why the US strikes on Iran are illegal and can set a troubling precedent


    Much of the political debate has centred on whether the US is being pulled into “another Middle East war”.

    The New York Times’ Nick Kristof weighed in on the uncertainties following the US’ surprise bombing of Iran and Tehran’s retaliation.

    Even US Vice President JD Vance understood the unease, stating:

    People are right to be worried about foreign entanglement after the last 25 years of idiotic foreign policy.

    These reactions have revived comparisons with George W. Bush’s 2003 invasion of Iraq: a Republican president launching military action on the basis of flimsy weapons of mass destruction (WMD) evidence.

    Hauntingly familiar?

    While the surface similarity is tempting, the comparison may in fact obscure more about President Donald Trump than it reveals.

    Comparisons to the Iraq War

    In 2003, Bush ordered a full-scale invasion of Iraq based on flawed intelligence, claiming Iraqi dictator Saddam Hussein possessed WMDs. And while the war was extremely unpopular across the world, it did have bipartisan congressional support.

    The invasion toppled Iraq’s regime in just a few weeks.

    What followed was a brutal conflict and almost a decade of US occupation. The war triggered the rise of militant jihadism and a horrific sectarian conflict that reverberates today.

    So far, Trump’s one-off strikes on Iran bear little resemblance to the 2003 Iraq intervention.

    These were precision strikes within the context of a broader Iran-Israel war, designed to target Iran’s nuclear program.

    And, so far, there appears to be little appetite for a full-scale military invasion or “boots on the ground”, and regime change seems unlikely despite some rumblings from both Trump and Israeli Prime Minister Benjamin Netanyahu.

    Yet the comparison to Iraq persists, especially among audiences suspicious of repeated US military interventions in the Middle East. But poorly considered analogies carry costs.

    For one, the Iraq comparison sheds little light on Trump’s foreign policy.




    Read more:
    The US has entered the Israel-Iran war. Here are 3 scenarios for what might happen next


    Trump’s foreign policy

    To better understand the recent strikes on Iran, we need to look at Trump’s broader foreign policy.

    Much has been made of his “America first” mantra, a complex mix of prioritising domestic interests, questioning international agreements, and challenging traditional alliances.

    Others, including Trump himself, have often touted his “no war” approach, pointing to large-scale military withdrawals from Afghanistan, Syria and Iraq, and the fact he had not started a new war.

    But beyond this, Trump has increased US military spending and frequently used his office to conduct targeted strikes on adversaries – especially across the Middle East.

    For example, in 2017 and 2018, Trump ordered airstrikes on a Syrian airbase and chemical weapons facilities. In both instances, he bypassed Congress and used precision air power to target weapons infrastructure without pursuing regime change.

    Also, from 2017 to 2021, Trump authorised US support for the Saudi-led war in Yemen, enabling airstrikes that targeted militant cells but also led to mass civilian casualties.

    Trump’s policy was the subject of intense bipartisan opposition, culminating in the first successful congressional invocation of the War Powers Resolution – though it was ultimately vetoed by Trump.

    And in 2020, Trump launched a sequence of attacks on Iranian assets in Iraq. This included a drone strike that killed senior Iranian military commander Qassem Soleimani.

    Again, these attacks were conducted without congressional support. The decision triggered intense bipartisan backlash and concerns about escalation without oversight.

    While such attacks are not without precedent – think back to former US President Barack Obama’s intervention in Libya or Joe Biden’s targeting of terrorist assets – the scale and ferocity of Trump’s attacks in the Middle East are a far more useful framework for understanding the recent strikes on Iran than any reference to the 2003 Iraq war.

    What this reveals about Trump

    It is crucial to scrutinise any use of force. But while comparing the 2025 Iran strikes to Iraq in 2003 may be rhetorically powerful, it is analytically weak.

    A better path is to situate these events within Trump’s broader political style.

    He acts unilaterally and with near-complete impunity, disregarding traditional constraints and operating outside established norms and oversight.

    This is just as true for attacks on foreign adversaries as it is for the domestic policy arena.

    For example, Trump recently empowered agencies such as Immigration and Customs Enforcement (ICE) to operate with sweeping discretion in immigration enforcement, bypassing legal and judicial oversight.

    Trump also uses policy as spectacle, designed to send shockwaves through the domestic or foreign arenas and project dominance to both friend and foe.

    In this way, Trump’s dramatic attacks on Iran have some parallels to his unilateral imposition of tariffs on international trade. Both are abrupt, disruptive and framed as a demonstration of strength rather than a way to create a mutually beneficial solution.

    Finally, Trump is more than willing to use force as an instrument of power rather than as a last resort. This is just as true for Iran as it is for the US people.

    The recent deployment of US Marines to quell protests in Los Angeles reveals a similar impulse: military intervention as a first instinct in the absence of a broader strategy to foster peace.

    To truly understand and respond to Trump’s Iran strikes, we need to move beyond sensationalist analogies and recognise a more dangerous reality. This is not the start of another Iraq; it’s the continuation of a presidency defined by impulsive power, unchecked force and a growing disdain for democratic constraint.

    Benjamin Isakhan receives funding from the Australian Research Council and the Australian Department of Defence. The views expressed in this article do not reflect those of Government policy.

    ref. Hauntingly familiar? Why comparing the US strikes on Iran to Iraq in 2003 is off target – https://theconversation.com/hauntingly-familiar-why-comparing-the-us-strikes-on-iran-to-iraq-in-2003-is-off-target-259668

  • MIL-OSI Global: Will the fragile ceasefire between Iran and Israel hold? One factor could be crucial to it sticking

    Source: The Conversation – Global Perspectives – By Ali Mamouri, Research Fellow, Middle East Studies, Deakin University

    Amir Levy/Getty Images

    After 12 days of war, US President Donald Trump announced a ceasefire between Israel and Iran that would bring to an end the most dramatic, direct conflict between the two nations in decades.

    Israel and Iran both agreed to adhere to the ceasefire, though they said they would respond with force to any breach.

    If the ceasefire holds – a big if – the key question will be whether this signals the start of lasting peace, or merely a brief pause before renewed conflict.

    As contemporary war studies show, peace tends to endure under one of two conditions: either the total defeat of one side, or the establishment of mutual deterrence. This means both parties refrain from aggression because the expected costs of retaliation far outweigh any potential gains.

    What did each side gain?

    The war has marked a turning point for Israel in its decades-long confrontation with Iran. For the first time, Israel successfully brought a prolonged battle to Iranian soil, shifting the conflict from confrontations with Iranian-backed proxy militant groups to direct strikes on Iran itself.

    This was made possible largely due to Israel’s success over the past two years in weakening Iran’s regional proxy network, particularly Hezbollah in Lebanon and Shiite militias in Syria.

    Over the past two weeks, Israel has inflicted significant damage on Iran’s military and scientific elite, killing several high-ranking commanders and nuclear scientists. The civilian toll was also high.

    Additionally, Israel achieved a major strategic objective by pulling the United States directly into the conflict. In coordination with Israel, the US launched strikes on three of Iran’s primary nuclear facilities: Fordow, Natanz and Isfahan.

    Despite these gains, Israel has not accomplished all of its stated goals. Prime Minister Benjamin Netanyahu had voiced support for regime change, urging Iranians to rise up against Supreme Leader Ali Khamenei’s government, but the senior leadership in Iran remains intact.

    Additionally, Israel has not fully eliminated Iran’s missile program. (Iran continued striking to the last minute before the ceasefire.) And Tehran did not acquiesce to Trump’s pre-war demand to end uranium enrichment.

    Although Iran was caught off-guard by Israel’s attacks — particularly as it was engaged in nuclear negotiations with the US — it responded by launching hundreds of missiles towards Israel.

    While many were intercepted, a significant number penetrated Israeli air defences, causing widespread destruction in major cities, dozens of fatalities and hundreds of injuries.

    Iran has demonstrated its capacity to strike back, though Israel has succeeded in destroying many of its air defence systems, some ballistic missile assets (including missile launchers) and multiple energy facilities.

    Since the beginning of the assault, Iranian officials have repeatedly called for a halt to hostilities so negotiations can resume. Under such intense pressure, Iran has realised it would not benefit from a prolonged war of attrition with Israel — especially as both nations face mounting costs and the risk of depleting their military stockpiles if the war continues.

    As theories of victory suggest, success in war is defined not only by the damage inflicted, but by achieving core strategic goals and weakening the enemy’s will and capacity to resist.

    While Israel claims to have achieved the bulk of its objectives, the extent of the damage to Iran’s nuclear program is not fully known, nor is its capacity to continue enriching uranium.

    Both sides could remain locked in a volatile standoff over Iran’s nuclear program, with the conflict potentially reigniting whenever either side perceives a strategic opportunity.

    Sticking point over Iran’s nuclear program

    Iran faces even greater challenges when it emerges from the war. With a heavy toll on its leadership and nuclear infrastructure, Tehran will likely prioritise rebuilding its deterrence capability.

    That includes acquiring new advanced air defence systems — potentially from China — and restoring key components of its missile and nuclear programs. (Some experts say Iran has not used some of its most powerful missiles to maintain this deterrence.)

    Iranian officials have claimed they safeguarded more than 400 kilograms of 60% enriched uranium before the attacks. This stockpile could theoretically be converted into nine to ten nuclear warheads if further enriched to 90%.

    Trump declared Iran’s nuclear capacity had been “totally obliterated”, whereas Rafael Grossi, the United Nations’ nuclear watchdog chief, said damage to Iran’s facilities was “very significant”.

    However, analysts have argued Iran will still have a depth of technical knowledge accumulated over decades. Depending on the extent of the damage to its underground facilities, Iran could be capable of restoring and even accelerating its program in a relatively short time frame.

    And the chances of reviving negotiations on Iran’s nuclear program appear slimmer than ever.

    What might future deterrence look like?

    The war has fundamentally reshaped how both Iran and Israel perceive deterrence — and how they plan to secure it going forward.

    For Iran, the conflict has reinforced the belief that its survival is at stake. With regime change openly discussed during the war, Iran’s leaders appear more convinced than ever that true deterrence requires two key pillars: nuclear weapons capability, and deeper strategic alignment with China and Russia.

    As a result, Iran is expected to move rapidly to restore and advance its nuclear program, potentially moving towards actual weaponisation — a step it had long avoided, officially.

    At the same time, Tehran is likely to accelerate military and economic cooperation with Beijing and Moscow to hedge against isolation. Iranian Foreign Minister Abbas Araghchi emphasised this close engagement with Russia during a visit to Moscow this week, particularly on nuclear matters.

    Israel, meanwhile, sees deterrence as requiring constant vigilance and a credible threat of overwhelming retaliation. In the absence of diplomatic breakthroughs, Israel may adopt a policy of immediate preemptive strikes on Iranian facilities or leadership figures if it detects any new escalation — particularly related to Iran’s nuclear program.

    In this context, the current ceasefire already appears fragile. Without comprehensive negotiations that address the core issues — namely, Iran’s nuclear capabilities — the pause in hostilities may prove temporary.

    Mutual deterrence may prevent a more protracted war for now, but the balance remains precarious and could collapse with little warning.

    Ali Mamouri does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Will the fragile ceasefire between Iran and Israel hold? One factor could be crucial to it sticking – https://theconversation.com/will-the-fragile-ceasefire-between-iran-and-israel-hold-one-factor-could-be-crucial-to-it-sticking-259669

  • MIL-OSI Global: The war won’t end Iran’s nuclear program – it will drive it underground, following North Korea’s model

    Source: The Conversation – Global Perspectives – By Anthony Burke, Professor of Environmental Politics & International Relations, UNSW Sydney

    The United States’ and Israel’s strikes on Iran are concerning, and not just for the questionable legal justifications provided by both governments.

    Even if their attacks cause severe damage to Iran’s nuclear facilities, this will only harden Iran’s resolve to acquire a bomb.

    And if Iran follows through on its threat to pull out of the Treaty on the Nonproliferation of Nuclear Weapons (NPT), this will gravely damage the global nuclear nonproliferation regime.

    In a decade of international security crises, this could be the most serious. Is there still time to prevent this from happening?

    A successful but vulnerable treaty

    In May 2015, I attended the five-yearly review conference of the NPT. Delegates debated a draft outcome for weeks, and then, not for the first time, went home with nothing. Delegates from the US, United Kingdom and Canada blocked the final outcome to prevent words being added that would call for Israel to attend a disarmament conference.

    Russia did the same in 2022 in protest at language on its illegal occupation of the Zaporizhzhia nuclear power station in Ukraine.

    Now, in the latest challenge to the NPT, Israel and the US have bombed Iran’s nuclear complexes to ostensibly enforce a treaty neither one respects.

    When the treaty was adopted in 1968, it allowed the five nuclear-armed states at the time – the US, Soviet Union, France, UK and China – to join if they committed not to pass weapons or material to other states, and to disarm themselves.

    All other members had to pledge never to acquire nuclear weapons. Newer nuclear powers were not permitted to join unless they gave up their weapons.

    Israel declined to join, as it had developed its own undeclared nuclear arsenal by the late 1960s. India, Pakistan and South Sudan have also never signed; North Korea was a member but withdrew in 2003. Only South Sudan does not have nuclear weapons today.

    To make the obligations enforceable and strengthen safeguards against the diversion of nuclear material to non-nuclear weapons states, members were later required to sign the IAEA Additional Protocol. This gave the International Atomic Energy Agency (IAEA) wide powers to inspect a state’s nuclear facilities and detect violations.

    It was the IAEA that first blew the whistle on Iran’s concerning uranium enrichment activity in 2003. Just before Israel’s attacks this month, the organisation also reported Iran was in breach of its obligations under the NPT for the first time in two decades.

    The NPT is arguably the world’s most universal, important and successful security treaty, but it is also paradoxically vulnerable.

    The treaty’s underlying consensus has been damaged by the failure of the five nuclear-weapon states to disarm as required, and by the failure to prevent North Korea from developing a now formidable nuclear arsenal.

    North Korea withdrew from the treaty in 2003, tested a weapon in 2006, and now may have up to 50 warheads.

    Iran could be next.

    How things can deteriorate from here

    Iran argues Israel’s attacks have undermined the credibility of the IAEA, given Israel used the IAEA’s new report on Iran as a pretext for its strikes, taking the matter out of the hands of the UN Security Council.

    For its part, the IAEA has maintained a principled position and criticised both the US and Israeli strikes.

    Iran has retaliated with its own missile strikes against both Israel and a US base in Qatar. In addition, it wasted no time announcing it would withdraw from the NPT.

    On June 23, an Iranian parliament committee also approved a bill that would fully suspend Iran’s cooperation with the IAEA, including allowing inspections and submitting reports to the organisation.

    Iran’s envoy to the IAEA, Reza Najafi, said the US strikes:

    […] delivered a fundamental and irreparable blow to the international non-proliferation regime conclusively demonstrating that the existing NPT framework has been rendered ineffective.

    Even if Israel and the US consider their bombing campaign successful, it has almost certainly renewed the Iranians’ resolve to build a weapon. The strikes may only delay an Iranian bomb by a few years.

    Iran will have two paths to do so. The slower path would be to reconstitute its enrichment activity and obtain, from Russia or North Korea, nuclear implosion designs that create extremely devastating weapons.

    Alternatively, Russia could send Iran some of its weapons. This should be a real concern given Moscow’s cascade of withdrawals from critical arms control agreements over the last decade.

    An Iranian bomb could then trigger NPT withdrawals by other regional states, especially Saudi Arabia, who suddenly face a new threat to their security.

    Why Iran might now pursue a bomb

    Iran’s support for Hamas, Hezbollah and Syria’s Assad regime certainly shows it is a dangerous international actor. Iranian leaders have also long used alarming rhetoric about Israel’s destruction.

    However repugnant the words, Israeli and US conservatives have misjudged Iran’s motives in seeking nuclear weapons.

    Israel fears an Iranian bomb would be an existential threat to its survival, given Iran’s promises to destroy it. But this neglects the fact that Israel already possesses a potent (if undeclared) nuclear deterrent capability.

    Israeli anxieties about an Iranian bomb should not be dismissed. But other analysts (myself included) see Iran’s desire for nuclear weapons capability more as a way to establish deterrence – preventing future military attacks from Israel and the US, and protecting its regime.

    Iranians were shaken by Iraq’s invasion in 1980 and then again by the US-led removal of Iraqi dictator Saddam Hussein in 2003. This war with Israel and the US will shake them even more.

    Last week, I felt that if the Israeli bombing ceased, a new diplomatic effort to bring Iran into compliance with the IAEA and persuade it to abandon its program might have a chance.

    However, the US strikes may have buried that possibility for decades. And by then, the damage to the nonproliferation regime could be irreversible.

    Anthony Burke received funding from the UK’s Economic and Social Research Council for a project on global nuclear governance (2014–17).

    ref. The war won’t end Iran’s nuclear program – it will drive it underground, following North Korea’s model – https://theconversation.com/the-war-wont-end-irans-nuclear-program-it-will-drive-it-underground-following-north-koreas-model-259281

    MIL OSI – Global Reports

  • MIL-OSI Global: How old are you really? Are the latest ‘biological age’ tests all they’re cracked up to be?

    Source: The Conversation – Global Perspectives – By Hassan Vally, Associate Professor, Epidemiology, Deakin University

    We all like to imagine we’re ageing well. Now a simple blood or saliva test promises to tell us by measuring our “biological age”. And then, as many have done, we can share how “young” we really are on social media, along with our secrets to success.

    While chronological age is how long you have been alive, measures of biological age aim to indicate how old your body actually is, purporting to measure “wear and tear” at a molecular level.

    The appeal of these tests is undeniable. Health-conscious consumers may see their results as reinforcing their anti-ageing efforts, or a way to show their journey to better health is paying off.

    But how good are these tests? Do they actually offer useful insights? Or are they just clever marketing dressed up to look like science?

    How do these tests work?

    Over time, the chemical processes that allow our body to function, known as our “metabolic activity”, lead to damage and a decline in the activity of our cells, tissues and organs.

    Biological age tests aim to capture some of these changes, offering a snapshot of how well, or how poorly, we are ageing on a cellular level.

    Our DNA is also affected by the ageing process. In particular, chemical tags (methyl groups) attach to our DNA in a process called methylation, affecting how genes are expressed. These changes occur in predictable ways with age and environmental exposures.

    Research studies have used “epigenetic clocks”, which measure the methylation of our genes, to estimate biological age. By analysing methylation levels at specific sites in the genome from participant samples, researchers apply predictive models to estimate the cumulative wear and tear on the body.
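    In published clocks, the predictive model is typically a simple weighted sum: each methylation site has a coefficient learned from training data, and the weighted values plus an intercept give the age estimate. The toy Python sketch below illustrates the arithmetic only; the site names, weights and intercept are invented for illustration, whereas real clocks (such as Horvath’s) combine hundreds of CpG sites with coefficients fitted to large cohorts.

```python
# Toy sketch of how an epigenetic clock estimates biological age.
# All site names and numbers below are invented for illustration only.

# Hypothetical methylation levels (beta values, between 0 and 1) at a few sites
sample = {"cg0001": 0.62, "cg0002": 0.15, "cg0003": 0.80}

# Hypothetical fitted model: an intercept plus one weight per site
intercept = 30.0
weights = {"cg0001": 25.0, "cg0002": -12.0, "cg0003": 8.5}

def predict_age(methylation, intercept, weights):
    """Weighted sum of methylation levels -> estimated biological age in years."""
    return intercept + sum(weights[site] * methylation[site] for site in weights)

print(round(predict_age(sample, intercept, weights), 1))  # prints 50.5
```

    The model itself is deliberately simple; the scientific work lies in choosing which sites to measure and fitting the coefficients against large reference populations.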

    What does the research say about their use?

    Although the science is rapidly evolving, the evidence underpinning the use of epigenetic clocks to measure biological ageing in research studies is strong.

    Studies have shown epigenetic biological age estimation is a better predictor of the risk of death and ageing-related diseases than chronological age.

    Epigenetic clocks also have been found to correlate strongly with lifestyle and environmental exposures, such as smoking status and diet quality.

    In addition, they have been found to be able to predict the risk of conditions such as cardiovascular disease, which can lead to heart attacks and strokes.

    Taken together, a growing body of research indicates that at a population level, epigenetic clocks are robust measures of biological ageing and are strongly linked to the risk of disease and death.

    But how good are these tests for individuals?

    While these tests are valuable when studying populations in research settings, using epigenetic clocks to measure the biological age of individuals is a different matter and requires scrutiny.

    For testing at an individual level, perhaps the most important consideration is the “signal to noise ratio” (or precision) of these tests. This comes down to whether repeated measurements of the same sample from an individual yield widely differing results.

    A study from 2022 found estimates from identical samples deviated by up to nine years. So the same sample from a 40-year-old may indicate a biological age as low as 35 years (a cause for celebration) or as high as 44 years (a cause for anxiety).
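    To see how measurement noise alone can produce this kind of spread, consider a toy simulation. The noise level below is an assumption chosen for illustration, not a measured property of any commercial test:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

true_biological_age = 40.0
noise_sd = 2.3  # hypothetical technical noise, in years

# Ten repeated tests of the same sample: identical biology, different readouts
estimates = [true_biological_age + random.gauss(0, noise_sd) for _ in range(10)]

print(f"range of estimates: {min(estimates):.1f} to {max(estimates):.1f} years")
```

    Even with modest noise, replicate estimates routinely span several years, which is why a single reading says little about whether an individual’s lifestyle changes are paying off.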

    While these tests have improved significantly over the years, their precision still varies considerably between commercial providers. So depending on who you send your sample to, your estimated biological age may differ markedly.

    Another limitation is there is currently no standardisation of methods for this testing. Commercial providers perform these tests in different ways and have different algorithms for estimating biological age from the data.

    As you would expect for commercial operators, providers don’t disclose their methods. So it’s difficult to compare companies and determine who provides the most accurate results – and what you’re getting for your money.

    A third limitation is that while epigenetic clocks correlate well with ageing, they are simply a “proxy” and are not a diagnostic tool.

    In other words, they may provide a general indication of ageing at a cellular level. But they don’t offer any specific insights about what the issue may be if someone is found to be “ageing faster” than they would like, or what they’re doing right if they are “ageing well”.

    So regardless of the result of your test, all you’re likely to get from the commercial provider of an epigenetic test is generic advice about what the science says is healthy behaviour.

    Are they worth it? Or what should I do instead?

    While companies offering these tests may have good intentions, remember their ultimate goal is to sell you these tests and make a profit. And at a cost of around A$500, they’re not cheap.

    While the idea of using these tests as a personalised health tool has potential, it is clear that we are not there yet.

    For this to become a reality, tests will need to become more reproducible, standardised across providers, and validated through long-term studies that link changes in biological age to specific behaviours.

    So while one-off tests of biological age make for impressive social media posts, for most people they represent a significant cost and offer limited real value.

    The good news is we already know what we need to do to increase our chances of living longer and healthier lives. These include:

    • improving our diet
    • increasing physical activity
    • getting enough sleep
    • quitting smoking
    • reducing stress
    • prioritising social connection.

    We don’t need to know our biological age in order to implement changes in our lives right now to improve our health.

    Hassan Vally does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How old are you really? Are the latest ‘biological age’ tests all they’re cracked up to be? – https://theconversation.com/how-old-are-you-really-are-the-latest-biological-age-tests-all-theyre-cracked-up-to-be-257710

    MIL OSI – Global Reports

  • MIL-OSI Global: Ibn Battuta, a 14th-century judge and ambassador, travelled further than Marco Polo. The Rihla records his adventures

    Source: The Conversation – Global Perspectives – By Ismail Albayrak, Professor of Islam and Catholic Muslim Relations, Australian Catholic University

    In our guides to the classics, experts explain key literary works.

    Ibn Battuta was born in Tangier, Morocco, on February 24, 1304. Judging from a statement in his celebrated travel book the Rihla (“legal affairs are my ancestral profession”), he evidently came from an intellectually distinguished family.

    According to the Rihla (travelogue), Ibn Battuta embarked on his travels from Tangier at the age of 22 with the intention of performing the Hajj (the sacred pilgrimage to Mecca) in 1325. Although he returned to Fez (his adopted home-town) around the end of 1349, he continued to visit various regions, including Granada and Sudan, in subsequent years.

    Over the course of his almost 30 years of travel, Ibn Battuta covered an astonishing distance of approximately 73,000 miles (117,000 kilometres), visiting a region that today encompasses more than 50 countries. His journeys covered much of the medieval Islamic world and beyond, excluding Northern Europe.

    In 1355, he returned to Morocco for the last time and remained there for the rest of his life. Upon his return he dictated his experiences, observations and anecdotes to the Andalusian scholar Ibn Juzayy, with a compilation of his travels completed in 1355 or 1356.

    The work, formally titled A Gift to Researchers on the Curiosities of Cities and the Marvels of Journeys, is more commonly referred to as Rihlat Ibn Battuta or simply Rihla.

    A painting of Ibn Battuta (on right) in Egypt by Leon Benett.
    Wikimedia Commons, CC BY

    More than a travelogue or geographical record, this book provides rich insights into 14th-century social and political life, capturing cultural diversity across nations. Ibn Battuta details local lifestyles, linguistic traits, beliefs, clothing, cuisines, holidays, artistic traditions and gender relations, as well as commercial activities and currencies.

    His observations also include geographical features such as mountains, rivers and agricultural products. Notably, the work highlights his encounters with over 60 sultans and more than 2,000 prominent figures, making it a valuable historical resource.

    The travels

    His travels began after a dream. According to Ibn Battuta, one night, while in Fuwwa, a town near Alexandria in Egypt, he dreamed of flying on a massive bird across various lands, landing in a dark, greenish country.

    To test the local sheikh’s mystical knowledge, he decided that if the sheikh knew of his dream, he was truly extraordinary. The next morning, after leading the dawn prayer, he saw the sheikh bid farewell to visitors. Later, the sheikh astonishingly revealed knowledge of Ibn Battuta’s dream and prophesied his pilgrimage through Yemen, Iraq, Turkey and India.

    At the time, the Middle East was under the rule of the Mamluk sultanate, Anatolia was divided among principalities, the Mongol Ilkhanate controlled Iran, the Chagatai Khanate held Central Asia, and the Delhi sultanate ruled much of the Indian subcontinent.

    Ibn Battuta initially travelled through North Africa, Egypt, Palestine and Syria, completing his first Hajj in 1326.

    He then visited Iraq and Iran, returning to Mecca. In 1328, he explored East Africa, reaching Mogadishu, Mombasa, Sudan and Kilwa (modern Tanzania), as well as Yemen, Oman and Anatolia, where he documented cities like Alanya, Konya, Erzurum, Nicaea and Bursa.

    His descriptions are vivid. Describing the city of Dimyat, on the bank of the Nile, he says:

    Many of the houses have steps leading down to the Nile. Banana trees are especially abundant there, and their fruit is carried to Cairo in boats. Its sheep and goats are allowed to pasture at liberty day and night, and for this reason the saying goes of Dimyat, ‘Its wall is a sweetmeat and its dogs are sheep’. No one who enters the city may afterwards leave it except by the governor’s seal […]

    Farmland on the banks of the Nile river today.
    Alice-D/shutterstock

    When it comes to Anatolia (in modern-day Turkey), he declares:

    This country, known as the Land of Rum, is the most beautiful in the world. While Allah Almighty has distributed beauty to other lands separately, He has gathered them all here. The most beautiful and well-dressed people live in this land, and the most delicious food is prepared here […] From the moment we arrived, our neighbors — both men and women — showed great concern for our wellbeing. Here, women do not shy away from men; when we departed, they bid us farewell as if we were family, expressing their sadness through tears.

    A judge and husband

    In 1332, Ibn Battuta met the Byzantine Emperor Andronikos III Palaiologos.
    Wikimedia Commons, CC BY

    Since Ibn Battuta dictated his work, it’s difficult to assess the extent of the scribe’s influence in recording his narratives. Despite being an educated man, he occasionally narrates like a commoner and sometimes exceeds the bounds of polite language. At times, he provides excessive detail, giving the impression he may be quoting from sources beyond his own observations.

    Nevertheless, the Rihla stands out for its engaging style and captivating anecdotes, drawing readers in.

    Ibn Battuta later journeyed through Crimea, Central Asia, Khwarezm (a large oasis region in the territories of present-day Turkmenistan and Uzbekistan), Bukhara (a city in Uzbekistan), and the Hindu Kush Mountains. In 1332, he met Byzantine Emperor Andronikos III Palaiologos and travelled to Istanbul with the caravan of Uzbek Khan’s third wife. He mentions a caravan that even has a market:

    Whenever the caravan halted, food was cooked in great brass cauldrons, called dasts, and supplied from them to the poorer pilgrims and those who had no provisions. […] This caravan contained also animated bazaars and great supplies of luxuries and all kinds of food and fruit. They used to march during the night and light torches in front of the file of camels and litters, so that you saw the countryside gleaming with light and the darkness turned into radiant day.

    Ibn Battuta arrived in Delhi in 1333, where he served as a judge under Sultan Muhammad bin Tughluq for seven years. He married or was married to local women in many of the places he stayed. Among his wives were ordinary people as well as the daughters of the administrative class.

    Miniature painting in Mughal style depicting the court of Muhammad bin Tughluq.
    Wikimedia Commons, CC BY

    The Sultan’s generosity, intelligence and unconventional ruling style both impressed and surprised Ibn Battuta. However, Muhammad bin Tughluq was known for making excessively harsh and abrupt decisions at times, which led Ibn Battuta to approach him with caution. Nevertheless, with the Sultan’s support, he remained in India for a long time and was eventually chosen as an ambassador to China in 1341.

    In 1345 his mission was disrupted when his ship capsized off the coast of Calicut (then known as Sadqawan) in the Indian Ocean. Though he survived, he lost most of his possessions.

    After the incident, he remained in India for a while before continuing his journey by other means. During this period, he travelled through India, Sri Lanka and the Maldives. He served as a judge in the latter for one and a half years. In 1345, he journeyed to China via Bengal, Burma and Sumatra, reaching the city of Guangzhou but limiting his exploration to the southern coast.

    He was among the first Arab travellers to record Islam’s spread in the Malay Archipelago, noting interactions between Muslims and Hindu-Buddhist communities. Visiting Java and Sumatra, he praised Sultan Malik al-Zahir of Sumatra as a generous, pious and scholarly ruler and highlighted his rare practice of walking to Friday prayers.

    On his return, Ibn Battuta explored regions such as Iran, Iraq, North Africa, Spain and the Kingdom of Mali, documenting the vast Islamic world.

    Back in his homeland, Ibn Battuta served as a judge in several locations. He died around 1368-9 while serving as a judge in Morocco and was buried in his birthplace, Tangier.

    Historic copy of selected parts of the Travel Report by Ibn Battuta, 1836 CE, Cairo.
    Wikimedia Commons, CC BY

    The status of women

    Ibn Battuta’s travels revealed intriguing insights into the status of women across regions. In inner West Africa, he observed matriarchal practices where lineage and inheritance were determined by the mother’s family.

    Among Turks, women rode horses like raiders, traded actively and did not veil their faces.

    In the Maldives, husbands leaving the region had to abandon their wives. He noted that Muslim women there, including the ruling woman, did not cover their heads. As a judge, he attempted to enforce the hijab, but failed.

    He offers fascinating insights into food cultures. In Siberia, sled dogs were fed before humans. He described 15-day wedding feasts in India.

    He tried local produce such as mango in the Indian subcontinent, which he compared to an apple, and sun-dried, sliced fish in Oman.

    Religious practices

    Ibn Battuta’s accounts of the Hajj (pilgrimage) rituals he performed six times provide a unique perspective. He references a fatwa by Ibn Taymiyyah, a prominent Islamic scholar and theologian known for his opposition to theological innovations and his critiques of Sufism and philosophy, advising against shortening prayers for those travelling to Medina.

    Ibn Battuta’s accounts, particularly regarding the Iranian region, offer important perspectives into religious sects during a period when Iran started shifting from Sunnism to Shiism. He describes societies with diverse demographics, including Persians, Azeris, Kurds, Arabs and Baluchis. His observations on religious practices are especially significant.

    Inclined toward Sufism, Ibn Battuta often dressed like a dervish during his travels. He offers a compelling view of Islamic mysticism. He considered regions like Damascus as places of abundance and Anatolia as a land of compassion, interpreting them with a spiritual perspective.

    His accounts of Sufi education, dervish lodges, zawiyas (similar to monasteries), and tombs, along with the special invocations of Sufi masters, are important historical records. He also observed and documented unique practices, such as the followers of the Persian Sufi saint Sheikh Qutb al-Din Haydar wearing iron rings on their hands, necks, ears, and even private parts to avoid sexual intercourse.

    While Ibn Battuta primarily visited Muslim lands, he also travelled to non-Muslim territories, offering key understandings into different religious cultures, for instance interactions between Crimean Muslims and Christian Armenians in the Golden Horde region.

    He also documented churches, icons and monasteries, such as the tomb of the Virgin Mary in Jerusalem. His observation of Muslims openly reciting the call to prayer (adhan) in China is significant.

    Other anecdotes include the division of the Umayyad Mosque in Damascus into a mosque and Christian church. Most importantly, his encounters with Hindus and Buddhists in the Indian subcontinent and Malay Islands provide rich historical context.

    Umayyad Mosque, Damascus.
    eyetravelphotos/shutterstock

    His accounts of death rituals reveal diverse practices. In Sinop (a city in Turkey), 40 days of mourning were declared for a ruler’s mother, while in Iran, a funeral resembled a wedding celebration. He observed similarities in cremation practices between India and China and described a chilling custom in some regions where slaves and concubines were buried alive with the deceased.

    Ibn Battuta’s Rihla, widely translated into Eastern and Western languages, has drawn some criticism for containing depictions that sometimes diverge from historical continuity or borrow from other works. Ibn Battuta himself admitted to using earlier travel books as references.

    Despite limited recognition in older sources, the Rihla gained prominence in the West in the 19th century. His legacy remains vibrant today. Morocco declared 1996–1997 the “Year of Ibn Battuta,” and established a museum in Tangier to honour him. In Dubai, a mall is named after him.

    Notably, Ibn Battuta travelled to more destinations than Marco Polo and shared a broader range of humane anecdotes, showcasing the depth and diversity of his experiences.

    Ismail Albayrak does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Ibn Battuta, a 14th-century judge and ambassador, travelled further than Marco Polo. The Rihla records his adventures – https://theconversation.com/ibn-battuta-a-14th-century-judge-and-ambassador-travelled-further-than-marco-polo-the-rihla-records-his-adventures-246148

    MIL OSI – Global Reports

  • MIL-OSI Global: Why have athletes stopped ‘taking a knee’?

    Source: The Conversation – Global Perspectives – By Ciprian N. Radavoi, Associate Professor in Law, University of Southern Queensland

    Eli Harold, Colin Kaepernick and Eric Reid of the San Francisco 49ers kneel ahead of a game in 2016. Michael Zagaris/San Francisco 49ers/Getty Images

    It’s almost a decade since San Francisco 49ers quarterback Colin Kaepernick started a worldwide trend and sparked fierce debate when he knelt during the US national anthem.

    In 2016, Kaepernick refused to follow the pre-game protocol related to the national anthem and knelt instead, saying:

    I am not going to stand up to show pride in a flag for a country that oppresses black people and people of colour.

    Soon, many athletes and teams began “taking a knee” at sports events to express their solidarity with victims of racial injustice.

    Now, they appear to have stopped, which prompted us to research the decline.

    Initial widespread support

    Following the intense public debate over the appropriateness of Kaepernick’s act, the ritual quickly spread worldwide, with athletes in major soccer leagues, cricket, rugby, Formula 1, top-tier tennis and the US’s Major League Baseball and National Basketball Association taking a knee.

    Athletes didn’t always kneel during national anthems, with the majority kneeling at certain points pre-game.

    Despite the occasional “defection” of a small number of players who would stand while their teammates knelt – such as Israel Folau in rugby league, Wilfried Zaha in soccer and Quinton de Kock in cricket – the ritual was widely embraced by teams and athletes and helped raise awareness of the issue.

    Even major sports organisations notorious for prohibiting any type of political activism generally accepted the kneeling ritual. For example, soccer’s International Football Federation (FIFA) showcased kneeling as a “stand against discrimination” and as human rights advocacy.

    The International Olympic Committee (IOC) initially stood firm by its Rule 50, which states “no kind of demonstration or political, religious, or racial propaganda is permitted in any Olympic sites, venues or other areas”.

    But just three weeks before the 2021 Olympic and Paralympic Games in Tokyo, the IOC relaxed its interpretation, and athletes were permitted to express their views in ways that included taking a knee.

    A surprising turn of events

    Despite permission and even encouragement from sports governing bodies, our research shows the practice is disappearing from major sports competitions.

    Take soccer, for example. At the FIFA World Cup 2022, England and Wales were the only national teams that knelt at their games in Qatar.

    At the FIFA Women’s World Cup 2023 in Australia and New Zealand, no teams or players knelt.

    The same happened at the 2024 Olympic soccer tournament in Paris.

    Together with the handful of teams that knelt in Tokyo at the 2021 Olympics, this indicates a growing reluctance throughout the sports world.

    This surely cannot mean athletes have become indifferent to racial injustice or other forms of oppression in the interval between the late 2010s and the mid-2020s.

    The explanation must be sought elsewhere. A hint was provided when Crystal Palace soccer player Zaha, the first player of colour in the UK to refuse to kneel, explained:

    I feel like taking the knee is degrading, because growing up my parents just let me know that I should be proud to be Black no matter what and I feel like we should just stand tall.

    The explanation may therefore be, at least in part, the players’ uncomfortable feelings related to the kneeling posture.

    In psychology, this bothersome state of mind is called “cognitive dissonance”: the mental conflict a person experiences when holding contrasting beliefs.

    A history of kneeling

    Kneeling is not understood in any culture as a posture expressing solidarity.

    Ancient Greek and Roman societies, on whose values Western civilisation was built, rejected kneeling as improper, even when praying to gods.

    Then, with the spread of Christianity in the Western world, kneeling became widely used, but only as an act of worship, confessing guilt, or praying for mercy.

    When performed outside the church, kneeling meant submission to nobility or royalty.

    The significance of kneeling as humility is not limited to the Western world.

    In African tribal culture, the young kneel in front of elders, and everyone kneels before the king.

    In China in 1949, Chairman Mao famously proclaimed at the first plenary of the Chinese People’s Political Consultative Conference:

    From now on our nation […] will no longer be a nation subject to insult and humiliation. We have stood up.

    With this in mind, kneeling may be deemed unfit at sporting events, which often feature a powerful cocktail of emotions, values and social expectations.

    The inconsistency between the excitement of competition and the expectation to kneel — a gesture associated with submission and humility — likely creates a bothersome state of mind for athletes.

    This potentially motivates some players to reject one of the two – in this case, the kneeling – to restore cognitive harmony.

    What could replace the kneeling ritual?

    After refusing, by unanimous players’ vote, to take a knee before their October 2020 game against the All Blacks, the Australian rugby union team chose instead to wear a First Nations jersey.

    The same year, several teams in German soccer’s top league chose to show their support for Black Lives Matter by wearing distinctive armbands.

    So it appears wearing a distinctive jersey or at least an armband is more easily accepted by modern-day athletes. This may be challenging, however, given that the governing bodies of many sports, such as FIFA, ban athletes from wearing political symbols on their clothing.

    Depending on whether sports codes accept this type of activism in the future, wearing supportive clothing could replace taking a knee as symbolic communication of solidarity with oppressed minorities.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Why have athletes stopped ‘taking a knee’? – https://theconversation.com/why-have-athletes-stopped-taking-a-knee-259047

    MIL OSI – Global Reports

  • MIL-OSI Global: Bats get fat to survive hard times. But climate change is threatening their survival strategy

    Source: The Conversation – Global Perspectives – By Nicholas Wu, Lecturer in Wildlife Ecology, Murdoch University

    Rudmer Zwerver/Shutterstock

    Bats are often cast as the unseen night-time stewards of nature, flitting through the dark to control pest insects, pollinate plants and disperse seeds. But behind their silent contributions lies a remarkable and underappreciated survival strategy: seasonal fattening.

    Much like bears and squirrels, bats around the world bulk up to get through hard times – even in places where you might not expect it.

    In a paper published today in Ecology Letters, we analysed data from bat studies around the world to understand how bats use body fat to survive seasonal challenges, whether it’s a freezing winter or a dry spell.

    The surprising conclusion? Seasonal fattening is a global phenomenon in bats, not just limited to those in cold climates.

    Even bats in the tropics, where it’s warm all year, store fat in anticipation of dry seasons when food becomes scarce. That’s a survival strategy that’s been largely overlooked. But it may be faltering as the climate changes, putting entire food webs at risk.

    Climate shapes fattening strategies

    We found bats in colder regions predictably gain more weight before winter.

    But in warmer regions with highly seasonal rainfall, such as tropical savannas or monsoonal forests, bats also fatten up. In tropical areas, it’s not cold that’s the enemy, but the dry season, when flowers wither, insects vanish and energy is hard to come by.

    The extent of fattening is impressive. Some species increased their body weight by more than 50%, which is a huge burden for flying animals that already use a lot of energy to move around. This highlights the delicate balancing act bats perform between storing energy and staying nimble in the air.

    Sex matters, especially in the cold

    The results also support the “thrifty females, frisky males” hypothesis.

    In colder climates, female bats used their fat reserves more sparingly than males – a likely adaptation to ensure they have enough energy left to raise young when spring returns. Since females typically emerge from hibernation to raise their young, conserving fat through winter can directly benefit their reproductive success.

    Interestingly, this sex-based difference vanished in warmer climates, where fat use by males and females was more similar, likely because more food is available in warmer climates. It’s another clue that climate patterns intricately shape behaviour and physiology.

    Climate change is shifting the rules

    Beyond the biology, our study points to a more sobering trend. Bats in warm regions appear to be increasing their fat stores over time. This could be an early warning sign of how climate change is affecting their survival.

    Climate change isn’t just about rising temperatures. It’s also making seasons more unpredictable.

    Bats may be storing more energy in advance of dry seasons that are becoming longer or harder to predict. That’s risky, because it means more foraging, more exposure to predators and potentially greater mortality.

    The implications can ripple outward. Bats help regulate insect populations, fertilise crops and maintain healthy ecosystems. If their survival strategies falter, entire food webs could feel the effects.

    Fat bats, fragile futures

    Our study changes how we think about bats. They are not just passive victims of environmental change but active strategists, finely tuned to seasonal rhythms. Yet their ability to adapt has limits, and those limits are being tested by a rapidly changing world.

    By understanding how bats respond to climate, we gain insights into broader ecosystem resilience. We also gain a deeper appreciation for one of nature’s quiet heroes – fattening up, flying through the night and holding ecosystems together, one wingbeat at a time.

    Nicholas Wu was the lead author of a funded Australian Research Council Linkage Grant awarded to Christopher Turbill at Western Sydney University.

    ref. Bats get fat to survive hard times. But climate change is threatening their survival strategy – https://theconversation.com/bats-get-fat-to-survive-hard-times-but-climate-change-is-threatening-their-survival-strategy-259560

    MIL OSI – Global Reports

  • MIL-OSI Global: Work, wages and apprenticeships: sifting for clues about the lives of girls in ancient Egypt

    Source: The Conversation – Global Perspectives – By Julia Hamilton, Lecturer in History and Archaeology, Macquarie University

    Weavers in the Tomb of Khnumhotep II, Beni Hassan, Egypt. Painted by Norman de Garis Davies (MMA 33.8.16)

    We know surprisingly little about the lives of children in ancient Egypt.

    And what records we do have about them often concern the lives of the elite – the young king or the children of senior officials. They are more prominent in surviving material evidence, especially funerary art – unsurprising, given ancient Egypt’s high infant mortality rates.

    As a result, much of the work in Egyptology on representations of childhood in ancient Egypt is dominated by evidence for the lives of boys and young adult men.

    But what were the lives of ordinary girls like in ancient Egypt? And how did they make their way in a deeply patriarchal culture?

    Finding hieroglyphic words for girls

    An initial problem in studying girls’ lives in ancient Egypt is answering the question: who was a girl in ancient Egypt?

    Chronological age was not always recorded by ancient Egyptians in their letters or inscriptions.

    Instead, more general words and hieroglyphic signs tended to accompany images of men, women and children to indicate their social roles.

    A woman is shown nursing a child while another woman is dressing her hair.
    Metropolitan Museum of Art, New York (22.2.35)

    These words and signs were only loosely associated with biological development.

    Hieroglyphic words for infants and small children, for instance, could be marked with an image of a small, seated child – sometimes with a finger held to its mouth.

    Among the words used to describe young girls – talking, walking, and participating alongside adults in their work – was sheriyt.

    This is the word often found in ancient accounting documents recording payments of wages, indicating a girl-child worker. They are distinguished from older women in these documents, although it is difficult to know precisely how young they might have been.

    In this way, written administrative records and archaeological evidence reveal that girls of many social classes were integrated into economic production from an early age.

    Payment for work

    Elephantine, a town at Egypt’s southern frontier near modern-day Aswan, provides a unique window into the urban life of some girls who worked in textile workshops during the ancient Egyptian Middle Kingdom, which dates approximately 2030–1650 BCE.

    In a find first published in 1996, archaeologists recovered a ceramic bowl repurposed as a writing surface from a house in the densely packed urban settlement.

    The excavators initially dated the bowl to the reign of King Amenemhat III, who ruled almost 3,800 years ago. However, based on the style of writing and the types of names listed, some scholars have also dated it earlier. It contains lists of payments of provisions of grain for textile workers over the course of a month.

    What makes this document so important is that it names at least 18 child workers. Of these, 11 are girls, clearly marked with the Egyptian word sheriyt, working alongside 28 adult women.

    The list shows adult women in this workshop received between 50 and 57 heqat (around 240–274 litres) of grain – although it’s not entirely clear if this was a one-off payment, a payment per month, or something else. The girls earned smaller but still significant wages of 3–7 heqat (around 14–34 litres).

    Some other adult women seem to have also received comparable provisions to the girls, although without further information it is difficult to know their social status or age.

    This document not only confirms that girls received payment for their labour. It also suggests a structured apprenticeship system where young girls (and boys) worked alongside experienced craftswomen.

    This corroborates evidence from visual art of textile workshops from the same period.

    Weavers in the Tomb of Khnumhotep II, Beni Hassan, Egypt. Painted at the tomb in 1931 by Norman de Garis Davies.
    Metropolitan Museum of Art, New York (33.8.16)

    Work life, home life

    Archaeological evidence suggests textile production occurred both within homes and in dedicated workshops.

    Evidence from the excavations at Elephantine suggests homes had several rooms with multiple purposes, including courtyards, entrance vestibules, kitchens with ovens (recognisable by blackened walls and ash deposits), and possible stairs leading to roof spaces.

    Privacy would have been limited. Daily life would have included close interaction with animals, as evidenced by attached animal pens.

    More recently, close to the house where the provision list was discovered, archaeologists found needles, spindles, shuttles, and remains of pegs for a large loom.

    These were found both inside houses and in the courtyards attached to them.

    It’s hard to know what exactly these buildings were for; they probably served multiple purposes.

    Lives shaped by class and legal status

    Not all girls at Elephantine had the same experience of life. The town’s position at Egypt’s southern frontier in this period meant it was home to diverse populations, which included migrants, enslaved people and transitory workers.

    A letter dating to the reign of King Amenemhat III documents some families, including women and children, arriving at Elephantine seeking work during a famine in their home region.

    This ancient letter mentions families, including women and children, looking for work.
    © The Trustees of the British Museum. Shared under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) licence, CC BY-NC-SA

    This evidence can be compared to a legal document from the same period but from another Egyptian town, El Lahun. This document mentions the purchase and transfer of enslaved women and infants who are called Aamut, referring to a region in West Asia. The document shows they had been given new Egyptian names.

    These documents remind us factors such as class and legal status have always profoundly shaped girls’ lives.

    Valuing the work of girls

    Accessing the everyday thoughts, feelings, and perspectives of many ancient people, especially children, is challenging for historians. We don’t, for instance, have a wealth of personal diaries from ancient Egypt to learn about girls’ interior lives.

    But what’s clear is that girls were not merely passive participants in society. They were active economic contributors, who often received formal compensation for their work.

    Historians must always look beyond elite contexts to incorporate diverse evidence types – administrative documents, archaeological remains, and artistic representations – to construct a more complete picture of ancient lives.

    Julia Hamilton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Work, wages and apprenticeships: sifting for clues about the lives of girls in ancient Egypt – https://theconversation.com/work-wages-and-apprenticeships-sifting-for-clues-about-the-lives-of-girls-in-ancient-egypt-249581

    MIL OSI – Global Reports

  • MIL-OSI Global: MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated

    Source: The Conversation – Global Perspectives – By Vitomir Kovanovic, Associate Professor and Associate Director of the Centre for Change and Complexity in Learning (C3L), Education Futures, University of South Australia

    Rroselavy / Shutterstock

    Since ChatGPT appeared almost three years ago, the impact of artificial intelligence (AI) technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty?

    Most importantly, there has been concern that using AI will lead to a widespread “dumbing down”, or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.

    Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to “cognitive debt” and a “likely decrease in learning skills”.

    So what did the study find?

    The difference between using AI and the brain alone

    Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains (“brain-only” group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.

    The cognitive engagement of those who used AI was significantly lower than the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.

    Interestingly, participants switched roles for a final, fourth essay (the brain-only group used AI and vice versa). The AI-to-brain group performed worse, showing engagement only slightly higher than the brain-only group’s first session and far below that group’s third session.

    The authors claim this demonstrates how prolonged use of AI led to participants accumulating “cognitive debt”. When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups.

    Cautiously, the authors note that only 18 participants (six per condition) completed the fourth, final session. Therefore, the findings are preliminary and require further testing.

    Does this really show AI makes us stupider?

    These results do not necessarily mean that students who used AI accumulated “cognitive debt”. In our view, the findings are due to the particular design of the study.

    The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.

    When the AI group finally got to “use their brains”, they were only doing the task once. As a result, they were unable to match the other group’s experience. They achieved only slightly better engagement than the brain-only group during the first session.

    To fully justify the researchers’ claims, the AI-to-brain participants would also need to complete three writing sessions without AI.

    Similarly, the fact the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics.

    As writing without AI required more substantial engagement, they had a far better recall of what they had written in the past. Hence, they primarily used AI to search for new information and refine what they had previously written.

    What are the implications of AI in assessment?

    To understand the current situation with AI, we can look back to what happened when calculators first became available.

    Back in the 1970s, their impact was regulated by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks.

    Effectively, the bar was significantly raised, which made students work equally hard (if not harder) than before calculators were available.

    The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago.

    In such situations, AI can indeed be detrimental. Students can for the most part offload critical engagement with learning to AI, which results in “metacognitive laziness”.

    However, just like calculators, AI can and should help us accomplish tasks that were previously impossible – and still require significant engagement. For example, we might ask student teachers to use AI to produce a detailed lesson plan, which would then be evaluated for quality and pedagogical soundness in an oral examination.

    In the MIT study, participants who used AI were producing the “same old” essays. They adjusted their engagement to deliver the standard of work expected of them.

    The same would happen if students were asked to perform complex calculations with or without a calculator. The group doing calculations by hand would sweat, while those with calculators would barely blink an eye.

    Learning how to use AI

    Current and future generations need to be able to think critically and creatively and solve problems. However, AI is changing what these things mean.

    Producing essays with pen and paper is no longer a demonstration of critical thinking ability, just as doing long division is no longer a demonstration of numeracy.

    Knowing when, where and how to use AI is the key to long-term success and skill development. Prioritising which tasks can be offloaded to an AI to reduce cognitive debt is just as important as understanding which tasks require genuine creativity and critical thinking.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated – https://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450

    MIL OSI – Global Reports

  • MIL-OSI Global: It’s time to face an uncomfortable truth: maybe our pampered pets would be better off without us

    Source: The Conversation – Global Perspectives – By Nancy Cushing, Associate Professor, School of Humanities and Social Science, University of Newcastle

    ROSLAN RAHMAN/AFP via Getty Images

    Pet-keeping is often promoted for the benefits it brings humans. A close association with another animal can provide us with a sense of purpose and a daily dose of joy. It can aid our health, make us more conscientious and even help us form relationships with other humans.

    But the situation is perhaps not as rosy for the animal itself. Domesticated animals often live longer than their free-living counterparts, but the quality of those lives can be compromised. Pets can be fed processed foods that can lead to obesity. Many are denied a sexual life and experience of parenthood. Exercise can be limited, isolation is common and boredom must be endured.

    In the worst cases, pets suffer due to selective breeding practices, physical abuse and unethical commercial breeding.

    Is this the best life for the species we feel closest to? This question was raised for me when I heard the story of Valerie, the dachshund recaptured in April this year after almost 18 months living on her own on South Australia’s Karta Pintingga/Kangaroo Island.

    Is being a pet the best life for the species we feel closest to?
    Oleksandr Rupeta/NurPhoto via Getty Images

    Valerie: the story that captivated a nation

    Valerie, a miniature dachshund, escaped into the bush during a camping trip on Kangaroo Island in November 2023. After several days of searching, her bereft humans returned to their home in New South Wales. They assumed the tiny dog, who had lived her life as a “little princess”, was gone forever.

    Fast-forward a year, and sightings were reported on the island of a small dog wearing a pink collar. Word spread and volunteers renewed the search. A wildlife rescue group designed a purpose-built trap, fitting it out with items from Valerie’s former home.

    After several weeks, a remotely controlled gate clattered shut behind Valerie and she was caught.

    Cue great celebrations. The searchers were triumphant and the family was delighted. Social media lit up. It was a canine reenactment of one of settler Australia’s enduring narratives: the lost child rescued from the hostile bush.

    A dog’s-eye view

    But imagine if Valerie’s story was told from a more dog-centred perspective. Valerie found herself alone in a strange place and took the opportunity to run away. She embarked on a new life in which she was responsible for herself and could exercise the intelligence inherited from her boar-hunting ancestors.

    No longer required to be a good girl, Valerie applied her own judgement – that notorious dachshund “stubbornness” – to evade predators, fill her stomach and pass her days.

    Some commentators assumed Valerie must have been fed by anonymous benefactors – reflecting a widely held view that pets have limited abilities.

    Veterinary experts, however, said her diet likely consisted of small birds, mammals and reptiles she killed herself – as well as roadkill, other carrion and faeces.

    Valerie was clearly good at life on the lam. Unlike the human competitors in the series Alone Australia, she did not waste away when left in an island wilderness. Instead, she gained 1.8 kg of muscle – and was so stocky she no longer fit the old harness her humans brought to collect her. She had literally outgrown her former bonds.

    Valerie could have sought shelter with the island’s humans at any time, but chose not to. She had to be actively trapped. Once returned to her humans, she needed time to reacclimatise to life as a pet.

    Not all missing pets thrive in the wild. But all this raises the question: would Valerie’s rescue be better understood as a forced return from a full life of freedom to a diminished existence in captivity?

    A long history of pets thriving in the wild

    Other examples exist which suggest an animal’s best life can take place outside the constraints of being a pet.

    Exotic parrots have fled lives in cages to form urban flocks. In the United States, 25 species initially imported as pets have set up self-sustaining, free-living populations across 23 states.

    Or take the red-eared slider turtle, which is native to parts of the US and Mexico. It’s illegal to keep the turtles as pets in Australia, but some of those smuggled in have later been released into urban wetlands where they have established large and widespread populations.

    Cats are perhaps the most notorious example of escaped pets thriving on their own in Australia. They number in the millions, in habitats from cities to the Simpson Desert to the Snowy Mountains, showing how little they need human assistance.

    One mark of their success is their prodigious size. At up to 7kg, free-living cats can be more than twice the weight of the average domestic cat.

    Around the world, exotic former companion mammals, birds, fish, reptiles, amphibians and insects have all established populations large enough to pose problems for other species.

    Rethinking animals as pets

    Of course, I am not advocating that pets be released to the wild, creating new problems. But I do believe current pet-keeping practices are due for reconsideration.

    A dramatic solution would be to take the animal out of the pet relationship. Social robots that look like seals and teddy bears are already available to welcome you home, mirror your emotions and offer up cuddles without the cost to other animals.

    A less radical option is to rethink the idea of animals as “pets” and instead see them as equals.

    Some people already enjoy these unforced bonds. Magpies, for example, are known to have strong allegiances with each other and are sometimes willing to extend those connections to humans in multi-species friendships.

    As for Valerie, she did make “her little happy sounds” when reunited with her humans. But she might look back with nostalgia to her 529 days of freedom on Kangaroo Island.

    Nancy Cushing receives funding from the State Library of New South Wales as the Coral Thomas Fellow. She is a member of the executive committee of the Australian Historical Association.

    ref. It’s time to face an uncomfortable truth: maybe our pampered pets would be better off without us – https://theconversation.com/its-time-to-face-an-uncomfortable-truth-maybe-our-pampered-pets-would-be-better-off-without-us-256903

    MIL OSI – Global Reports

  • MIL-OSI Global: Inaccurate and misogynistic: why we need to make the term ‘hysterectomy’ history

    Source: The Conversation – Global Perspectives – By Theresa Larkin, Associate Professor of Medical Sciences, University of Wollongong

    Panuwat Dangsungnoen/Getty Images

    Have you had a tonsillectomy (your tonsils taken out), appendectomy (your appendix removed) or lumpectomy (removal of a lump from your breast)? The suffix “ectomy” denotes surgical removal of the named body part, so these terms give us a clear idea of what the procedure entails.

    So why is the removal of the uterus called a hysterectomy and not a uterectomy?

    The name hysterectomy is rooted in a mental health condition – “hysteria” – that was once believed to affect women. But we now know this condition doesn’t exist.

    Continuing to call this significant operation a hysterectomy both perpetuates misogyny and hampers people’s understanding of what it is.

    From the defunct condition ‘hysteria’

    Hysteria was a psychiatric condition first formally defined in the 5th century BCE. It had many symptoms, including excessive emotion, irritability, anxiety, breathlessness and fainting.

    But hysteria was only diagnosed in women. Male physicians at the time claimed these symptoms were caused by a “wandering womb”. They believed the womb (uterus) moved around the body looking for sperm and disrupted other organs.

    Because the uterus was blamed for hysteria, the treatment was to remove it. This procedure was called a hysterectomy. Sadly, many women had their healthy uterus unnecessarily removed and most died.

    The word “hysteria” did originally come from the ancient Greek word for uterus, “hystera”. But the modern Greek word for uterus is “mitra”, which is where words such as “endometrium” come from.

    Hysteria was only removed as an official medical diagnosis in 1980, when it was finally recognised that the condition does not exist and that the diagnosis was sexist.

    “Hysterectomy” should also be removed from medical terminology because it continues to link the uterus to hysteria.

    Common but confusing

    About one in three Australian women will have their uterus removed. A hysterectomy is one of the most common surgeries worldwide. It’s used to treat conditions including:

    • abnormal uterine bleeding (heavy bleeding)
    • uterine fibroids (benign tumours)
    • uterine prolapse (when the uterus protrudes down into the vagina)
    • adenomyosis (when the inner layer of the uterus grows into the muscle layer)
    • cancer.

    However, in a survey colleagues and I did of almost 500 Australian adults, which is yet to be published in a peer-reviewed journal, one in five people thought hysterectomy meant removal of the ovaries, not the uterus.

    It’s true some hysterectomies for cancer do also remove the ovaries. A hysterectomy or partial hysterectomy is the removal of only the uterus, a total hysterectomy removes the uterus and cervix, while a radical hysterectomy usually removes the uterus, cervix, uterine tubes and ovaries.

    There are important differences between these hysterectomies, so they should be named to clearly indicate the nature of the surgery.

    Research has shown ambiguous terminology such as “hysterectomy” is associated with low patient understanding of the procedure and the female anatomy involved.

    There are different types of hysterectomies, and the label can be confusing.
    Olena Yakobchuk/Shutterstock

    Uterectomy should be used for removal of the uterus, in combination with the medical terms for removal of the cervix, uterine tubes and ovaries as needed. For example, a uterectomy plus cervicectomy would refer to the removal of the uterus and the cervix.

    This could help patients understand what is (and isn’t) being removed from their bodies and increase clarity for the wider public.

    Other female body parts and procedures have male names

    There are many eponyms (something named after a person) in anatomy and medicine, such as the Achilles tendon and Parkinson’s disease. They are almost exclusively the names of white men.

    Eponyms for female anatomy and procedures include the Fallopian tubes, Pouch of Douglas, and Pap smear.

    The anatomical term for Fallopian tubes is uterine tubes. “Uterine” indicates these are attached to the uterus, which reinforces their important role in fertility.

    The Pouch of Douglas is the space between the rectum and uterus. Using the anatomical name (rectouterine pouch) is important, because this is a common site for endometriosis and can explain any associated bowel symptoms.

    Pap smear gives no indication of its location or function. The new cervical screening test is named exactly that, which clarifies it samples cells of the cervix. This helps people understand it tests for the risk of cervical cancer.

    Language matters in medicine and health care

    Language in medicine impacts patient care and health. It needs to be accurate and clear, not include words associated with bias or discrimination, and not disempower a person.

    For these reasons, the International Federation of Associations of Anatomists recommends removing eponyms from scientific and medical communication.

    Meanwhile, experts have rightly argued it’s time to rename the hysterectomy to uterectomy.

    A hysterectomy is an emotional procedure with not only physical but also psychological effects. Not directly referring to the uterus perpetuates the historical disregard of female reproductive anatomy and functions. Removing the link to hysteria and renaming hysterectomy to uterectomy would be a simple but symbolic change.

    Educators, medical doctors and science communicators will play an important role in using the term uterectomy instead of hysterectomy. Ultimately, the World Health Organization should make official changes in the International Classification of Health Interventions.

    In line with increasing awareness and discussions around female reproductive health and medical misogyny, now is the time to improve terminology. We must ensure the names of body parts and medical procedures reflect the relevant anatomy.

    Theresa Larkin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Inaccurate and misogynistic: why we need to make the term ‘hysterectomy’ history – https://theconversation.com/inaccurate-and-misogynistic-why-we-need-to-make-the-term-hysterectomy-history-257972

    MIL OSI – Global Reports

  • MIL-OSI Global: How do sleep trackers work, and are they worth it? A sleep scientist breaks it down

    Source: The Conversation – Global Perspectives – By Dean J. Miller, Senior Lecturer, Appleton Institute, HealthWise Research Group, CQUniversity Australia

    Many smartwatches, fitness and wellness trackers now offer sleep tracking among their many functions.

    Wear your watch or ring to bed, and you’ll wake up to a detailed sleep report telling you not just how long you slept, but when each phase happened and whether you had a good night’s rest overall.

    Surfing is done in the ocean, planes fly in the sky, and sleep occurs in the brain. So how can we measure sleep from the wrist or finger?

    The gold standard of sleep measurement

    If you’ve ever had a sleep study or seen someone with dozens of wires attached to their head, body and face, you’ve encountered polysomnography or PSG.

    Eye movements, muscle tone, heart rate and brain activity are measured and assessed by experts to detect which stage of sleep or wakefulness a person is in.

    When we sleep, we cycle through different stages, generally classified as light sleep, slow-wave sleep (also known as deep sleep), and rapid eye movement or REM sleep.

    Each stage has an effect on brain activity, muscle tone and heart rate – which is why sleep scientists need so many wires.

    Accurate? Absolutely. Convenient? Like two left shoes.

    This is where the convenience of wearable at-home sleep trackers comes in.

    What sensors are in sleep trackers?

    Since the 1990s, sleep researchers have been using actigraphy to measure people’s sleep outside the laboratory.

    An actigraphy device is similar to a wristwatch and uses accelerometers to measure the person’s movement. Coupled with sleep diaries, actigraphy assumes a person is awake when they’re moving and asleep when still. Simple.

    While this is a scientifically accepted method of estimating sleep, it’s prone to mislabelling being awake but at rest (such as when reading a book) as sleep.
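    The actigraphy rule described above – moving means awake, still means asleep – can be sketched in a few lines of Python. This is a toy illustration only: the activity counts and the threshold are invented for this example, and real actigraphy algorithms (such as Cole–Kripke) use weighted, clinically validated scoring rather than a simple cutoff.

```python
def score_epochs(activity_counts, threshold=10):
    """Label each 30-second epoch 'sleep' when movement falls below
    the threshold, and 'wake' otherwise (the toy actigraphy rule)."""
    return ["sleep" if count < threshold else "wake" for count in activity_counts]

# Five minutes of invented 30-second epochs: restless at first, then still.
counts = [42, 35, 18, 9, 3, 0, 1, 0, 2, 0]
print(score_epochs(counts))
```

    The sketch also makes the failure mode visible: lying still with a book produces low activity counts, so a threshold rule like this would happily score a quiet reader as asleep.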

    There’s one key addition that makes wrist-worn sleep trackers more accurate – PPG or photoplethysmography.

    It’s hard to pronounce, but photoplethysmography is a key driver in the explosion of wearable health tracking.

    It uses those little green lights on the skin-side of the wearable to track the amount of blood passing through your wrist at any given time. Clip-on pulse oximeters used by doctors are the same type of tech.

    The addition of PPG to a wrist tracker allows for the measurement of raw data like heart rate and breathing rate. From this data, the wearable can estimate a number of physiological metrics, including sleep stages.

    Since fitness wearables already have accelerometers and PPG to track your physical activity and heart rate, it makes sense to use these sensors to track sleep too. But how accurate are they?

    Many fitness trackers leverage the sensors used to measure your fitness activities and heart rate for sleep tracking.
    The Conversation

    How do scientists test sleep trackers?

    Two main factors determine the accuracy of sleep trackers. How well does the device detect whether you’re asleep or awake? And how well can it distinguish the sleep stages?

    To answer these questions, sleep scientists conduct validation studies. Participants sleep overnight in a laboratory while wearing both a sleep tracker and undergoing PSG.

    Then, scientists compare the data from both methods in 30-second blocks called “epochs”. That means for a nine-hour sleep there will be 1,080 epochs to compare.

    If both the device and PSG indicate “sleep” for the same epoch, they’re in agreement. If the device indicates “wake” and PSG indicates “sleep” for the same epoch, that’s considered an error. The same is done for sleep stages.
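    As a rough sketch of that comparison (with invented labels – real validation studies score around 1,080 epochs per night and also report per-stage agreement and more nuanced statistics), the epoch-by-epoch agreement rate can be computed like this:

```python
def agreement(psg, device):
    """Fraction of 30-second epochs where the wearable's label
    matches the PSG (gold standard) label."""
    assert len(psg) == len(device)
    matches = sum(p == d for p, d in zip(psg, device))
    return matches / len(psg)

# Six invented epochs: the device mislabels one 'wake' epoch as 'sleep'.
psg    = ["sleep", "sleep", "wake", "sleep", "wake", "sleep"]
device = ["sleep", "sleep", "sleep", "sleep", "wake", "sleep"]
print(f"{agreement(psg, device):.0%}")
```

    Note how the mislabelled epoch is exactly the case the article flags: restful wakefulness looks a lot like light sleep to the sensors, which is why wake detection is the weak spot.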

    How accurate are sleep trackers?

    In a 2022 study of several popular trackers, most correctly identified more than 90% of sleep epochs. But because light sleep and restful wake are so similar, wearables struggle more to estimate wakefulness, correctly identifying between 26% and 73% of wake epochs.

    When it comes to sleep stages, wearables are less precise, correctly identifying between 53% and 60% of sleep stage epochs. However, for some devices and some sleep stages the precision can be greater. A recent validation study showed that a latest-generation ring-shaped wearable didn’t differ from PSG in estimating light sleep and slow-wave sleep.

    In short, most modern sleep trackers do a decent job of estimating your total sleep each night. Some are more accurate for sleep staging, but this level of detail isn’t essential for improving the basics of your sleep.

    Do I need a sleep tracker?

    If you’re struggling with sleep, you should speak to your doctor. A sleep tracker can be a useful tool to help track your sleep goals, but ultimately your behaviour is what will improve sleep.

    Keeping regular bedtimes and wake-up times, having a distraction-free sleep space, and keeping home lighting low in the evenings can all help to improve your sleep.

    If you love tracking your sleep, make sure your device has been independently validated. While sleep stage data may not be essential, devices that perform well at estimating sleep stages also tend to be more accurate at detecting when you’re asleep or awake. When reviewing your data, look at long-term trends in sleep rather than day-to-day variability.

    If you don’t love your sleep tracker, you can take it off or ignore it. For some people, access to sleep data can negatively impact sleep by creating stress and anxiety about getting a perfect night’s sleep. Instead, focus on improving your healthy sleep strategies and pay attention to how you feel during the day.

    Dr Dean J. Miller is a member of a research group at Central Queensland University that receives support for research (i.e., funding, equipment) from WHOOP Inc, a smart device maker.

    ref. How do sleep trackers work, and are they worth it? A sleep scientist breaks it down – https://theconversation.com/how-do-sleep-trackers-work-and-are-they-worth-it-a-sleep-scientist-breaks-it-down-258304

    MIL OSI – Global Reports

  • MIL-OSI Global: Is AI a con? A new book punctures the hype and proposes some ways to resist

    Source: The Conversation – Global Perspectives – By Luke Munn, Research Fellow, Digital Cultures & Societies, The University of Queensland

    AI Am Over It – Nadia Piet.
    Archival Images of AI + AIxDESIGN, CC BY

    Is AI going to take over the world? Have scientists created an artificial lifeform that can think on its own? Is it going to replace all our jobs, even creative ones, like doctors, teachers and care workers? Are we about to enter an age where computers are better than humans at everything?

    The answers, as the authors of The AI Con stress, are “no”, “they wish”, “LOL” and “definitely not”.


    The AI Con: How To Fight Big Tech’s Hype and Create the Future We Want – Emily M. Bender and Alex Hanna (Bodley Head)


    Artificial intelligence is a marketing term as much as a distinct set of computational architectures and techniques. AI has become a magic word for entrepreneurs to attract startup capital for dubious schemes, an incantation deployed by managers to instantly achieve the status of future-forward leaders.

    In a mere two letters, it conjures a vision of automated factories and robotic overlords, a utopia of leisure or a dystopia of servitude, depending on your point of view. It is not just technology, but a powerful vision of how society should function and what our future should look like.

    In this sense, AI doesn’t need to work for it to work. The accuracy of a large language model may be doubtful, the productivity of an AI office assistant may be claimed rather than demonstrated, but this bundle of technologies, companies and claims can still alter the terrain of journalism, education, healthcare, service work and our broader sociocultural landscape.

    Pop goes the bubble

    For Emily M. Bender and Alex Hanna, the AI hype bubble needs to be popped.

    Bender is a linguistics professor at the University of Washington, who has become a prominent technology critic. Hanna is a sociologist and former employee of Google, who is now the director of research at the Distributed AI Research Institute. After teaming up to mock AI boosters in their popular podcast, Mystery AI Hype Theater 3000, they have distilled their insights into a book written for a general audience. They meet the unstoppable force of AI hype with immovable scepticism.


    Step one in this program is grasping how AI models work. Bender and Hanna do an excellent job of decoding technical terms and unpacking the “black box” of machine learning for lay people.

    Driving this wedge between hype and reality, between assertions and operations, is a recurring theme across the pages of The AI Con, and one that should gradually erode readers’ trust in the tech industry. The book outlines the strategic deceptions employed by powerful corporations to reduce friction and accumulate capital. If the barrage of examples tends to blur together, the sense of technical bullshit lingers.

    What is intelligence? A famous and highly cited paper co-written by Bender asserts that large language models are simply “stochastic parrots”, drawing on training data to predict which set of tokens (i.e. words) is most likely to follow the prompt given by a user. Harvesting millions of crawled websites, the model can regurgitate “the moon” after “the cow jumped over”, albeit in much more sophisticated variants.

    Rather than actually understanding a concept in all its social, cultural and political contexts, large language models carry out pattern matching: an illusion of thinking.

    But I would suggest that, in many domains, a simulation of thinking is sufficient, as it is met halfway by those engaging with it. Users project agency onto models via the well-known Eliza effect, imparting intelligence to the simulation.

    Management are pinning their hopes on this simulation. They view automation as a way to streamline their organisations and not be “left behind”. This powerful vision of early adopters vs extinct dinosaurs is one we see repeatedly with the advent of new technologies – and one that benefits the tech industry.

    In this sense, poking holes in the “intelligence” of artificial intelligence is a losing move, missing the social and financial investment that wants this technology to work. “Start with AI for every task. No matter how small, try using an AI tool first,” commanded Duolingo’s chief engineering officer in a recent message to all employees. Duolingo has joined Fiverr, Shopify, IBM and a slew of other companies proclaiming their “AI first” approach.

    ‘Large language models carry out pattern matching: an illusion of thinking.’ Image: Talking to AI 2.0 – Yutong Liu.
    Kingston School of Art/https://betterimagesofai.org, CC BY

    Shapeshifting technology

    The AI Con is strongest when it looks beyond or around the technologies to the ecosystem surrounding them, a perspective I have also argued is immensely helpful. By understanding the corporations, actors, business models and stakeholders involved in a model’s production, we can evaluate where it comes from, its purpose, its strengths and weaknesses, and what all this might mean downstream for its possible uses and implications. “Who benefits from this technology, who is harmed, and what recourse do they have?” is a solid starting point, Bender and Hanna suggest.

    These basic but important questions extract us from the weeds of technical debate – how does AI function, how accurate or “good” is it really, how can we possibly understand this complexity as non-engineers? – and give us a critical perspective. They place the onus on industry to explain, rather than users to adapt or be rendered superfluous.

    We don’t need to be able to explain technical concepts like backpropagation or diffusion to grasp that AI technologies can undermine fair work, perpetuate racial and gender stereotypes, and exacerbate environmental crises. The hype around AI is meant to distract us from these concrete effects, to trivialise them and thus encourage us to ignore them.

    Emily M. Bender.
    University of Washington

    As Bender and Hanna explain, AI boosters and AI doomers are really two sides of the same coin. Conjuring up nightmare scenarios of self-replicating AI terminating humanity or claiming sentient machines will usher us into a posthuman paradise are, in the end, the same thing. They place a religious-like faith in the capabilities of technology, which dominates debate, allowing tech companies to retain control of AI’s future development.

    The risk of AI is not potential doom in the future, à la the nuclear threat during the Cold War, but the quieter and more significant harm to real people in the present. The authors explain that AI is more like a panopticon “that allows a single prison warden to keep track of hundreds of prisoners at once”, or the “surveillance dragnets that track marginalised groups in the West”, or a “toxic waste, salting the earth of a Superfund site”, or a “scabbing worker, crossing the picket line at the behest of an employer who wants to signal to the picketers that they are disposable. The totality of systems sold as AI are these things, rolled into one.”

    A decade ago, with another “game-changing” technology, author Ian Bogost observed that

    rather than utopia or dystopia, we usually end up with something less dramatic yet more disappointing. Robots neither serve human masters nor destroy us in a dramatic genocide, but slowly dismantle our livelihoods while sparing our lives.

    The pattern repeats. As AI matures (to some degree) and is adopted by organisations, it moves from innovation to infrastructure, from magic to mechanism. Grand promises never materialise. Instead, society endures a tougher, bleaker future. Workers feel more pressure; surveillance is normalised; truth is muddied with post-truth; the marginal become more vulnerable; the planet gets hotter.

    Technology, in this sense, is a shapeshifter: the outward form constantly changes, yet the inner logic remains the same. It exploits labour and nature, extracts value, centralises wealth, and protects the power and status of the already-powerful.

    Co-opting critique

    In The New Spirit of Capitalism, sociologists Luc Boltanski and Eve Chiapello demonstrate how capitalism has mutated over time, folding critiques back into its DNA.

    After enduring a series of blows around alienation and automation in the 1960s, capitalism moved from a hierarchical Fordist mode of production to a more flexible form of self-management over the next two decades. It began to favour “just in time” production, done in smaller teams, that (ostensibly) embraced the creativity and ingenuity of each individual. Neoliberalism offered “freedom”, but at a price. Organisations adapted; concessions were made; critique was defused.


    Verso Books

    AI continues this form of co-option. Indeed, the current moment can be described as the end of the first wave of critical AI. In the last five years, tech titans have released a series of bigger and “better” models, with both the public and scholars focusing largely on generative and “foundation” models: ChatGPT, Stable Diffusion, Midjourney, Gemini, DeepSeek, and so on.

    Scholars have heavily criticised aspects of these models – my own work has explored truth claims, generative hate, ethics washing and other issues. Much work focused on bias: the way in which training data reproduces gender stereotypes, racial inequality, religious bigotry, western epistemologies, and so on.

    Much of this work is excellent and seems to have filtered into the public consciousness, based on conversations I’ve had at workshops and events. However, its flagging of such issues allows tech companies to treat critique as a list of discrete issues to be resolved. If the accuracy of a facial-recognition system is lower with Black faces, add more Black faces to the training set. If the model is accused of English dominance, fork out some money to produce data on “low-resource” languages.

    Companies like Anthropic now regularly carry out “red teaming” exercises designed to highlight hidden biases in models. Companies then “fix” or mitigate these issues. But due to the massive size of the data sets, these tend to be band-aid solutions, superficial rather than structural tweaks.

    For instance, soon after launching, AI image generators were under pressure for not being “diverse” enough. In response, OpenAI invented a technique to “more accurately reflect the diversity of the world’s population”. Researchers discovered this technique was simply tacking on additional hidden prompts (e.g. “Asian”, “Black”) to user prompts. Google’s Gemini model also seems to have adopted this, which resulted in a backlash when images of Vikings or Nazis had South Asian or Native American features.

    The point here is not whether AI models are racist or historically inaccurate or “woke”, but that models are political and never disinterested. Harder questions about how culture is made computational, or what kind of truths we want as society, are never broached and therefore never worked through systematically.

    Such questions are certainly broader and less “pointy” than bias, but also less amenable to being translated into a problem for a coder to resolve.

    What next?

    How, then, should those outside the academy respond to AI? The past few years have seen a flurry of workshops, seminars and professional development initiatives. These range from “gee whiz” tours of AI features for the workplace, to sober discussions of risks and ethics, to hastily organised all-hands meetings debating how to respond now, and next month, and the month after that.

    Alex Hanna.
    Will Toft/alex-hanna.com, CC BY

    Bender and Hanna wrap up their book with their own responses. Many of these, like their questions about how models work and who benefits, are simple but fundamental, offering a strong starting point for organisational engagement.

    For the technosceptical duo, refusal is also clearly an option, though individuals will obviously have vastly different degrees of agency when it comes to opting out of models and pushing back on adoption strategies. Refusal of AI, as with many technologies that have come before it, often relies to some extent on privilege. The six-figure consultant or coder will have discretion that the gig worker or service worker cannot exercise without penalties or punishments.

    If refusal is fraught at the individual level, it seems more viable and sustainable at a cultural level. Bender and Hanna suggest generative AI be responded to with mockery: companies who employ it should be derided as cheap or tacky.

    The cultural backlash against AI is already in full swing. Soundtracks on YouTube are increasingly labelled “No AI”. Artists have launched campaigns and hashtags, stressing their creations are “100% human-made”.

    These moves are attempts to establish a cultural consensus that AI-generated material is derivative and exploitative. And yet, if these moves offer some hope, they are swimming against the swift current of enshittification. AI slop means faster and cheaper content creation, and the technical and financial logic of online platforms – virality, engagement, monetisation – will always create a race to the bottom.

    The extent to which the vision offered by big tech will be accepted, how far AI technologies will be integrated or mandated, how much individuals and communities will push back against them – these are still open questions. In many ways, Bender and Hanna successfully demonstrate that AI is a con. It fails at productivity and intelligence, while the hype launders a series of transformations that harm workers, exacerbate inequality and damage the environment.

    Yet such consequences have accompanied previous technologies – fossil fuels, private cars, factory automation – and hardly dented their uptake and transformation of society. So while praise goes to Bender and Hanna for a book that shows “how to fight big tech’s hype and create the future we want”, the issue of AI resonates, for me, with Karl Marx’s observation that people “make their own history, but they do not make it just as they please”.

    Luke Munn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Is AI a con? A new book punctures the hype and proposes some ways to resist – https://theconversation.com/is-ai-a-con-a-new-book-punctures-the-hype-and-proposes-some-ways-to-resist-257015

    MIL OSI – Global Reports

  • MIL-OSI Global: Archetyp was one of the dark web’s biggest drug markets. A global sting has shut it down

    Source: The Conversation – Global Perspectives – By Elena Morgenthaler, PhD Candidate, School of Criminology and Criminal Justice, Griffith University

    Operation Deep Sentinel

    Last week, one of the dark web’s most prominent drug marketplaces – Archetyp – was shut down in an international, multi-agency law enforcement operation following years of investigations. It was touted as a major policing win and was accompanied by a slick cyberpunk-themed video.

    But those of us who have studied this space for years weren’t surprised. Archetyp may have been the most secure dark web market. But shutdowns like this have become a recurring feature of the dark web. And they are usually not a significant turning point.

    The durability of these markets tells us that if policing responses keep following the same playbook, they will keep getting the same results. And by focusing so heavily on these hidden platforms, authorities are neglecting the growing digital harms in the spaces we all use.

    One of the most popular dark web markets

    Dark web markets mirror mainstream e-commerce platforms – think Amazon meets cybercrime. These are encrypted marketplaces accessed via the Tor Browser, a privacy-focused browser that hides users’ IP addresses. Buyers use cryptocurrency and escrow systems (third-party payment systems which hold funds until the transaction is complete) to anonymously purchase illicit drugs.

    Usually, these products are sent to the buyer by post and the money is transferred to the seller through the escrow system.
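The escrow pattern described above is a generic e-commerce mechanism: a third party holds the buyer's funds until delivery is confirmed. A minimal sketch, with entirely hypothetical class and method names (this is not any real marketplace's code):

```python
# Minimal sketch of the generic escrow pattern: a third party holds
# the buyer's funds and releases them to the seller only once the
# buyer confirms the goods have arrived.
class Escrow:
    def __init__(self):
        self.state = "open"   # open -> funded -> released
        self.balance = 0

    def deposit(self, amount):
        if self.state != "open":
            raise ValueError("can only fund an open escrow")
        self.balance = amount
        self.state = "funded"

    def confirm_delivery(self):
        if self.state != "funded":
            raise ValueError("nothing held in escrow")
        payout, self.balance = self.balance, 0
        self.state = "released"
        return payout         # funds released to the seller
```

Holding funds in a third state until confirmation is what protects the buyer from non-delivery, and it is also why sudden "exit scams" are possible: whoever controls the escrow controls everything in the "funded" state.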

    Archetyp launched in May 2020 and quickly grew to become one of the most popular dark web markets with an estimated total transaction volume of €250 million (A$446 million). It had more than 600,000 users worldwide and 17,000 listings consisting mainly of illicit drugs including MDMA, cocaine and methamphetamine.

    Compared to its predecessors, Archetyp imposed enhanced security expectations on its users. These included an advanced encryption program known as “Pretty Good Privacy” and a cryptocurrency called Monero. Unlike Bitcoin, which records every payment on a public ledger, Monero conceals all transaction details by default, which makes them nearly impossible to trace.

    Despite the fact Archetyp had clearly raised the bar on security on the dark web, Operation Deep Sentinel – a collaborative effort between law enforcement agencies in six countries supported by Europol and Eurojust – took down the market. The front page has now been replaced by a banner.

    While these publicised take-downs feel effective, evidence has shown such interventions only have short-term impacts and the dark web ecosystem will quickly adapt.

    A persistent trade

    These shutdowns aren’t new. Silk Road, AlphaBay, WallStreet and Monopoly Market are all familiar names in the digital graveyard of the dark web. Before these dark web marketplaces were shut down, they sold a range of illegal products, from drugs to firearms.

    Yet still, the trade persists. New markets emerge and old users return. In some cases, established sellers on closed-down markets are welcomed onto new markets as digital “refugees” and have joining fees waived.

    What current policing strategies neglect is that dark web markets are not limited to the storefronts that are the popular target of crackdowns. These are communities stretched across dark and surface web forums, which develop shared tutorials and help one another adapt to any new changes. These closures bind users together and foster a shared resilience and collective experience in navigating these environments.

    Law enforcement shutdowns are also only one type of disruption that dark web communities face. Dark web market users routinely face voluntary closures (the gradual retirement of a market), exit scams (sudden closures of markets where any money in escrow is taken), or even scheduled maintenance of these markets.

    Ultimately, this disruption to accessibility is not a unique event. In fact, it is routine for individuals participating in these dark web communities, par for the course of engaging in the markets.

    This ability to thrive through disruption reflects how dark web market users have become experts at adapting to risks, managing disruptions and rebuilding quickly.

    Dark web markets are accessed via the highly private and secure Tor Browser.
    Daniel Constante/Shutterstock

    Missing the wider landscape of digital harms

    The other emerging issue is that current policing efforts treat dark web markets as the core threat, which risks missing the wider landscape of digital harms. Illicit drug sales, for example, are promoted on social media, where platform features such as recommendation systems afford new channels for illicit drug supply.

    Beyond drugs, there are now ever-growing examples of generative AI being used for sexual deepfakes across schools and even of public figures, including the recent case of NRL presenter Tiffany Salmond.

    This is all alongside the countless cases of celebrities and social media influencers caught up in crypto pump-and-dump schemes, where hype is used to artificially inflate the price of a token before the creators sell off their holdings and leave investors with worthless tokens.

    This shows that while the dark web gets all the attention, it’s far from the internet’s biggest problem.

    Archetyp’s takedown might make headlines, but it won’t stop the trade of illicit drugs on the dark web. It should force us to think about where harm is really happening online and whether current strategies are looking in the wrong direction.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Archetyp was one of the dark web’s biggest drug markets. A global sting has shut it down – https://theconversation.com/archetyp-was-one-of-the-dark-webs-biggest-drug-markets-a-global-sting-has-shut-it-down-259441

    MIL OSI – Global Reports

  • MIL-Evening Report: Nearly half of Kiwis oppose automatic citizenship for Cook Islands, says poll

    By Caleb Fotheringham, RNZ Pacific journalist

    A new poll by the New Zealand Taxpayers’ Union shows that almost half of respondents oppose the Cook Islands having automatic New Zealand citizenship.

    Thirty percent of the 1000-person sample supported Cook Islanders retaining citizenship, 46 percent were opposed and 24 percent were unsure.

    The question asked:

    • The Cook Islands government is pursuing closer strategic ties with China, ignoring New Zealand’s wishes and not consulting with the New Zealand government. Given this, should the Cook Islands continue to enjoy automatic access to New Zealand passports, citizenship, health care and education when its government pursues a foreign policy against the wishes of the New Zealand government?
    • READ MORE: Other Cook Islands reports

    Taxpayers’ Union head of communications Tory Relf said the framing of the question was “fair”.

    “If the Cook Islands wants to continue enjoying a close relationship with New Zealand, then, of course, we will support that,” he said.

    “However, if they are looking in a different direction, then I think it is entirely fair that taxpayers can have a right to say whether they want their money sent there or not.”

    But New Zealand Labour Party deputy leader Carmel Sepuloni said it was a “leading question”.

    ‘Dead end’ assumption
    “It asserts or assumes that we have hit a dead end here and that we cannot resolve the relationship issues that have unfolded between New Zealand and the Cook Islands,” Sepuloni said.

    “We want a resolution. We do not want to assume or assert that it is all done and dusted and the relationship is broken.”

    The two nations have been in free association since 1965.

    Relf said that adding historical context about the two countries’ relationship would make it a different question.

    “We were polling on the Cook Islands’ current policy; asking about historic ties would introduce an emotive element that would influence the response.”

    New Zealand has paused nearly $20 million in development assistance to the realm nation.

    Foreign Minister Winston Peters said the decision was made because the Cook Islands failed to adequately inform his government about several agreements signed with Beijing in February.

    ‘An extreme response’
    Sepuloni, who is also Labour’s Pacific Peoples spokesperson, said her party agreed with the government that the Cook Islands had acted outside of the free association agreement.

    “[The aid pause is] an extreme response, however, in saying that we don’t have all of the information in front of us that the government have. I’m very mindful that in terms of pausing or stopping aid, the scenarios where I can recall that happening are scenarios like when Fiji was having their coup.”

    In response to questions from Cook Islands News, Cook Islands Prime Minister Mark Brown said that, while he acknowledged the concerns raised in the recent poll, he believed it was important to place the discussion within the full context of Cook Islands’ longstanding and unique relationship with New Zealand.

    “The Cook Islands and New Zealand share a deep, enduring constitutional bond underpinned by shared history, family ties, and mutual responsibility,” Brown told the Rarotonga-based newspaper.

    “Cook Islanders are New Zealand citizens not by privilege, but by right. A right rooted in decades of shared sacrifice, contribution, and identity.

    “More than 100,000 Cook Islanders live in New Zealand, contributing to its economy, culture, and communities. In return, our people have always looked to New Zealand not just as a partner but as family.”

    This article is republished under a community partnership agreement with RNZ.

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: Melanesian Spearhead Group leaders discuss Middle East conflict before ceasefire

    RNZ Pacific

    Papua New Guinea Prime Minister James Marape says the Middle East conflict was one of the discussions of the Melanesian Spearhead Group (MSG) in Suva this week — and Pacific leaders “took note of what is happening”.

    The Post-Courier reports Marape as saying the “12 Day War” between Israel and Iran relied on high technology, with missiles launched from great distances.

    “In the context of MSG, the leaders want peace always. And the Pacific remains friends to all, enemies to none,” he said.

    He said an effect on PNG would be the inflation in prices of oil and gas.

    Yesterday morning, US President Donald Trump declared a ceasefire had been agreed between Israel and Iran, and so far it has been holding in spite of tensions.

    Australia had stepped in to help Papua New Guinea diplomats and citizens caught in the Middle East.

    Foreign Affairs Minister Justin Tkatchenko confirmed last week that a group was to be evacuated through Jordan.

    There had been six diplomats in lockdown at the PNG embassy in Jerusalem awaiting extraction.

    Meanwhile, a repatriation flight for Australians stuck in Israel had been cancelled.

    ABC News reported that it was the second day repatriation plans were scrapped at the last minute because of rocket fire. A bus meant to take people across the border into Jordan was cancelled the previous day.

    This article is republished under a community partnership agreement with RNZ.

    MIL OSI AnalysisEveningReport.nz

  • MIL-OSI Global: A chance discovery of a 350 million-year-old fossil reveals a new type of ray-finned fish

    Source: The Conversation – Canada – By Conrad Daniel Mackenzie Wilson, PhD candidate in Earth Sciences, Carleton University

    An artist’s rendition of the newly discovered fish, _Sphyragnathus tyche_. (C. Wilson), CC BY

    In 2015, two members of the Blue Beach Fossil Museum in Nova Scotia found a long, curved fossil jaw, bristling with teeth. Sonja Wood, the museum’s owner, and Chris Mansky, the museum’s curator, found the fossil in a creek after Wood had a hunch.

    The fossil they found belonged to a fish that had died 350 million years ago, its bony husk spanning nearly a metre on the lake bed. The large fish had lived in waters thick with rival fish, including giants several times its size. It had hooked teeth at the tip of its long jaw that it would use to trap elusive prey and fangs at the back to pierce it and break it down to eat.

    For the last eight years, I have been part of a team under the lead of paleontologist Jason Anderson, who has spent decades researching the Blue Beach area of Nova Scotia, northwest of Halifax, in collaboration with Mansky and other colleagues. Much of this work has been on the tetrapods — the group that includes the first vertebrates to move to land and all their descendants — but my research focuses on what Blue Beach fossils can tell us about how the modern vertebrate world formed.

    Blue Beach Fossil Museum curator Chris Mansky below the fossil cliffs.
    (C. Wilson), CC BY

    Birth of the modern vertebrate world

    The modern vertebrate world is defined by the dominance of three groups: the cartilaginous fishes or chondrichthyans (including sharks, rays and chimaeras), the lobe-finned fishes or sarcopterygians (including tetrapods and rare lungfishes and coelacanths), and the ray-finned fishes or actinopterygians (including everything from sturgeon to tuna). Only a few jawless fishes round out the picture.

    This basic grouping has remained remarkably consistent — at least for the last 350 million years.

    Before then, the vertebrate world was a lot more crowded. In the ancient vertebrate world, during the Silurian Period (443.7-419.2 MA) for example, the ancestors of modern vertebrates swam alongside spiny pseudo-sharks (acanthodians), fishy sarcopterygians, placoderms and jawless fishes with bony shells.

    Armoured jawless fishes had dwindled by the Late Devonian Period (419.2-358.9 MA), but the rest were still diverse. Actinopterygians were still restricted to a few species with similar body shapes.

    By the immediately succeeding early Carboniferous times, everything had changed. The placoderms were gone, the number of species of fishy sarcopterygians and acanthodians had cratered, and actinopterygians and chondrichthyans were flourishing in their place.

    The modern vertebrate world was born.

    A shortnose chimaera, belonging to the chondrichthyan group of vertebrates.
    (Shutterstock)

    A sea change

    Blue Beach has helped build our understanding of how this happened. Studies describing its tetrapods and actinopterygians have shown the persistence of Devonian-style forms in the Carboniferous Period.

    Whereas the abrupt end-Devonian decline of the placoderms, acanthodians and fishy sarcopterygians can be explained by a mass extinction, it now appears that multiple types of actinopterygians and tetrapods survived to be preserved at Blue Beach. This makes a big difference to the overall story: Devonian-style tetrapods and actinopterygians survive and contribute to the evolution of these groups into the Carboniferous Period.

    But significant questions remain for paleontologists. One point of debate revolves around how actinopterygians diversified as the modern vertebrate world was born — whether they explored new ways of feeding or swimming first.

    Comparing the jawbones of Sphyragnathus, Austelliscus and Tegeolepis.
    (C. Wilson), CC BY

    The Blue Beach fossil was an actinopterygian, and we wondered what it could tell us about this issue. Comparison was difficult. Two actinopterygians with long jaws and large fangs were known from the preceding Devonian Period (Austelliscus ferox and Tegeolepis clarki), but the newly found jaw differed in its more extreme curvature and in the arrangement of its teeth: its largest fangs are at the back of the jaw, whereas the largest fangs of Austelliscus and Tegeolepis are at the front.

    These differences were significant enough that we created a new genus and species: Sphyragnathus tyche. And, in view of the debate on actinopterygian diversification, we made a prediction: that the differences in anatomy between Sphyragnathus and Devonian actinopterygians represented different adaptations for feeding.

    Front fangs

    To test this prediction, we compared Sphyragnathus, Austelliscus and Tegeolepis to living actinopterygians. In modern actinopterygians, this difference in anatomy reflects a difference in function: front-fanged fish capture prey with their front teeth and grip it with their back teeth, while back-fanged fish use their back teeth to pierce prey.

    Since we couldn’t observe the fossil fish in action, we analyzed the stress their teeth would experience when force was applied. The back teeth of Sphyragnathus handled force with low stress, making them well suited to piercing prey, while the back teeth of Austelliscus and Tegeolepis converted even low forces into significantly higher stress, making them better suited to gripping.

    We concluded that Sphyragnathus was the earliest actinopterygian adapted for breaking down prey by piercing, which also matches the broader predictions of the feeding-first hypothesis.

    Substantial work remains — only the jaw of Sphyragnathus is preserved, so the “locomotion-first” hypothesis remains untested. But this is the challenge and promise of paleontology: gather enough tantalizing glimpses into the past and you can begin to unfold a history.

    As for the actinopterygians, current research indicates they survived and diversified through Devonian times and took on shifting roles during the birth of the modern vertebrate world — at least, that is the picture until more fossils are found to confirm or revise it.

    Conrad Daniel Mackenzie Wilson receives funding from the Natural Sciences and Engineering Research Council of Canada, the Ontario Student Assistance Program, and the Society of Vertebrate Paleontology.

    ref. A chance discovery of a 350 million-year-old fossil reveals a new type of ray-finned fish – https://theconversation.com/a-chance-discovery-of-a-350-million-year-old-fossil-reveals-a-new-type-of-ray-finned-fish-254246

    MIL OSI – Global Reports

  • MIL-OSI Global: Ceasefires like the one between Iran and Israel often fail – but an agreement with specific conditions is more likely to hold

    Source: The Conversation – USA – By Donald Heflin, Executive Director of the Edward R. Murrow Center and Senior Fellow of Diplomatic Practice, The Fletcher School, Tufts University

    President Donald Trump speaks to reporters outside the White House on June 24, 2025, in Washington, less than 12 hours after announcing a ceasefire between Israel and Iran. Chip Somodevilla/Getty Images

    Within hours of President Donald Trump unexpectedly announcing an upcoming ceasefire between Israel and Iran on June 23, 2025, both countries launched airstrikes against the other.

    “We basically have two countries that have been fighting so long and so hard that they don’t know what the f–k they’re doing,” an angry and frustrated Trump told reporters outside the White House on June 24.

    While Iran and Israel have tentatively agreed to the truce – and Trump reiterated on June 24 that the “ceasefire is in effect” – it is not clear whether this deal can hold. Some research shows that an estimated 80% of ceasefire deals worldwide fail.

    Amy Lieberman, a politics and society editor at The Conversation U.S., spoke with former Ambassador Donald Heflin, an American career diplomat who serves as the executive director of the Edward R. Murrow Center at the Fletcher School, Tufts University, to understand how ceasefires typically work – and how the Israel-Iran deal stacks up against other agreements to end wars.

    An excavator removes debris from a residential building that was destroyed in Israel’s June 13, 2025, airstrike on Tehran, Iran.
    Majid Saeedi/Getty Images

    How do ceasefire deals typically happen?

    There are classes taught on how to negotiate ceasefires, but in practice each situation is ad hoc.

    For example, in one scenario, one of the warring parties wants a ceasefire because it has decided the conflict isn’t going well. The second party might not want a ceasefire, but recognizes that it is getting tired or that the risks are too high, and agrees to work something out.

    The next scenario, which leads to more success, is when both parties want a ceasefire. They decide that the loss of life and money has gone too far for both sides. One of the parties approaches the other through intermediaries to say it wants a ceasefire, and the other warring party agrees.

    In a third situation – which is what we are seeing with the Iran-Israel deal – the outside world imposes a ceasefire. Trump likely told both Israel and Iran: “Look, it’s enough. This is too dangerous for the rest of the world. We don’t care what you think. Time for a ceasefire.”

    The U.S. has done this in the Middle East before, like after the Yom Kippur War in 1973 between Israel and a coalition of Arab countries led by Egypt and Syria. Israel was achieving big military victories, but the risk was pretty great for the world. The U.S. came in and said, “That’s enough, stop it now.” And it worked.

    Does the US bring the warring parties to a table in this kind of situation, or simply pressure the countries to stop fighting?

    It is more of the U.S. saying, “We are done.” When the U.S. does something like this, it is often going to have backup from the European Union and other countries like Qatar, saying, “The Americans are right. It is time for a ceasefire.”

    It appears that this Israel-Iran deal does not have specific conditions attached to it. Is that typical of a ceasefire deal?

    This deal doesn’t seem to have any specific details attached to it. Ceasefires work better when they have that. Lasting ceasefires need to address the concerns of the warring parties and give each side some of what it wants.

    For instance, in the Ukraine and Russia war, we have not seen either one of those countries push for a ceasefire. Part of the problem is Crimea and eastern Ukraine, sections of land in Ukraine that Russia has annexed and claims as its own. Russia would be happy with a deal that puts it in charge of Crimea and eastern Ukraine, but Ukraine won’t agree to that. The question of who controls specific areas of land has to be addressed in this conflict; otherwise, the ceasefire isn’t going to last.

    Search and rescue efforts continue in a building in Beersheba, Israel, hit by a ballistic missile fired from Iran shortly before the ceasefire announced by U.S. President Donald Trump came into effect on June 24, 2025.
    Mostafa Alkharouf/Anadolu via Getty Images

    Who is responsible for ensuring that both sides uphold a ceasefire?

    Security guarantees are an important part of negotiating and maintaining long-term ceasefires. Big countries like the U.S. could say that if a warring party violates a ceasefire agreement, they are going to punish them.

    In the 1990s, the U.S. and Europe assured Ukraine that if it gave up its nuclear arsenal, they would defend it if Russia ever invaded. Russia has invaded Ukraine twice since then, in 2014 and 2022. The U.S. gave a more substantial response after the 2022 invasion, sending weapons and other war materials to Ukraine, but there have been no real consequences for Russia.

    That has created a problem for ceasefires in the future, because the U.S. didn’t deliver on its past security guarantees.

    The further away you get from Europe, the less interested the West is in wars. But in those kinds of disputes, United Nations and other international peacekeeping troops can be sent in. Sometimes that works brilliantly, as with the Multinational Force and Observers, the international peacekeeping force stationed between Israel and Egypt that has helped maintain peace between those countries. But you can copy it to another place and it just doesn’t work as well.

    How does this ceasefire fit within the history of other ceasefires?

    It’s too early to tell. What matters is how the details get fleshed out.

    Ideally, you can get representatives of the Israeli and Iranian governments to sit around a conference table to reach a detailed agreement. The Israelis might say, “We have got to have some kind of assurances that Iran is not going to use a nuclear weapon.” And the Iranians could say, “Assassinations of our military generals and scientists have got to stop.” That kind of conversation and agreement is what is missing, thus far, in this process.

    Why is it so common for ceasefire deals to fail?

    Some ceasefire deals don’t get to the underlying conditions of what really caused the problem and what made people start shooting this time around. If you don’t get to the core issues of a conflict, you are putting a Band-Aid on the situation. Putting a Band-Aid on someone when they are bleeding is a good move, but you ultimately might need more than that to stop the bleeding.

    The outside world might be pretty happy with a ceasefire deal that seems to stop the fighting, but if the details are not ironed out, the experts would say, “This isn’t going to last.”

    Donald Heflin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Ceasefires like the one between Iran and Israel often fail – but an agreement with specific conditions is more likely to hold – https://theconversation.com/ceasefires-like-the-one-between-iran-and-israel-often-fail-but-an-agreement-with-specific-conditions-is-more-likely-to-hold-259739

    MIL OSI – Global Reports