Imagine itching, burning, swelling, or even struggling to breathe just moments after sex. For a small but growing number of women, that’s not an awkward anecdote – it’s a medical condition. It’s called seminal plasma hypersensitivity (SPH) – an allergy to semen.
This rare but underdiagnosed allergy isn't triggered by sperm cells, but by proteins in the seminal plasma – the fluid that carries sperm. First documented in 1967, when a woman was hospitalised after a “violent allergic reaction” to sex, SPH is now recognised as a type 1 hypersensitivity – the same category as hay fever, peanut allergy and cat-dander allergy.
Symptoms range from mild to severe. Some women experience local reactions: burning, itching, redness and swelling of the vulva or vagina. Others develop full-body symptoms: hives, wheezing, dizziness, runny nose and even anaphylaxis, a potentially life-threatening immune response.
Until 1997, SPH was thought to affect fewer than 100 women globally. But a study led by allergist Jonathan Bernstein found that among women reporting postcoital symptoms, nearly 12% could be classified as having probable SPH.
I conducted a small, unpublished survey in 2013 and found a similar 12% rate. The true figure may be higher still. Many cases go unreported, misdiagnosed, or dismissed as STIs, yeast infections, or general “sensitivity”. One revealing clue: symptoms disappear when condoms are used.
A 2024 study reinforced this finding, suggesting that SPH is both more common and more commonly misdiagnosed than previously believed.
The problem isn’t the sperm
The main allergen appears to be prostate-specific antigen (PSA): a protein found in all seminal plasma, not just that of a particular partner. In other words, women can develop a reaction to any man’s semen, not just their regular partner’s.
There’s also evidence of cross-reactivity. For example, Can f 5, a protein found in dog dander, is structurally similar to human PSA. So women allergic to dogs may find themselves reacting to semen too. In one unusual case, a woman with a Brazil nut allergy broke out in hives after sex, probably due to trace nut proteins in her partner’s semen.
Diagnosis begins with a detailed sexual and medical history, often followed by skin prick testing with the partner’s semen or blood tests for PSA-specific antibodies (IgE).
In my own research involving symptomatic women, we demonstrated that testing with washed spermatozoa, free from seminal plasma, can help confirm that the allergic trigger is not the sperm cells themselves, but proteins in the seminal fluid.
And it’s not just women. It’s possible some men may be allergic to their own semen.
This condition, known as post-orgasmic illness syndrome (POIS), causes flu-like symptoms, such as fatigue, brain fog and muscle aches, immediately after ejaculation. It’s believed to be an autoimmune or allergic reaction. Diagnosis is tricky, but skin testing with a man’s own semen can yield a positive reaction.
What about fertility?
Seminal plasma hypersensitivity doesn’t cause infertility directly, but it can complicate conception. Avoiding the allergen – usually the most effective treatment for allergies – isn’t feasible for couples trying to conceive.
Treatments include prophylactic antihistamines (taken in advance of anticipated exposure, before symptoms appear, to prevent or reduce the severity of a reaction), anti-inflammatories and desensitisation using diluted seminal plasma. In more severe cases, couples may choose IVF with washed sperm, bypassing the allergic trigger altogether.
It’s important to note: SPH is not a form of infertility. Many women with SPH have conceived successfully – some naturally, others with medical support.
So why don’t more people know about this?
Because sex-related symptoms often go unspoken. Embarrassment, stigma and a lack of awareness among doctors mean that many women suffer in silence. In Bernstein’s 1997 study, almost half of the women who had symptoms after sex had never been checked for SPH, and many had spent years being misdiagnosed and getting the wrong treatment.
If sex routinely leaves you itchy, sore or unwell – and condoms help – you might be allergic to semen.
It’s time to bring this hidden condition out of the shadows and into the consultation room.
Michael Carroll does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Following a particular diet or exercising a great deal are common and even encouraged in our health and image-conscious culture. With increased awareness of food allergies and other dietary requirements, it’s also not uncommon for someone to restrict or eliminate certain foods.
But these behaviours may also be the sign of an unhealthy relationship with food. You can have a problematic pattern of eating without being diagnosed with an eating disorder.
So, where’s the line? What is disordered eating, and what is an eating disorder?
What is disordered eating?
Disordered eating describes negative attitudes and behaviours towards food and eating that can lead to a disturbed eating pattern.
It can involve:
dieting
skipping meals
avoiding certain food groups
binge eating
misusing laxatives and weight-loss medications
inducing vomiting (sometimes known as purging)
exercising compulsively.
Disordered eating is the term used when these behaviours are not frequent and/or severe enough to meet an eating disorder diagnosis.
Not everyone who engages in these behaviours will develop an eating disorder. But disordered eating – particularly dieting – usually precedes an eating disorder.
What is an eating disorder?
Eating disorders are complex psychiatric illnesses that can negatively affect a person’s body, mind and social life. They’re characterised by persistent disturbances in how someone thinks, feels and behaves around eating and their bodies.
To make a diagnosis, a qualified health professional will use a combination of standardised questionnaires, as well as more general questioning. These will determine how frequent and severe the behaviours are, and how they affect day-to-day functioning.
How common are eating disorders and disordered eating?
The answer can vary quite radically depending on the study and how it defines disordered behaviours and attitudes.
An estimated 8.4% of women and 2.2% of men will develop an eating disorder at some point in their lives. This is most common during adolescence.
Disordered eating is also particularly common in young people, with 30% of girls and 17% of boys aged 6–18 years reporting engaging in these behaviours.
Although the research is still emerging, it appears disordered eating and eating disorders are even more common in gender diverse people.
Can we prevent eating disorders?
There is some evidence eating disorder prevention programs that target risk factors – such as dieting and concerns about shape and weight – can be effective to some extent in the short term.
The issue is most of these studies last only a few months. So we can’t determine whether the people involved went on to develop an eating disorder in the longer term.
In addition, most studies have involved girls or women in late high school and university. By this age, eating disorders have usually already emerged. So, this research cannot tell us as much about eating disorder prevention and it also neglects the wide range of people at risk of eating disorders.
Is orthorexia an eating disorder?
In defining the line between eating disorders and disordered eating, orthorexia nervosa is a contentious issue.
The name literally means “proper appetite” and involves a pathological obsession with proper nutrition, characterised by a restrictive diet and rigidly avoiding foods believed to be “unhealthy” or “impure”.
These disordered eating behaviours need to be taken seriously as they can lead to malnourishment, loss of relationships, and overall poor quality of life.
However, orthorexia nervosa is not an official eating disorder in any diagnostic manual.
Additionally, with the popularity of special diets (such as keto or paleo), time-restricted eating, and dietary requirements (for example, gluten-free) it can sometimes be hard to decipher when concerns about diet have become disordered, or may even be an eating disorder.
For example, around 6% of people have a food allergy. Emerging evidence suggests they are also more likely to have restrictive types of eating disorders, such as anorexia nervosa and avoidant/restrictive food intake disorder.
However, following a special diet such as veganism, or having a food allergy, does not automatically lead to disordered eating or an eating disorder.
It is important to recognise people’s different motivations for eating or avoiding certain foods. For example, a vegan may restrict certain food groups due to animal rights concerns, rather than disordered eating symptoms.
What to look out for
If you’re concerned about your own relationship with food or that of a loved one, here are some signs to look out for:
preoccupation with food and food preparation
cutting out food groups or skipping meals entirely
obsession with body weight or shape
large fluctuations in weight
compulsive exercise
mood changes and social withdrawal.
It’s always best to seek help early. But it is never too late to seek help.
In Australia, if you are experiencing difficulties in your relationships with food and your body, you can contact the Butterfly Foundation’s national helpline on 1800 33 4673 (or via their online chat).
For parents concerned their child might be developing concerning relationships with food, weight and body image, Feed Your Instinct highlights common warning signs, provides useful information about help seeking and can generate a personalised report to take to a health professional.
Gemma Sharp receives funding from an NHMRC Investigator Grant. She is a Professor and the Founding Director and Member of the Consortium for Research in Eating Disorders, a registered charity.
The outcome of last year’s general election left an important question hanging in the air. Could the UK’s traditional system of two-party politics continue to survive?
True, power did change hands in a familiar fashion. A majority Conservative government was replaced by a majority Labour one. Indeed, the new administration won an overall majority of no less than 174.
However, the new government was elected with a lower share of the vote than that secured by any previous majority government. At the same time, the Conservatives won by far their lowest share of the vote ever. For the first time since 1922, when Labour replaced the then Liberal party as the Conservatives’ principal competitor, Labour and the Conservatives together won fewer than three in five of all votes cast.
Over the past 12 months, the foundations of Britain’s two-party system have come to look even shakier. Nigel Farage’s Reform UK party tops the polls. Only just over two in five of those who express a party preference say they would vote Labour or Conservative – a record low.
New analysis of last year’s election published by the National Centre for Social Research as part of the British Social Attitudes report confirms that Britain’s two-party system is in poor health.
The traditional anchor of Conservative and Labour support – social class – has been cast adrift. The ideological underpinning of the battle between them, the division between left and right, has been replaced by a division between social conservatives and social liberals. This second division draws people towards Reform and the Greens. At the same time, low levels of trust and confidence in how they are being governed are also encouraging voters to back these two challenger parties.
From class divide to identity politics
Historically, middle-class voters voted Conservative, while their working-class counterparts were more likely to support Labour. In decline ever since the advent of New Labour, that pattern disappeared entirely in 2019 in the wake of a Brexit debate that drew pro-Leave working-class voters towards the Conservatives and pro-Remain middle-class supporters towards Labour.
Although Brexit was no longer in the news, the traditional link between social class and voting Conservative or Labour did not reappear in 2024. Labour won the support of just 30% of those in routine and semi-routine occupations, compared with 42% of those in professional and managerial jobs. At 17% and 21% respectively, the equivalent figures for Conservative support are also little different from each other.
As in the EU referendum, what now shapes how people vote is their age and education, not the job they do. Younger voters and graduates are more likely to vote Labour, while older people and those with less in the way of educational qualifications are more inclined to vote Conservative.
The problem is that the two parties now face competition for these demographic groups from the Greens and Reform. Last year the Greens won as much as 21% of the vote among under-25s. Reform secured 25% among those who do not have an A-level or its equivalent, nearly matching the Tories.
Equally, Brexit was not a divide between “left” and “right” – that is, between those who think the government should do more to reduce inequality and those who are more concerned about growing the whole economic pie. It was a battle between social liberals and social conservatives – between those who value living in a diverse society and those who believe that too much diversity undermines social cohesion.
That second divide has now come to matter as much as the left-right divide in shaping how people vote – and thereby helps draw support away from the Conservatives and Labour.
The Conservatives are more popular among social conservatives, but so is Reform. Indeed, the competition between the two parties for these voters has intensified since the election. By this spring, Reform, on 37%, was winning the battle for their support, with the Conservatives on only 26%. Equally, although Labour are relatively popular among social liberals, both the Greens and the Liberal Democrats find them relatively fertile territory too. Three in ten (31%) social liberals backed the Liberal Democrats or the Greens last year, a figure that now stands at 37%.
Meanwhile, trust and confidence in government remain at a low ebb. For example, nearly half (46%) say they “almost never” trust governments of any party to put the interests of the country above those of their own parties. This perception is seemingly accompanied by a reluctance to vote for the parties of government too. Nearly one in four (24%) of those who almost never trust governments backed Reform last year, while one in ten (10%) supported the Greens.
This, of course, is not the first time that Britain’s two-party system has been under challenge. In the early 1980s the Liberal/SDP Alliance threatened to “break the mould of British politics”. In spring 2019, at the height of the Brexit impasse, the Brexit Party and the Liberal Democrats appeared poised to upset the traditional order. This time, however, the challenge to the Conservative/Labour duopoly seems more profound.
John Curtice currently receives funding from the Economic and Social Research Council.
The more we learn about orcas, the more remarkable they are. These giant dolphins are the ocean’s true apex predator, preying on great white sharks and other lesser predators.
They’re very intelligent and highly social. Their clans are matrilineal, centred around an older matriarch who teaches her clan her own vocalisations. Not only this, but the species is one of only six known to experience menopause, pointing to the social importance of older females after their reproductive years. Different orca groups have fashion trends, such as one pod that returned to wearing salmon as a hat, decades after it went out of vogue.
But for all their intelligence, one thing has been less clear. Can orcas actually make tools, as humans, chimps and other primates do? In research out today by United States and British researchers, we have an answer: yes.
Using drones, researchers watched as resident pods in the Salish Sea broke off the ends of bull kelp stalks and rolled them between their bodies. This, the researchers say, is likely to be a grooming practice – the first tool-assisted grooming seen in marine animals.
This video shows whales using kelp tools in what appears to be social grooming behaviour. Credit: Center for Whale Research.
Self kelp: why would orcas make tools?
Tool use and tool making have been well documented in land-based species. But it’s less common among marine species. This could be partly due to the challenge of observing them.
This field of research expands what we know these animals are capable of. Not only are orcas spending time making kelp into a grooming tool, but they’re doing it socially – two orcas have to work together to rub the kelp against their bodies.
To make the tool, the orcas use their teeth to grab a stalk of kelp by its “stipe” – the long, narrow part near the seaweed’s holdfast, where it tethers to the rock. They use their teeth, motion of their body and the drag of the kelp to break off a piece of this narrow stipe.
Next, they approach a social partner, flip the length of the kelp onto their rostrum (their snout-like projection) and press their head and the kelp against their partner’s flank. The two orcas use their fins and flukes to trap the kelp while rolling it between their bodies. During this contact, the orcas roll and twist their bodies – often in an exaggerated S-shaped posture. A similar posture has been seen among orcas in other groups, who adopt it when rubbing themselves on sand or pebbles.
Why do it? The researchers suggest this practice may be social skin-maintenance. Bottlenose dolphin mothers are known to remove dead skin from their calves using their flippers, while tool-assisted grooming of a partner has been seen in primates, but infrequently and usually in captivity.
Orcas across different social groups, ages and genders were seen doing this. But they were more likely to groom close relatives or those of similar age. There was some evidence suggesting whales with skin conditions were more likely to do the kelp-based grooming.
Humpback whales are known to wear kelp in a practice known as “kelping”. But this study covers a different behaviour, which the authors dub “allokelping” (kelping others).
A surprise from well-studied pods
Interestingly, this new discovery comes from some of the most well-studied and famous orcas in the world – a group known as the southern resident killer whales. If you were a child of the 90s, you would have seen them in the opening scene of Free Willy, the movie which set me on my path to study cetaceans.
These orcas consist of three pods, known as J, K and L pods. All three live in the Salish Sea in the Pacific Northwest, on the border of Canada and the US.
Researchers fly drones over these resident pods most days and have access to almost 50 years of observations. But this is the first time the tool-making behaviour has been seen.
Unfortunately, these pods are critically endangered. They’re threatened by sound pollution from shipping, polluted water, vessel strike and loss of their main food source – Chinook salmon.
A pod of killer whales off Vancouver, Canada. Vanessa Pirotta, CC BY-NC-ND
Orcas are smart
In one sense, the findings are not a surprise, given the intelligence of these animals.
In the Antarctic, orcas catch seals by making waves to wash them off ice floes. Before European colonisation, orcas and First Nations groups near Eden hunted whales together.
While orcas are often called “killer whales”, they’re not whales. They’re the biggest species of dolphin, growing up to nine metres long. They’re found across all the world’s oceans.
Within the species, there’s a surprising amount of diversity. Scientists group orcas into different ecotypes – populations adapted to local conditions. Different orca groups can differ substantially, from size to prey to habits. For instance, transient orcas cover huge distances seeking larger prey, while resident orcas stick close to areas with lots of fish.
Not just a fluke
Because orcas differ so much, we don’t know whether other pods have discovered or taught these behaviours.
But what this research does point to is that tool making may be more common among marine mammals than we expected. No hands – no problem.
Vanessa Pirotta does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Imagine watching your favourite nature documentary. The predator lunges rapidly from its hiding place, jaws wide open, and the prey … suddenly goes limp. It looks dead.
For some animals, this freeze response – called “tonic immobility” – can be a lifesaver. Possums famously “play dead” to avoid predators. So do rabbits, lizards, snakes, and even some insects.
But what happens when a shark does it?
In our recent study, we explored this strange behaviour in sharks, rays and their relatives. In this group, tonic immobility is triggered when the animal is turned upside down – it stops moving, its muscles relax, and it enters a trance-like state. Some scientists even use tonic immobility as a technique to safely handle certain shark species.
But why does it happen? And does it actually help these marine predators survive?
The mystery of the ‘frozen shark’
Despite being well documented across the animal kingdom, the reasons behind tonic immobility remain murky – especially in the ocean. It is generally thought of as an anti-predator defence. But there is no evidence to support this idea in sharks, and alternative hypotheses exist.
We tested 13 species of sharks, rays, and a chimaera — a shark relative commonly referred to as a ghost shark — to see whether they entered tonic immobility when gently turned upside down underwater.
Seven species did, but six did not. We then analysed these findings using evolutionary tools to map the behaviour across hundreds of millions of years of shark family history.
So, why do some sharks freeze?
Tonic immobility is triggered in sharks when they are turned upside down. Rachel Moore
Three main hypotheses
There are three main hypotheses to explain tonic immobility in sharks:
Anti-predator strategy – “playing dead” to avoid being eaten
Reproductive role – some male sharks invert females during mating, so perhaps tonic immobility helps reduce struggle
Sensory overload response – a kind of shutdown during extreme stimulation.
Our results don’t support any of these explanations.
There’s no strong evidence sharks benefit from freezing when attacked. In fact, modern predators such as orcas can use this response against sharks by flipping them over to immobilise them and then remove their nutrient-rich livers – a deadly exploit.
The reproductive hypothesis also falls short. Tonic immobility doesn’t differ between sexes, and remaining immobile could make females vulnerable to harmful or forced mating events.
And the sensory overload idea? Untested and unverified. So, we offer a simpler explanation. Tonic immobility in sharks is likely an evolutionary relic.
A case of evolutionary baggage
Our evolutionary analysis suggests tonic immobility is “plesiomorphic” – an ancestral trait that was likely present in ancient sharks, rays and chimaeras. But as species evolved, many lost the behaviour.
In fact, we found that tonic immobility was lost independently at least five times across different groups. Which raises the question: why?
In some environments, freezing might actually be a bad idea. Small reef sharks and bottom-dwelling rays often squeeze through tight crevices in complex coral habitats when feeding or resting. Going limp in such settings could get them stuck – or worse. That means losing this behaviour might have actually been advantageous in these lineages.
So, what does this all mean?
Rather than a clever survival tactic, tonic immobility might just be “evolutionary baggage” – a behaviour that once served a purpose, but now persists in some species simply because it doesn’t do enough harm to be selected against.
It’s a good reminder that not every trait in nature is adaptive. Some are just historical quirks.
Our work helps challenge long-held assumptions about shark behaviour, and sheds light on the hidden evolutionary stories still unfolding in the ocean’s depths. Next time you hear about a shark “playing dead”, remember – it might just be muscle memory from a very, very long time ago.
Jodie L. Rummer receives funding from the Australian Research Council. She is affiliated with the Australian Coral Reef Society, as President.
Joel Gayford receives funding from the Northcote Trust.
Shutting off the internet within an entire country is a serious action. It severely limits people’s ability to freely communicate and to find reliable information during times of conflict.
In countries that have privatised mobile and internet providers, control is often exercised through legislation or through government directives – such as age restrictions on adult content. By contrast, Iran has spent years developing the capacity to directly control its telecommunications infrastructure.
So how can a country have broad control over internet access, and could this happen anywhere in the world?
How does ‘blocking the internet’ work?
The “internet” is a broad term. It covers many types of applications, services and, of course, the websites we’re familiar with.
A nation may opt to physically disconnect the incoming internet connectivity at the point of entry to the country (imagine pulling the plug on a telephone exchange).
This allows for easy recovery of service when the government is ready, but the impact will be far-reaching. Nobody in the country, including the government itself, will be able to connect to the internet – unless the government has its own additional, covert connectivity to the rest of the world.
This is where it gets more technical. Every internet-connected endpoint – laptop, computer, mobile phone – has an IP (internet protocol) address. They’re strings of numbers; for example, 77.237.87.95 is an address assigned to one of the internet service providers in Iran.
IP addresses identify the device on the public internet. However, since strings of numbers are not easy to remember, humans use domain names to connect to services – theconversation.com is an example of a domain name.
That connection between the IP address and the domain is controlled by the domain name system or DNS. It’s possible for a government to control access to key internet services by modifying the DNS – this manipulates the connection between domain names and their underlying numeric addresses.
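To make this concrete, here is a minimal Python sketch of the resolution step that DNS-based blocking interferes with. The domain is just an example; a manipulated resolver would return a wrong answer, or no answer at all, for a blocked name.

```python
# Resolving a domain name to an IP address - the step a
# government-controlled resolver can manipulate or refuse.
import socket

try:
    ip = socket.gethostbyname("theconversation.com")
    print(f"theconversation.com resolves to {ip}")
except socket.gaierror as err:
    # A blocked or tampered DNS entry typically surfaces here
    # as a resolution failure.
    print(f"DNS resolution failed: {err}")
```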
An additional way to control the internet involves manipulating the traffic flow. IP addresses allow devices to send and receive data across networks controlled by internet service providers. In turn, they rely on the border gateway protocol (BGP) – think of it like a series of traffic signs which direct internet traffic flow, allowing data to move around the world.
Governments could force local internet service providers to remove their BGP routes from the internet. As a result, the devices they service wouldn’t be able to connect to the internet. In the same manner, the rest of the world would no longer be able to “see” into the country.
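The effect of a route withdrawal can be illustrated with a toy model. This is a conceptual Python sketch, not real BGP, and the prefixes and network names are made up for illustration.

```python
# Toy model of route announcement and withdrawal - not real BGP.
# A "global routing table" maps announced prefixes to the networks
# that originate them; a destination is reachable only while its
# route is announced.

routing_table = {
    "77.237.87.0/24": "EXAMPLE-ISP-IR",      # hypothetical Iranian ISP prefix
    "203.0.113.0/24": "EXAMPLE-ISP-ABROAD",  # hypothetical foreign network
}

def reachable(prefix: str) -> bool:
    """A prefix can receive traffic only while its route exists."""
    return prefix in routing_table

print(reachable("77.237.87.0/24"))   # True: the route is announced

# A government-ordered withdrawal removes the announcement...
routing_table.pop("77.237.87.0/24")

print(reachable("77.237.87.0/24"))   # False: the rest of the world can
                                     # no longer "see" into the network
```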
Iran’s recent shutdowns clearly show that if a government anywhere in the world wants to turn off the internet, it really can. How democratic a country is – not its technical capability – is the most significant influence on its willingness to take such action.
However, in today’s world, being disconnected from the internet will heavily impact people’s lives, jobs and the economy. It’s not an action to be taken lightly.
How can people evade internet controls?
Virtual private networks or VPNs have long been used to hide communications in countries with strict internet controls, and continue to be an effective internet access method for many people. (However, there are indications Iran has clamped down on VPN use in recent times.)
But VPNs won’t help when the internet is physically disconnected. And depending on configuration, blocked BGP routes may also prevent VPN traffic from reaching its destination.
This is where independent satellite internet services open up the most reliable alternative. Satellite internet is great for remote and rural areas where traditional internet service providers have yet to establish their cabling infrastructure – or can’t do so.
Even if traditional wired or wireless internet connections are unavailable, services such as Starlink, Viasat, Hughesnet and others can provide internet access through satellites orbiting Earth.
To use satellite internet, users rely on antenna kits supplied by providers. In Iran, Elon Musk’s Starlink was activated during the blackout, and independent reports suggest there are thousands of Starlink receivers secretly operating in the country.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
On June 21, the United States launched airstrikes on three Iranian nuclear facilities – Fordow, Natanz and Isfahan – pounding deeply buried centrifuge sites with bunker-busting bombs.
Conducted jointly with Israel, the operation took place without formal congressional authorisation, drawing sharp criticism from lawmakers that it was unconstitutional and “unlawful”.
Much of the political debate has centred on whether the US is being pulled into “another Middle East war”.
The New York Times’ Nick Kristof weighed in on the uncertainties following the US’ surprise bombing of Iran and Tehran’s retaliation.
Even US Vice President JD Vance understood the unease, stating:
People are right to be worried about foreign entanglement after the last 25 years of idiotic foreign policy.
These reactions have revived comparisons with George W. Bush’s 2003 invasion of Iraq: a Republican president launching military action on the basis of flimsy weapons of mass destruction (WMD) evidence.
Hauntingly familiar? While the surface similarity is tempting, the comparison may in fact obscure more about President Donald Trump than it reveals.
Comparisons to the Iraq War
In 2003, Bush ordered a full-scale invasion of Iraq based on flawed intelligence, claiming Iraqi dictator Saddam Hussein possessed WMDs. And while the war was extremely unpopular across the world, it did have bipartisan congressional support.
The invasion toppled Iraq’s regime in just a few weeks.
What followed was a brutal conflict and almost a decade of US occupation. The war triggered the rise of militant jihadism and a horrific sectarian conflict that reverberates today.
So far, Trump’s one-off strikes on Iran bear little resemblance to the 2003 Iraq intervention.
These were precision strikes within the context of a broader Iran-Israel war, designed to target Iran’s nuclear program.
And, so far, there appears to be little appetite for a full-scale military invasion or “boots on the ground”, and regime change seems unlikely despite some rumblings from both Trump and Israeli Prime Minister Benjamin Netanyahu.
Yet the comparison to Iraq persists, especially among audiences suspicious of repeated US military interventions in the Middle East. But poorly considered analogies carry costs.
For one, the Iraq comparison sheds little light on Trump’s foreign policy.
To better understand the recent strikes on Iran, we need to look at Trump’s broader foreign policy.
Much has been made of his “America first” mantra, a complex mix of prioritising domestic interests, questioning international agreements, and challenging traditional alliances.
Others, including Trump himself, have often touted his “no war” approach, pointing to large-scale military withdrawals from Afghanistan, Syria and Iraq, and the fact he had not started a new war.
But beyond this, Trump has increased US military spending and frequently used his office to conduct targeted strikes on adversaries – especially across the Middle East.
For example, in 2017 and 2018, Trump ordered airstrikes on a Syrian airbase and chemical weapons facilities. In both instances, he bypassed Congress and used precision air power to target weapons infrastructure without pursuing regime change.
Also, from 2017 to 2021, Trump authorised US support for the Saudi-led war in Yemen, enabling airstrikes that targeted militant cells but also led to mass civilian casualties.
Trump’s policy was the subject of intense bipartisan opposition, culminating in the first successful congressional invocation of the War Powers Resolution – though it was ultimately vetoed by Trump.
And in 2020, Trump launched a sequence of attacks on Iranian assets in Iraq. This included a drone strike that killed senior Iranian military commander Qassem Soleimani.
Again, these attacks were conducted without congressional support. The decision triggered intense bipartisan backlash and concerns about escalation without oversight.
While such attacks are not without precedent – think back to former US President Barack Obama’s intervention in Libya or Joe Biden’s targeting of terrorist assets – the scale and ferocity of Trump’s attacks in the Middle East are a far more useful framework for understanding the recent strikes on Iran than any reference to the 2003 Iraq war.
What this reveals about Trump
It is crucial to scrutinise any use of force. But while comparing the 2025 Iran strikes to Iraq in 2003 may be rhetorically powerful, it is analytically weak.
A better path is to situate these events within Trump’s broader political style.
He acts unilaterally and with near-complete impunity, disregarding traditional constraints and operating outside established norms and oversight.
This is just as true for attacks on foreign adversaries as it is for the domestic policy arena.
For example, Trump recently empowered agencies such as Immigration and Customs Enforcement (ICE) to operate with sweeping discretion in immigration enforcement, bypassing legal and judicial oversight.
Trump also uses policy as spectacle, designed to send shockwaves through the domestic or foreign arenas and project dominance to both friend and foe.
In this way, Trump’s dramatic attacks on Iran have some parallels to his unilateral imposition of tariffs on international trade. Both are abrupt, disruptive and framed as a demonstration of strength rather than a way to create a mutually beneficial solution.
Finally, Trump is more than willing to use force as an instrument of power rather than as a last resort. This is just as true for Iran as it is for the American people.
The recent deployment of US Marines to quell protests in Los Angeles reveals a similar impulse: military intervention as a first instinct in the absence of a broader strategy to foster peace.
To truly understand and respond to Trump’s Iran strikes, we need to move beyond sensationalist analogies and recognise a more dangerous reality. This is not the start of another Iraq; it’s the continuation of a presidency defined by impulsive power, unchecked force and a growing disdain for democratic constraint.
Benjamin Isakhan receives funding from the Australian Research Council and the Australian Department of Defence. The views expressed in this article do not reflect those of Government policy.
After 12 days of war, US President Donald Trump announced a ceasefire between Israel and Iran that would bring to an end the most dramatic, direct conflict between the two nations in decades.
Israel and Iran both agreed to adhere to the ceasefire, though they said they would respond with force to any breach.
If the ceasefire holds – a big if – the key question will be whether this signals the start of lasting peace, or merely a brief pause before renewed conflict.
As contemporary war studies show, peace tends to endure under one of two conditions: either the total defeat of one side, or the establishment of mutual deterrence. This means both parties refrain from aggression because the expected costs of retaliation far outweigh any potential gains.
What did each side gain?
The war has marked a turning point for Israel in its decades-long confrontation with Iran. For the first time, Israel successfully brought a prolonged battle to Iranian soil, shifting the conflict from confrontations with Iranian-backed proxy militant groups to direct strikes on Iran itself.
This was made possible largely due to Israel’s success over the past two years in weakening Iran’s regional proxy network, particularly Hezbollah in Lebanon and Shiite militias in Syria.
Over the past two weeks, Israel has inflicted significant damage on Iran’s military and scientific elite, killing several high-ranking commanders and nuclear scientists. The civilian toll was also high.
Additionally, Israel achieved a major strategic objective by pulling the United States directly into the conflict. In coordination with Israel, the US launched strikes on three of Iran’s primary nuclear facilities: Fordow, Natanz and Isfahan.
Despite these gains, Israel has not accomplished all of its stated goals. Prime Minister Benjamin Netanyahu had voiced support for regime change, urging Iranians to rise up against Supreme Leader Ali Khamenei’s government, but the senior leadership in Iran remains intact.
Although Iran was caught off-guard by Israel’s attacks — particularly as it was engaged in nuclear negotiations with the US — it responded by launching hundreds of missiles towards Israel.
Iran has demonstrated its capacity to strike back, though Israel has succeeded in destroying many of its air defence systems, some ballistic missile assets (including missile launchers) and multiple energy facilities.
Since the beginning of the assault, Iranian officials have repeatedly called for a halt to hostilities so negotiations can resume. Under such intense pressure, Iran has realised it would not benefit from a prolonged war of attrition with Israel – especially as both nations face mounting costs and the risk of depleting their military stockpiles if the war continues.
As theories of victory suggest, success in war is defined not only by the damage inflicted, but by achieving core strategic goals and weakening the enemy’s will and capacity to resist.
While Israel claims to have achieved the bulk of its objectives, the extent of the damage to Iran’s nuclear program is not fully known, nor is its capacity to continue enriching uranium.
Both sides could remain locked in a volatile standoff over Iran’s nuclear program, with the conflict potentially reigniting whenever either side perceives a strategic opportunity.
Sticking point over Iran’s nuclear program
Iran faces even greater challenges when it emerges from the war. With a heavy toll on its leadership and nuclear infrastructure, Tehran will likely prioritise rebuilding its deterrence capability.
That includes acquiring new advanced air defence systems — potentially from China — and restoring key components of its missile and nuclear programs. (Some experts say Iran has not used some of its most powerful missiles to maintain this deterrence.)
Iranian officials have claimed they safeguarded more than 400 kilograms of 60% enriched uranium before the attacks. This stockpile could theoretically be converted into nine to ten nuclear warheads if further enriched to 90%.
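As a rough, illustrative check on that figure, a standard enrichment mass balance – assuming a 5% tails assay and the commonly cited figure of roughly 25 kg of 90% enriched uranium per warhead, neither of which is specified in the reports – gives:

```latex
P = F \cdot \frac{x_F - x_T}{x_P - x_T}
  = 400\,\mathrm{kg} \times \frac{0.60 - 0.05}{0.90 - 0.05}
  \approx 259\,\mathrm{kg}\ \text{of 90\% enriched uranium},
\qquad
\frac{259\,\mathrm{kg}}{25\,\mathrm{kg/warhead}} \approx 10\ \text{warheads}
```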
Trump declared Iran’s nuclear capacity had been “totally obliterated”, whereas Rafael Grossi, the United Nations’ nuclear watchdog chief, said damage to Iran’s facilities was “very significant”.
However, analysts have argued Iran will still have a depth of technical knowledge accumulated over decades. Depending on the extent of the damage to its underground facilities, Iran could be capable of restoring and even accelerating its program in a relatively short time frame.
And the chances of reviving negotiations on Iran’s nuclear program appear slimmer than ever.
What might future deterrence look like?
The war has fundamentally reshaped how both Iran and Israel perceive deterrence — and how they plan to secure it going forward.
For Iran, the conflict has reinforced the belief that its survival is at stake. With regime change openly discussed during the war, Iran’s leaders appear more convinced than ever that true deterrence requires two key pillars: nuclear weapons capability, and deeper strategic alignment with China and Russia.
As a result, Iran is expected to move rapidly to restore and advance its nuclear program, potentially moving towards actual weaponisation — a step it had long avoided, officially.
At the same time, Tehran is likely to accelerate military and economic cooperation with Beijing and Moscow to hedge against isolation. Iranian Foreign Minister Abbas Araghchi emphasised this close engagement with Russia during a visit to Moscow this week, particularly on nuclear matters.
Israel, meanwhile, sees deterrence as requiring constant vigilance and a credible threat of overwhelming retaliation. In the absence of diplomatic breakthroughs, Israel may adopt a policy of immediate preemptive strikes on Iranian facilities or leadership figures if it detects any new escalation — particularly related to Iran’s nuclear program.
In this context, the current ceasefire already appears fragile. Without comprehensive negotiations that address the core issues — namely, Iran’s nuclear capabilities — the pause in hostilities may prove temporary.
Mutual deterrence may prevent a more protracted war for now, but the balance remains precarious and could collapse with little warning.
Ali Mamouri does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
In a decade of international security crises, Iran’s threatened withdrawal from the Nuclear Non-Proliferation Treaty (NPT) could be the most serious. Is there still time to prevent it?
A successful but vulnerable treaty
In May 2015, I attended the five-yearly review conference of the NPT. Delegates debated a draft outcome for weeks, and then, not for the first time, went home with nothing. Delegates from the US, United Kingdom and Canada blocked the final outcome to prevent words being added that would call for Israel to attend a disarmament conference.
Russia did the same in 2022 in protest at language on its illegal occupation of the Zaporizhzhia nuclear power station in Ukraine.
Now, in the latest challenge to the NPT, Israel and the US have bombed Iran’s nuclear complexes to ostensibly enforce a treaty neither one respects.
When the treaty was adopted in 1968, it allowed the five nuclear-armed states at the time – the US, Soviet Union, France, UK and China – to join if they committed not to pass weapons or material to other states, and to disarm themselves.
All other members had to pledge never to acquire nuclear weapons. Newer nuclear powers were not permitted to join unless they gave up their weapons.
Israel declined to join, as it had developed its own undeclared nuclear arsenal by the late 1960s. India, Pakistan and South Sudan have also never signed; North Korea was a member but withdrew in 2003. Only South Sudan does not have nuclear weapons today.
To make the obligations verifiable and strengthen safeguards against the diversion of nuclear material in non-nuclear weapon states, members were later encouraged to sign the IAEA Additional Protocol. This gave the International Atomic Energy Agency (IAEA) wide powers to inspect a state’s nuclear facilities and detect violations.
It was the IAEA that first blew the whistle on Iran’s concerning uranium enrichment activity in 2003. Just before Israel’s attacks this month, the organisation also reported Iran was in breach of its obligations under the NPT for the first time in two decades.
The NPT is arguably the world’s most universal, important and successful security treaty, but it is also paradoxically vulnerable.
The treaty’s underlying consensus has been damaged by the failure of the five nuclear-weapon states to disarm as required, and by the failure to prevent North Korea from developing a now formidable nuclear arsenal.
North Korea withdrew from the treaty in 2003, tested a weapon in 2006, and now may have up to 50 warheads.
Iran could be next.
How things can deteriorate from here
Iran argues Israel’s attacks have undermined the credibility of the IAEA, given Israel used the IAEA’s new report on Iran as a pretext for its strikes, taking the matter out of the hands of the UN Security Council.
For its part, the IAEA has maintained a principled position and criticised both the US and Israeli strikes.
Iran has retaliated with its own missile strikes against both Israel and a US base in Qatar. In addition, it wasted no time announcing it would withdraw from the NPT.
On June 23, an Iranian parliament committee also approved a bill that would fully suspend Iran’s cooperation with the IAEA, including allowing inspections and submitting reports to the organisation.
Iran’s envoy to the IAEA, Reza Najafi, said the US strikes:
[…] delivered a fundamental and irreparable blow to the international non-proliferation regime conclusively demonstrating that the existing NPT framework has been rendered ineffective.
Even if Israel and the US consider their bombing campaign successful, it has almost certainly renewed the Iranians’ resolve to build a weapon. The strikes may only delay an Iranian bomb by a few years.
Iran will have two paths to do so. The slower path would be to reconstitute its enrichment activity and obtain nuclear implosion designs – the basis of more compact and efficient weapons – from Russia or North Korea.
Alternatively, Russia could send Iran some of its weapons. This should be a real concern given Moscow’s cascade of withdrawals from critical arms control agreements over the last decade.
An Iranian bomb could then trigger NPT withdrawals by other regional states – especially Saudi Arabia – that would suddenly face a new threat to their security.
Why Iran might now pursue a bomb
Iran’s support for Hamas, Hezbollah and Syria’s Assad regime certainly shows it is a dangerous international actor. Iranian leaders have also long used alarming rhetoric about Israel’s destruction.
However repugnant the words, Israeli and US conservatives have misjudged Iran’s motives in seeking nuclear weapons.
Israel fears an Iranian bomb would be an existential threat to its survival, given Iran’s promises to destroy it. But this neglects the fact that Israel already possesses a potent (if undeclared) nuclear deterrent capability.
Israeli anxieties about an Iranian bomb should not be dismissed. But other analysts (myself included) see Iran’s desire for a nuclear weapons capability more as a way to establish deterrence – to prevent future military attacks from Israel and the US, and so protect the regime.
Iranians were shaken by Iraq’s invasion in 1980 and then again by the US-led removal of Iraqi dictator Saddam Hussein in 2003. This war with Israel and the US will shake them even more.
Last week, I felt that if the Israeli bombing ceased, a new diplomatic effort to bring Iran into compliance with the IAEA and persuade it to abandon its program might have a chance.
However, the US strikes may have buried that possibility for decades. And by then, the damage to the nonproliferation regime could be irreversible.
Anthony Burke received funding from the UK’s Economic and Social Research Council for a project on global nuclear governance (2014–17).
We all like to imagine we’re ageing well. Now a simple blood or saliva test promises to tell us by measuring our “biological age”. And then, as many have done, we can share how “young” we really are on social media, along with our secrets to success.
While chronological age is how long you have been alive, measures of biological age aim to indicate how old your body actually is, purporting to measure “wear and tear” at a molecular level.
The appeal of these tests is undeniable. Health-conscious consumers may see their results as reinforcing their anti-ageing efforts, or a way to show their journey to better health is paying off.
But how good are these tests? Do they actually offer useful insights? Or are they just clever marketing dressed up to look like science?
How do these tests work?
Over time, the chemical processes that allow our body to function, known as our “metabolic activity”, lead to damage and a decline in the activity of our cells, tissues and organs.
Biological age tests aim to capture some of these changes, offering a snapshot of how well, or how poorly, we are ageing on a cellular level.
Our DNA is also affected by the ageing process. In particular, chemical tags (methyl groups) attach to our DNA and affect gene expression. These changes occur in predictable ways with age and environmental exposures, in a process called methylation.
Research studies have used “epigenetic clocks”, which measure the methylation of our genes, to estimate biological age. By analysing methylation levels at specific sites in the genome from participant samples, researchers apply predictive models to estimate the cumulative wear and tear on the body.
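To illustrate the idea (this is not any provider’s actual model), here is a minimal Python sketch of how such a predictive model can be built: a penalised linear regression over methylation values at many genomic sites, fitted here on entirely synthetic data.

```python
# Illustrative "epigenetic clock": a sparse linear model mapping
# methylation levels at many CpG sites to age. Real clocks are
# trained on large human cohorts; this uses synthetic data only.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_samples, n_sites = 200, 500

ages = rng.uniform(20, 80, n_samples)
X = rng.uniform(0, 1, (n_samples, n_sites))   # synthetic methylation values
X[:, :10] += 0.005 * ages[:, None]            # a few sites drift with age

clock = ElasticNet(alpha=0.01)                # penalised linear "clock"
clock.fit(X, ages)

print(np.round(clock.predict(X[:5]), 1))      # estimated "biological age"
print(np.round(ages[:5], 1))                  # actual chronological age
```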
What does the research say about their use?
Although the science is rapidly evolving, the evidence underpinning the use of epigenetic clocks to measure biological ageing in research studies is strong.
Studies have shown epigenetic biological age estimation is a better predictor of the risk of death and ageing-related diseases than chronological age.
Epigenetic clocks also have been found to correlate strongly with lifestyle and environmental exposures, such as smoking status and diet quality.
In addition, they have been found to be able to predict the risk of conditions such as cardiovascular disease, which can lead to heart attacks and strokes.
Taken together, a growing body of research indicates that, at a population level, epigenetic clocks are robust measures of biological ageing and are strongly linked to the risk of disease and death.
But how good are these tests for individuals?
While these tests are valuable when studying populations in research settings, using epigenetic clocks to measure the biological age of individuals is a different matter and requires scrutiny.
For testing at an individual level, perhaps the most important consideration is the “signal to noise ratio” (or precision) of these tests: whether repeated tests on the same sample from an individual yield widely differing results.
A study from 2022 found samples deviated by up to nine years. So an identical sample from a 40-year-old may indicate a biological age of as low as 35 years (a cause for celebration) or as high as 44 years (a cause of anxiety).
While there have been significant improvements in these tests over the years, there is considerable variability in the precision of these tests between commercial providers. So depending on who you send your sample to, your estimated biological age may vary considerably.
Another limitation is there is currently no standardisation of methods for this testing. Commercial providers perform these tests in different ways and have different algorithms for estimating biological age from the data.
As you would expect for commercial operators, providers don’t disclose their methods. So it’s difficult to compare companies and determine who provides the most accurate results – and what you’re getting for your money.
A third limitation is that while epigenetic clocks correlate well with ageing, they are simply a “proxy” and are not a diagnostic tool.
In other words, they may provide a general indication of ageing at a cellular level. But they don’t offer any specific insights about what the issue may be if someone is found to be “ageing faster” than they would like, or what they’re doing right if they are “ageing well”.
So regardless of the result of your test, all you’re likely to get from the commercial provider of an epigenetic test is generic advice about what the science says is healthy behaviour.
Are they worth it? Or what should I do instead?
While companies offering these tests may have good intentions, remember their ultimate goal is to sell you these tests and make a profit. And at a cost of around A$500, they’re not cheap.
While the idea of using these tests as a personalised health tool has potential, it is clear that we are not there yet.
For this to become a reality, tests will need to become more reproducible, standardised across providers, and validated through long-term studies that link changes in biological age to specific behaviours.
So while one-off tests of biological age make for impressive social media posts, for most people they represent a significant cost and offer limited real value.
The good news is we already know what we need to do to increase our chances of living longer and healthier lives.
We don’t need to know our biological age in order to implement changes in our lives right now to improve our health.
Hassan Vally does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
In our guides to the classics, experts explain key literary works.
Ibn Battuta was born in Tangier, Morocco, on February 24, 1304. From a statement in his celebrated travel book, the Rihla (“legal affairs are my ancestral profession”), he evidently came from an intellectually distinguished family.
According to the Rihla (travelogue), Ibn Battuta embarked on his travels from Tangier at the age of 22, intending to perform the Hajj (the sacred pilgrimage to Mecca) in 1325. Although he returned to Fez (his adopted hometown) around the end of 1349, he continued to visit various regions, including Granada and Sudan, in subsequent years.
Over the course of his almost 30 years of travel, Ibn Battuta covered an astonishing distance of approximately 73,000 miles (117,000 kilometres), visiting a region that today encompasses more than 50 countries. His journeys covered much of the medieval Islamic world and beyond, excluding Northern Europe.
In 1355, he returned to Morocco for the last time and remained there for the rest of his life. Upon his return he dictated his experiences, observations and anecdotes to the Andalusian scholar Ibn Juzayy, with a compilation of his travels completed in 1355 or 1356.
The work, formally titled A Gift to Researchers on the Curiosities of Cities and the Marvels of Journeys, is more commonly referred to as Rihlat Ibn Battuta or simply Rihla.
A painting of Ibn Battuta (on right) in Egypt by Leon Benett. Wikimedia Commons, CC BY
More than a travelogue or geographical record, this book provides rich insights into 14th-century social and political life, capturing cultural diversity across nations. Ibn Battuta details local lifestyles, linguistic traits, beliefs, clothing, cuisines, holidays, artistic traditions and gender relations, as well as commercial activities and currencies.
His observations also include geographical features such as mountains, rivers and agricultural products. Notably, the work highlights his encounters with over 60 sultans and more than 2,000 prominent figures, making it a valuable historical resource.
The travels
His travels began after a dream. According to Ibn Battuta, one night, while in Fuwwa, a town near Alexandria in Egypt, he dreamed of flying on a massive bird across various lands, landing in a dark, greenish country.
To test the local sheikh’s mystical knowledge, he decided that if the sheikh knew of his dream, the man was truly extraordinary. The next morning, after leading the dawn prayer, he watched the sheikh bid farewell to visitors. Later, the sheikh astonished him by revealing knowledge of the dream and prophesying his pilgrimage through Yemen, Iraq, Turkey and India.
At the time, the Middle East was under the rule of the Mamluk sultanate, Anatolia was divided among principalities, Mongol successor states controlled Iran and Central Asia, and the Delhi sultanate ruled much of the Indian subcontinent.
Ibn Battuta initially travelled through North Africa, Egypt, Palestine and Syria, completing his first Hajj in 1326.
He then visited Iraq and Iran, returning to Mecca. In 1328, he explored East Africa, reaching Mogadishu, Mombasa, Sudan and Kilwa (modern Tanzania), as well as Yemen, Oman and Anatolia, where he documented cities like Alanya, Konya, Erzurum, Nicaea and Bursa.
His descriptions are vivid. Describing the city of Dimyat, on the bank of the Nile, he says:
Many of the houses have steps leading down to the Nile. Banana trees are especially abundant there, and their fruit is carried to Cairo in boats. Its sheep and goats are allowed to pasture at liberty day and night, and for this reason the saying goes of Dimyat, ‘Its wall is a sweetmeat and its dogs are sheep’. No one who enters the city may afterwards leave it except by the governor’s seal […]
Farmland on the banks of the Nile river today. Alice-D/shutterstock
When it comes to Anatolia (in modern-day Turkey), he declares:
This country, known as the Land of Rum, is the most beautiful in the world. While Allah Almighty has distributed beauty to other lands separately, He has gathered them all here. The most beautiful and well-dressed people live in this land, and the most delicious food is prepared here […] From the moment we arrived, our neighbors — both men and women — showed great concern for our wellbeing. Here, women do not shy away from men; when we departed, they bid us farewell as if we were family, expressing their sadness through tears.
A judge and husband
In 1332, Ibn Battuta met the Byzantine Emperor Andronikos III Palaiologos. Wikimedia Commons, CC BY
Since Ibn Battuta dictated his work, it’s difficult to assess the extent of the scribe’s influence in recording his narratives. Despite being an educated man, he occasionally narrates like a commoner and sometimes exceeds the bounds of polite language. At times, he provides excessive detail, giving the impression he may be quoting from sources beyond his own observations.
Nevertheless, the Rihla stands out for its engaging style and captivating anecdotes, drawing readers in.
Ibn Battuta later journeyed through Crimea, Central Asia, Khwarezm (a large oasis region in the territories of present-day Turkmenistan and Uzbekistan), Bukhara (a city in Uzbekistan), and the Hindu Kush Mountains. In 1332, he met Byzantine Emperor Andronikos III Palaiologos and travelled to Istanbul with the caravan of Uzbek Khan’s third wife. He mentions a caravan that even has a market:
Whenever the caravan halted, food was cooked in great brass cauldrons, called dasts, and supplied from them to the poorer pilgrims and those who had no provisions. […] This caravan contained also animated bazaars and great supplies of luxuries and all kinds of food and fruit. They used to march during the night and light torches in front of the file of camels and litters, so that you saw the countryside gleaming with light and the darkness turned into radiant day.
Ibn Battuta arrived in Delhi in 1333, where he served as a judge under Sultan Muhammad bin Tughluq for seven years. He married or was married to local women in many of the places he stayed. Among his wives were ordinary people as well as the daughters of the administrative class.
Miniature painting in Mughal style depicting the court of Muhammad bin Tughluq. Wikimedia Commons, CC BY
The Sultan’s generosity, intelligence and unconventional ruling style both impressed and surprised Ibn Battuta. However, Muhammad bin Tughluq was known for making excessively harsh and abrupt decisions at times, which led Ibn Battuta to approach him with caution. Nevertheless, with the Sultan’s support, he remained in India for a long time and was eventually chosen as an ambassador to China in 1341.
In 1341 his mission was disrupted when his ship capsized off the coast of Calicut in the Indian Ocean. Though he survived, he lost most of his possessions.
After the incident, he remained in India for a while before continuing his journey by other means. During this period, he travelled through India, Sri Lanka and the Maldives, where he served as a judge for a year and a half. In 1345, he journeyed to China via Bengal, Burma and Sumatra, reaching the city of Guangzhou but limiting his exploration to the southern coast.
He was among the first Arab travellers to record Islam’s spread in the Malay Archipelago, noting interactions between Muslims and Hindu-Buddhist communities. Visiting Java and Sumatra, he praised Sultan Malik al-Zahir of Sumatra as a generous, pious and scholarly ruler and highlighted his rare practice of walking to Friday prayers.
On his return, Ibn Battuta explored regions such as Iran, Iraq, North Africa, Spain and the Kingdom of Mali, documenting the vast Islamic world.
Back in his homeland, Ibn Battuta served as a judge in several locations. He died around 1368-9 while serving as a judge in Morocco and was buried in his birthplace, Tangier.
Historic copy of selected parts of the Travel Report by Ibn Battuta, 1836 CE, Cairo. Wikimedia Commons, CC BY
The status of women
Ibn Battuta’s travels revealed intriguing insights into the status of women across regions. In inner West Africa, he observed matriarchal practices where lineage and inheritance were determined by the mother’s family.
Among Turks, women rode horses like raiders, traded actively and did not veil their faces.
In the Maldives, husbands leaving the region had to abandon their wives. He noted that Muslim women there, including the reigning queen, did not cover their heads. Despite attempting to enforce the hijab as a judge, he failed.
He also offers fascinating insights into food cultures. In Siberia, sled dogs were fed before humans. He described 15-day wedding feasts in India.
He tried local produce such as mango in the Indian subcontinent, which he compared to an apple, and sun-dried, sliced fish in Oman.
Religious practices
Ibn Battuta’s accounts of the Hajj (pilgrimage) rituals he performed six times provide a unique perspective. He references a fatwa by Ibn Taymiyyah, a prominent Islamic scholar and theologian known for his opposition to theological innovations and critiques of Sufism and philosophy, advising against shortening prayers for those travelling to Medina.
Ibn Battuta’s accounts, particularly regarding the Iranian region, offer important perspectives on religious sects during a period when Iran started shifting from Sunnism to Shiism. He describes societies with diverse demographics, including Persians, Azeris, Kurds, Arabs and Baluchis. His observations on religious practices are especially significant.
Inclined toward Sufism, Ibn Battuta often dressed like a dervish during his travels, and he offers a compelling view of Islamic mysticism. He saw Damascus as a place of abundance and Anatolia as a land of compassion, interpreting both through a spiritual lens.
His accounts of Sufi education, dervish lodges, zawiyas (similar to monasteries), and tombs, along with the special invocations of Sufi masters, are important historical records. He also observed and documented unique practices, such as the followers of the Persian Sufi saint Sheikh Qutb al-Din Haydar wearing iron rings on their hands, necks, ears, and even private parts to avoid sexual intercourse.
While Ibn Battuta primarily visited Muslim lands, he also travelled to non-Muslim territories, offering key insights into different religious cultures – for instance, interactions between Crimean Muslims and Christian Armenians in the Golden Horde region.
He also documented churches, icons and monasteries, such as the tomb of the Virgin Mary in Jerusalem. His observation of Muslims openly reciting the call to prayer (adhan) in China is significant.
Other anecdotes include the division of the Umayyad Mosque in Damascus into a mosque and Christian church. Most importantly, his encounters with Hindus and Buddhists in the Indian subcontinent and Malay Islands provide rich historical context.
His accounts of death rituals reveal diverse practices. In Sinop (a city in Turkey), 40 days of mourning were declared for a ruler’s mother, while in Iran, a funeral resembled a wedding celebration. He observed similarities in cremation practices between India and China and described a chilling custom in some regions where slaves and concubines were buried alive with the deceased.
Ibn Battuta’s Rihla, widely translated into Eastern and Western languages, has drawn some criticism for containing depictions that sometimes diverge from historical continuity or borrow from other works. Ibn Battuta himself admitted to using earlier travel books as references.
Despite limited recognition in older sources, the Rihla gained prominence in the West in the 19th century. His legacy remains vibrant today. Morocco declared 1996–1997 the “Year of Ibn Battuta,” and established a museum in Tangier to honour him. In Dubai, a mall is named after him.
Notably, Ibn Battuta travelled to more destinations than Marco Polo and shared a broader range of humane anecdotes, showcasing the depth and diversity of his experiences.
Ismail Albayrak does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
It’s almost a decade since San Francisco 49ers quarterback Colin Kaepernick started a worldwide trend and sparked fierce debate when he knelt during the US national anthem.
In 2016, Kaepernick refused to follow the pre-game protocol related to the national anthem and knelt instead, saying:
I am not going to stand up to show pride in a flag for a country that oppresses black people and people of colour.
Soon, many athletes and teams began “taking a knee” at sports events to express their solidarity with victims of racial injustice.
Following the intense public debate over the appropriateness of Kaepernick’s act, the ritual quickly spread worldwide, with athletes in major soccer leagues, cricket, rugby, Formula 1, top-tier tennis and the US’s Major League Baseball and National Basketball Association taking a knee.
Athletes didn’t always kneel during national anthems, with the majority kneeling at certain points pre-game.
Despite the occasional “defection” of a small number of players who would stand while their teammates knelt – such as Israel Folau in rugby league, Wilfried Zaha in soccer and Quinton de Kock in cricket – the ritual was widely embraced by teams and athletes and helped raise awareness of the issue.
Even major sports organisations notorious for prohibiting any type of political activism generally accepted the kneeling ritual. For example, soccer’s International Football Federation (FIFA) showcased kneeling as a “stand against discrimination” and as human rights advocacy.
The International Olympic Committee (IOC) initially stood firm by its Rule 50, which states “no kind of demonstration or political, religious, or racial propaganda is permitted in any Olympic sites, venues or other areas”.
But just three weeks before the 2021 Olympic and Paralympic Games in Tokyo, the IOC relaxed its interpretation, and athletes were permitted to express their views in ways that included taking a knee.
A surprising turn of events
Despite permission and even encouragement from sports governing bodies, our research shows the practice is disappearing from major sports competitions.
Take soccer, for example. At the FIFA World Cup 2022, England and Wales were the only national teams that knelt at their games in Qatar.
At the FIFA Women’s World Cup 2023 in Australia and New Zealand, no teams or players knelt.
The same happened at the 2024 Olympic soccer tournament in Paris.
That only a handful of teams knelt at the Tokyo Olympics in 2021, two at the FIFA Men’s World Cup in Qatar in 2022, and none at either the FIFA Women’s World Cup in 2023 or the Paris 2024 Olympics indicates a growing reluctance throughout the sports world.
This surely cannot mean athletes have become indifferent to racial injustice or other forms of oppression in the interval between the late 2010s and the mid-2020s.
The explanation must be sought elsewhere. A hint was provided when Crystal Palace soccer player Zaha, the first player of colour in English football’s top flight to refuse to kneel, explained:
I feel like taking the knee is degrading, because growing up my parents just let me know that I should be proud to be Black no matter what and I feel like we should just stand tall.
The explanation may therefore be, at least in part, the players’ uncomfortable feelings related to the kneeling posture.
In psychology, this bothersome state of mind is called “cognitive dissonance”: the mental conflict a person experiences when holding contrasting beliefs.
A history of kneeling
Kneeling is not traditionally understood, in any culture, as an expression of solidarity.
Ancient Greek and Roman societies, on whose values Western civilisation was built, rejected kneeling as improper, even when praying to gods.
Christianity later embraced kneeling as a gesture of humility before God; performed outside the church, it meant submission to nobility or royalty.
The significance of kneeling as humility is not limited to the Western world.
In African tribal culture, the young kneel in front of elders, and everyone kneels before the king.
In China in 1949, Chairman Mao famously proclaimed at the first plenary of the Chinese People’s Political Consultative Conference:
From now on our nation […] will no longer be a nation subject to insult and humiliation. We have stood up.
With this in mind, kneeling may be deemed unfit at sporting events, which often feature a powerful cocktail of emotions, values and social expectations.
The inconsistency between the excitement of competition and the expectation to kneel — a gesture associated with submission and humility — likely creates a bothersome state of mind for athletes.
This potentially motivates some players to reject one of the two – in this case, the kneeling – to restore cognitive harmony.
What could replace the kneeling ritual?
After refusing, by unanimous players’ vote, to take a knee before their October 2020 game against the All Blacks, the Australian rugby union team chose instead to wear a First Nations jersey.
The same year, several teams in German soccer’s top league chose to show their support for Black Lives Matter by wearing distinctive armbands.
So it appears wearing a distinctive jersey or at least an armband is more easily accepted by modern-day athletes. This may be challenging given the governing bodies of many sports, such as FIFA, ban athletes from wearing political symbols on their clothing.
Depending on whether sports codes accept this type of activism in the future, wearing supportive clothing could replace taking a knee as a symbolic communication of solidarity with oppressed minorities.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Bats are often cast as the unseen night-time stewards of nature, flitting through the dark to control pest insects, pollinate plants and disperse seeds. But behind their silent contributions lies a remarkable and underappreciated survival strategy: seasonal fattening.
Much like bears and squirrels, bats around the world bulk up to get through hard times – even in places where you might not expect it.
In a paper published today in Ecology Letters, we analysed data from bat studies around the world to understand how bats use body fat to survive seasonal challenges, whether it’s a freezing winter or a dry spell.
The surprising conclusion? Seasonal fattening is a global phenomenon in bats, not just limited to those in cold climates.
Even bats in the tropics, where it’s warm all year, store fat in anticipation of dry seasons when food becomes scarce. That’s a survival strategy that’s been largely overlooked. But it may be faltering as the climate changes, putting entire food webs at risk.
Climate shapes fattening strategies
We found bats in colder regions predictably gain more weight before winter.
But in warmer regions with highly seasonal rainfall, such as tropical savannas or monsoonal forests, bats also fatten up. In tropical areas, it’s not cold that’s the enemy, but the dry season, when flowers wither, insects vanish and energy is hard to come by.
The extent of fattening is impressive. Some species increased their body weight by more than 50%, which is a huge burden for flying animals that already use a lot of energy to move around. This highlights the delicate balancing act bats perform between storing energy and staying nimble in the air.
In colder climates, female bats used their fat reserves more sparingly than males – a likely adaptation to ensure they have enough energy left to raise young when spring returns. Since females typically emerge from hibernation to raise their young, conserving fat through winter can directly benefit their reproductive success.
Interestingly, this sex-based difference vanished in warmer climates, where fat use by males and females was more similar, likely because more food is available in warmer climates. It’s another clue that climate patterns intricately shape behaviour and physiology.
Climate change is shifting the rules
Beyond the biology, our study points to a more sobering trend. Bats in warm regions appear to be increasing their fat stores over time. This could be an early warning sign of how climate change is affecting their survival.
Climate change isn’t just about rising temperatures. It’s also making seasons more unpredictable.
Bats may be storing more energy in advance of dry seasons that are becoming longer or harder to predict. That’s risky, because it means more foraging, more exposure to predators and potentially greater mortality.
The implications can ripple outward. Bats help regulate insect populations, fertilise crops and maintain healthy ecosystems. If their survival strategies falter, entire food webs could feel the effects.
Fat bats, fragile futures
Our study changes how we think about bats. They are not just passive victims of environmental change but active strategists, finely tuned to seasonal rhythms. Yet their ability to adapt has limits, and those limits are being tested by a rapidly changing world.
By understanding how bats respond to climate, we gain insights into broader ecosystem resilience. We also gain a deeper appreciation for one of nature’s quiet heroes – fattening up, flying through the night and holding ecosystems together, one wingbeat at a time.
Nicholas Wu was the lead author of a funded Australian Research Council Linkage Grant awarded to Christopher Turbill at Western Sydney University.
We know surprisingly little about the lives of children in ancient Egypt.
And what records we do have about them often concern the lives of the elite – the young king or the children of senior officials – who are more prominent in surviving material evidence, especially funerary art. Infant mortality rates were also high in ancient Egypt.
As a result, much of the work in Egyptology on representations of childhood in ancient Egypt is dominated by evidence for the lives of boys and young adult men.
But what were the lives of ordinary girls like in ancient Egypt? And how did they make their way in a deeply patriarchal culture?
Finding hieroglyphic words for girls
An initial problem in studying girls’ lives in ancient Egypt is answering the question: who was a girl in ancient Egypt?
Chronological age was not always recorded by ancient Egyptians in their letters or inscriptions.
Instead, more general words and hieroglyphic signs tended to accompany images of men, women and children to indicate their social roles.
These words and signs were only loosely associated with biological development.
Hieroglyphic words for infants and small children, for instance, could be marked with an image of a small, seated child – sometimes with a finger held to its mouth.
Among the words used to describe young girls – talking, walking, and participating alongside adults in their work – was sheriyt.
This is the word often found in ancient accounting documents recording payments of wages, indicating a girl-child worker. They are distinguished from older women in these documents, although it is difficult to know precisely how young they might have been.
In this way, written administrative records and archaeological evidence reveal that girls of many social classes were integrated into economic production from an early age.
Payment for work
Elephantine, a town at Egypt’s southern frontier near modern-day Aswan, provides a unique window into the urban life of some girls who worked in textile workshops during the ancient Egyptian Middle Kingdom, which dates approximately 2030–1650 BCE.
In a house in the densely packed urban settlement, archaeologists found a ceramic bowl repurposed as a writing surface. The find was first published in 1996.
The excavators initially dated the bowl to the reign of King Amenemhat III, who ruled almost 3,800 years ago. However, based on the style of writing and the types of names listed, some scholars have also dated it earlier. It contains lists of payments of provisions of grain for textile workers over the course of a month.
What makes this document so important is that it names at least 18 child workers. Of these, 11 are girls, clearly marked with the Egyptian word sheriyt, working alongside 28 adult women.
The list shows adult women in this workshop received between 50–57 heqat (around 240–274 litres) of grain – although it’s not entirely clear if this was a one-off payment, a payment per month, or something else. The girls earned smaller but still significant wages of 3–7 heqat (around 14–34 litres).
Some other adult women seem to have received provisions comparable to the girls’, although without further information it is difficult to know their social status or age.
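For readers who want to check the arithmetic, here is a minimal sketch that reproduces the ranges above. It assumes the commonly cited approximation of about 4.8 litres per heqat (the measure varied over time), and the helper function is ours, for illustration only:

```python
# Convert ancient Egyptian grain payments from heqat to litres.
# Assumes the commonly cited approximation of ~4.8 litres per heqat;
# this conversion factor is an assumption, as the measure varied over time.
LITRES_PER_HEQAT = 4.8

def heqat_to_litres(heqat: float) -> float:
    """Approximate volume in litres for a payment given in heqat."""
    return heqat * LITRES_PER_HEQAT

# Adult women's provisions: 50-57 heqat -> about 240-274 litres
print(heqat_to_litres(50), heqat_to_litres(57))  # 240.0 273.6

# Girls' provisions: 3-7 heqat -> about 14-34 litres
print(heqat_to_litres(3), heqat_to_litres(7))    # 14.4 33.6
```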
This document not only confirms that girls received payment for their labour. It also suggests a structured apprenticeship system where young girls (and boys) worked alongside experienced craftswomen.
Archaeological evidence suggests textile production occurred both within homes and in dedicated workshops.
Evidence from the excavations at Elephantine suggests homes had several rooms with multiple purposes, including courtyards, entrance vestibules, kitchens with ovens (recognisable by blackened walls and ash deposits), and possible stairs leading to roof spaces.
Privacy would have been limited. Daily life would have included close interaction with animals, as evidenced by attached animal pens.
More recently, close to the house where the provision list was discovered, archaeologists found needles, spindles, shuttles, and remains of pegs for a large loom.
These were found both inside houses and in the courtyards attached to them.
It’s hard to know what exactly these buildings were for; they probably served multiple purposes.
Lives shaped by class and legal status
Not all girls at Elephantine had the same experience of life. The town’s position at Egypt’s southern frontier in this period meant it was home to diverse populations, which included migrants, enslaved people and transitory workers.
A letter dating to the reign of King Amenemhat III documents some families, including women and children, arriving at Elephantine seeking work during a famine in their home region.
This evidence can be compared to a legal document from the same time period but from another Egyptian town, El Lahun. This document mentions the purchase and transfer of enslaved women and infants who are called Aamut, referring to a region in West Asia. The document shows they have been given new Egyptian names.
These documents remind us factors such as class and legal status have always profoundly shaped girls’ lives.
Valuing the work of girls
Accessing the everyday thoughts, feelings, and perspectives of many ancient people, especially children, is challenging for historians. We don’t, for instance, have a wealth of personal diaries from ancient Egypt to learn about girls’ interior lives.
But what’s clear is that girls were not merely passive participants in society. They were active economic contributors, who often received formal compensation for their work.
Historians must always look beyond elite contexts to incorporate diverse evidence types – administrative documents, archaeological remains, and artistic representations – to construct a more complete picture of ancient lives.
Julia Hamilton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Since ChatGPT appeared almost three years ago, the impact of artificial intelligence (AI) technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty?
Most importantly, there has been concern that using AI will lead to a widespread “dumbing down”, or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving.
Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to “cognitive debt” and a “likely decrease in learning skills”.
So what did the study find?
The difference between using AI and the brain alone
Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains (“brain-only” group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.
The cognitive engagement of those who used AI was significantly lower than the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.
Interestingly, participants switched roles for a final, fourth essay (the brain-only group used AI and vice versa). The AI-to-brain group performed worse, showing engagement only slightly higher than the brain-only group had in their first session, and far below that group’s engagement in their third session.
The authors claim this demonstrates how prolonged use of AI led to participants accumulating “cognitive debt”. When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups.
The authors cautiously note that only 18 participants (six per condition) completed the fourth, final session. The findings are therefore preliminary and require further testing.
Does this really show AI makes us stupider?
These results do not necessarily mean that students who used AI accumulated “cognitive debt”. In our view, the findings are due to the particular design of the study.
The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.
When the AI group finally got to “use their brains”, they were only doing the task once. As a result, they were unable to match the other group’s experience. They achieved only slightly better engagement than the brain-only group during the first session.
To fully justify the researchers’ claims, the AI-to-brain participants would also need to complete three writing sessions without AI.
Similarly, the fact the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics.
As writing without AI required more substantial engagement, they had a far better recall of what they had written in the past. Hence, they primarily used AI to search for new information and refine what they had previously written.
What are the implications of AI in assessment?
To understand the current situation with AI, we can look back to what happened when calculators first became available.
Back in the 1970s, their impact was regulated by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks.
Effectively, the bar was significantly raised, which made students work equally hard (if not harder) than before calculators were available.
The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago.
In such situations, AI can indeed be detrimental. Students can for the most part offload critical engagement with learning to AI, which results in “metacognitive laziness”.
However, just like calculators, AI can and should help us accomplish tasks that were previously impossible – and still require significant engagement. For example, we might ask education students to use AI to produce a detailed lesson plan, which would then be evaluated for quality and pedagogical soundness in an oral examination.
In the MIT study, participants who used AI were producing the “same old” essays. They adjusted their engagement to deliver the standard of work expected of them.
The same would happen if students were asked to perform complex calculations with or without a calculator. The group doing calculations by hand would sweat, while those with calculators would barely blink an eye.
Learning how to use AI
Current and future generations need to be able to think critically and creatively and solve problems. However, AI is changing what these things mean.
Producing essays with pen and paper is no longer a demonstration of critical thinking ability, just as doing long division is no longer a demonstration of numeracy.
Knowing when, where and how to use AI is the key to long-term success and skill development. Prioritising which tasks can be offloaded to an AI to reduce cognitive debt is just as important as understanding which tasks require genuine creativity and critical thinking.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
But the situation is perhaps not as rosy for the animal itself. Domesticated animals often live longer than their free-living counterparts, but the quality of those lives can be compromised. Pets can be fed processed foods that can lead to obesity. Many are denied a sexual life and experience of parenthood. Exercise can be limited, isolation is common and boredom must be endured.
Is this the best life for the species we feel closest to? This question was raised for me when I heard the story of Valerie, the dachshund recaptured in April this year after almost 18 months living on her own on South Australia’s Karta Pintingga/Kangaroo Island.
Is being a pet the best life for the species we feel closest to? Oleksandr Rupeta/NurPhoto via Getty Images
Valerie: the story that captivated a nation
Valerie, a miniature dachshund, escaped into the bush during a camping trip on Kangaroo Island in November 2023. After several days of searching, her bereft humans returned to their home in New South Wales. They assumed the tiny dog, who had lived her life as a “little princess”, was gone forever.
Fast-forward a year, and sightings were reported on the island of a small dog wearing a pink collar. Word spread and volunteers renewed the search. A wildlife rescue group designed a purpose-built trap, fitting it out with items from Valerie’s former home.
After several weeks, a remotely controlled gate clattered shut behind Valerie and she was caught.
Cue great celebrations. The searchers were triumphant and the family was delighted. Social media lit up. It was a canine reenactment of one of settler Australia’s enduring narratives: the lost child rescued from the hostile bush.
A dog’s-eye view
But imagine if Valerie’s story was told from a more dog-centred perspective. Valerie found herself alone in a strange place and took the opportunity to run away. She embarked on a new life in which she was responsible for herself and could exercise the intelligence inherited from her boar-hunting ancestors.
No longer required to be a good girl, Valerie applied her own judgement – that notorious dachshund “stubbornness” – to evade predators, fill her stomach and pass her days.
Some commentators assumed Valerie must have been fed by anonymous benefactors – reflecting a widely held view that pets have limited abilities.
Veterinary experts, however, said her diet likely consisted of small birds, mammals and reptiles she killed herself – as well as roadkill, other carrion and faeces.
Valerie was clearly good at life on the lam. Unlike the human competitors in the series Alone Australia, she did not waste away when left in an island wilderness. Instead, she gained 1.8 kg of muscle – and was so stocky she no longer fit the old harness her humans brought to collect her. She had literally outgrown her former bonds.
Valerie could have sought shelter with the island’s humans at any time, but chose not to. She had to be actively trapped. Once returned to her humans, she needed time to reacclimatise to life as a pet.
Not all missing pets thrive in the wild. But all this raises the question: would Valerie’s rescue be better understood as a forced return from a full life of freedom to a diminished existence in captivity?
A long history of pets thriving in the wild
Other examples exist which suggest an animal’s best life can take place outside the constraints of being a pet.
Exotic parrots have fled lives in cages to form urban flocks. In the United States, 25 species initially imported as pets have set up self-sustaining, free-living populations across 23 states.
Or take the red-eared slider turtle, which is native to parts of the US and Mexico. It’s illegal to keep the turtles as pets in Australia, but some of those smuggled in have later been released into urban wetlands where they have established large and widespread populations.
Cats are perhaps the most notorious example of escaped pets thriving on their own in Australia. They number in the millions, in habitats from cities to the Simpson Desert to the Snowy Mountains, showing how little they need human assistance.
One mark of their success is their prodigious size. At up to 7kg, free-living cats can be more than twice the weight of the average domestic cat.
Of course, I am not advocating that pets be released to the wild, creating new problems. But I do believe current pet-keeping practices are due for reconsideration.
A dramatic solution would be to take the animal out of the pet relationship. Social robots that look like seals and teddy bears are already available to welcome you home, mirror your emotions and offer up cuddles without the cost to other animals.
A less radical option is to rethink the idea of animals as “pets” and instead see them as equals.
Some people already enjoy these unforced bonds. Magpies, for example, are known to have strong allegiances with each other and are sometimes willing to extend those connections to humans in multi-species friendships.
As for Valerie, she did make “her little happy sounds” when reunited with her humans. But she might look back with nostalgia to her 529 days of freedom on Kangaroo Island.
Nancy Cushing receives funding from the State Library of New South Wales as the Coral Thomas Fellow. She is a member of the executive committee of the Australian Historical Association.
Have you had a tonsillectomy (your tonsils taken out), appendectomy (your appendix removed) or lumpectomy (removal of a lump from your breast)? The suffix “ectomy” denotes surgical removal of the named body part, so these terms give us a clear idea of what the procedure entails.
So why is the removal of the uterus called a hysterectomy and not a uterectomy?
The name hysterectomy is rooted in a mental health condition – “hysteria” – that was once believed to affect women. But we now know this condition doesn’t exist.
Continuing to call this significant operation a hysterectomy both perpetuates misogyny and hampers people’s understanding of what it is.
From the defunct condition ‘hysteria’
Hysteria was a psychiatric condition first formally defined in the 5th century BCE. It had many symptoms, including excessive emotion, irritability, anxiety, breathlessness and fainting.
But hysteria was only diagnosed in women. Male physicians at the time claimed these symptoms were caused by a “wandering womb”. They believed the womb (uterus) moved around the body looking for sperm and disrupted other organs.
Because the uterus was blamed for hysteria, the treatment was to remove it. This procedure was called a hysterectomy. Sadly, many women had their healthy uterus unnecessarily removed and most died.
The word “hysteria” did originally come from the ancient Greek word for uterus, “hystera”. But the modern Greek word for uterus is “mitra”, which is where words such as “endometrium” come from.
Today, surgical removal of the uterus is a treatment for several conditions, including:
uterine prolapse (when the uterus protrudes down into the vagina)
adenomyosis (when the inner layer of the uterus grows into the muscle layer)
cancer.
However, in a survey colleagues and I did of almost 500 Australian adults, which is yet to be published in a peer-reviewed journal, one in five people thought hysterectomy meant removal of the ovaries, not the uterus.
It’s true some hysterectomies for cancer do also remove the ovaries. A partial hysterectomy is the removal of only the uterus; a total hysterectomy removes the uterus and cervix; while a radical hysterectomy usually removes the uterus, cervix, uterine tubes and ovaries.
There are important differences between these hysterectomies, so they should be named to clearly indicate the nature of the surgery.
Research has shown ambiguous terminology such as “hysterectomy” is associated with low patient understanding of the procedure and the female anatomy involved.
Uterectomy should be used for removal of the uterus, in combination with the medical terms for removal of the cervix, uterine tubes and ovaries as needed. For example, a uterectomy plus cervicectomy would refer to the removal of the uterus and the cervix.
This could help patients understand what is (and isn’t) being removed from their bodies and increase clarity for the wider public.
Other female body parts and procedures have male names
There are many eponyms (something named after a person) in anatomy and medicine, such as the Achilles tendon and Parkinson’s disease. They are almost exclusively the names of white men.
Eponyms for female anatomy and procedures include the Fallopian tubes, Pouch of Douglas, and Pap smear.
The anatomical term for Fallopian tubes is uterine tubes. “Uterine” indicates these are attached to the uterus, which reinforces their important role in fertility.
The Pouch of Douglas is the space between the rectum and uterus. Using the anatomical name (rectouterine pouch) is important, because this is a common site for endometriosis and can explain any associated bowel symptoms.
Pap smear gives no indication of its location or function. The new cervical screening test is named exactly that, which clarifies it samples cells of the cervix. This helps people understand this tests for risk of cervical cancer.
In line with increasing awareness and discussions around female reproductive health and medical misogyny, now is the time to improve terminology. We must ensure the names of body parts and medical procedures reflect the relevant anatomy.
Theresa Larkin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Many smartwatches, fitness and wellness trackers now offer sleep tracking among their many functions.
Wear your watch or ring to bed, and you’ll wake up to a detailed sleep report telling you not just how long you slept, but when each phase happened and whether you had a good night’s rest overall.
Surfing is done in the ocean, planes fly in the sky, and sleep occurs in the brain. So how can we measure sleep from the wrist or finger?
The gold standard of sleep measurement
If you’ve ever had a sleep study or seen someone with dozens of wires attached to their head, body and face, you’ve encountered polysomnography or PSG.
Eye movements, muscle tone, heart rate and brain activity are measured and assessed by experts to detect which stage of sleep or wakefulness a person is in.
When we sleep, we cycle through different stages, generally classified as light sleep, slow-wave sleep (also known as deep sleep), and rapid eye movement or REM sleep.
Each stage has an effect on brain activity, muscle tone and heart rate – which is why sleep scientists need so many wires.
Accurate? Absolutely. Convenient? Like two left shoes.
This is where the convenience of wearable at-home sleep trackers comes in.
What sensors are in sleep trackers?
Since the 1990s, sleep researchers have been using actigraphy to measure people’s sleep outside the laboratory.
An actigraphy device is similar to a wristwatch and uses accelerometers to measure the person’s movement. Coupled with sleep diaries, actigraphy assumes a person is awake when they’re moving and asleep when still. Simple.
While this is a scientifically accepted method of estimating sleep, it’s prone to mislabelling restful wakefulness (such as lying still while reading a book) as sleep.
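As a rough illustration of that logic (a minimal sketch, not any vendor’s actual algorithm; the movement threshold is purely illustrative), sleep/wake scoring from actigraphy can be as simple as thresholding movement counts in each 30-second epoch:

```python
# Toy actigraphy-style sleep/wake scoring: label each 30-second epoch
# "sleep" when movement counts fall below a threshold, "wake" otherwise.
# The threshold is illustrative only; real devices use tuned, validated models.

def score_epochs(activity_counts: list[float], threshold: float = 10.0) -> list[str]:
    return ["sleep" if count < threshold else "wake" for count in activity_counts]

counts = [55, 30, 8, 2, 0, 1, 12, 3]  # accelerometer activity per epoch
print(score_epochs(counts))
# ['wake', 'wake', 'sleep', 'sleep', 'sleep', 'sleep', 'wake', 'sleep']
```

The failure mode is visible immediately: lying still with a book produces low movement counts, which this approach scores as sleep.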
There’s one key addition that makes wrist-worn sleep trackers more accurate – PPG or photoplethysmography.
It’s hard to pronounce, but photoplethysmography is a key driver in the explosion of wearable health tracking.
It uses those little green lights on the skin-side of the wearable to track the amount of blood passing through your wrist at any given time. Clip-on pulse oximeters used by doctors are the same type of tech.
The addition of PPG to a wrist tracker allows for the measurement of raw data like heart rate and breathing rate. From this data, the wearable can estimate a number of physiological metrics, including sleep stages.
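To give a feel for how a raw PPG trace becomes a metric like heart rate, here is a minimal sketch using simple peak detection on a simulated, noise-free pulse waveform. Real wearables filter noisy signals and use far more robust methods; the sampling rate and detection threshold here are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

# Simulate 10 seconds of a clean PPG-like waveform sampled at 50 Hz, with a
# 1.2 Hz pulse (72 beats per minute). Real signals are noisier than this.
fs = 50
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t)

# Each peak corresponds to one heartbeat; count peaks to estimate heart rate.
peaks, _ = find_peaks(signal, height=0.5)
bpm = len(peaks) / 10 * 60
print(f"Estimated heart rate: {bpm:.0f} bpm")  # ~72 bpm
```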
Since fitness wearables already have accelerometers and PPG to track your physical activity and heart rate, it makes sense to use these sensors to track sleep too. But how accurate are they?
Many fitness trackers leverage the sensors used to measure your fitness activities and heart rate for sleep tracking. The Conversation
How do scientists test sleep trackers?
Two main factors determine the accuracy of sleep trackers. How well does the device detect whether you’re asleep or awake? And how well can it distinguish the sleep stages?
To answer these questions, sleep scientists conduct validation studies. Participants sleep overnight in a laboratory while wearing both a sleep tracker and undergoing PSG.
Then, scientists compare the data from both methods in 30-second blocks called “epochs”. That means for a nine-hour sleep there will be 1,080 epochs to compare.
If both the device and PSG indicate “sleep” for the same epoch, they’re in agreement. If the device indicates “wake” and PSG indicates “sleep” for the same epoch, that’s considered an error. The same is done for sleep stages.
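Here is a minimal sketch of how that epoch-by-epoch comparison might be scored, simplified to sleep/wake agreement (real validation studies also report stage-level agreement and other statistics):

```python
# Compare a tracker's epoch-by-epoch scoring against PSG and report how
# often they agree, separately for sleep and wake epochs -- a simplified
# version of the accuracy metrics reported in validation studies.

def epoch_agreement(tracker: list[str], psg: list[str]) -> dict[str, float]:
    sleep_hits = sum(1 for t, p in zip(tracker, psg) if p == "sleep" and t == "sleep")
    wake_hits = sum(1 for t, p in zip(tracker, psg) if p == "wake" and t == "wake")
    n_sleep = psg.count("sleep")
    n_wake = psg.count("wake")
    return {
        "sleep_accuracy": sleep_hits / n_sleep if n_sleep else float("nan"),
        "wake_accuracy": wake_hits / n_wake if n_wake else float("nan"),
    }

psg =     ["sleep", "sleep", "wake", "sleep", "wake", "sleep"]
tracker = ["sleep", "sleep", "sleep", "sleep", "wake", "sleep"]
print(epoch_agreement(tracker, psg))
# {'sleep_accuracy': 1.0, 'wake_accuracy': 0.5}
```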
How accurate are sleep trackers?
In a 2022 study of several popular trackers, most correctly identified more than 90% of sleep epochs. But because light sleep and restful wake are so similar, wearables struggle more to estimate wakefulness, correctly identifying between 26% and 73% of wake epochs.
When it comes to sleep stages, wearables are less precise, correctly identifying between 53% and 60% of sleep stage epochs. However, for some devices and some sleep stages the precision can be greater. A recent validation study showed that a latest-generation ring-shaped wearable didn’t differ from PSG in estimating light sleep and slow-wave sleep.
In short, most modern sleep trackers do a decent job of estimating your total sleep each night. Some are more accurate for sleep staging, but this level of detail isn’t essential for improving the basics of your sleep.
Do I need a sleep tracker?
If you’re struggling with sleep, you should speak to your doctor. A sleep tracker can be a useful tool to help track your sleep goals, but ultimately your behaviour is what will improve sleep.
Keeping regular bedtimes and wake-up times, having a distraction-free sleep space, and keeping home lighting low in the evenings can all help to improve your sleep.
If you love tracking your sleep, make sure your device has been independently validated. While sleep stage data may not be essential, devices that perform well in estimating sleep stage also tend to be more accurate at detecting when you’re asleep or awake. When reviewing your data, look at long term trends in sleep rather than day-to-day variability.
If you don’t love your sleep tracker, you can take it off or ignore it. For some people, access to sleep data can negatively impact sleep by creating stress and anxiety for getting a perfect night’s sleep. Instead, focus on improving your healthy sleep strategies and pay attention to how you feel during the day.
Dr Dean J. Miller is a member of a research group at Central Queensland University that receives support for research (i.e., funding, equipment) from WHOOP Inc, a smart device maker.
Is AI going to take over the world? Have scientists created an artificial lifeform that can think on its own? Is it going to replace all our jobs, even creative ones, like doctors, teachers and care workers? Are we about to enter an age where computers are better than humans at everything?
The answers, as the authors of The AI Con stress, are “no”, “they wish”, “LOL” and “definitely not”.
The AI Con: How To Fight Big Tech’s Hype and Create the Future We Want – Emily M. Bender and Alex Hanna (Bodley Head)
Artificial intelligence is a marketing term as much as a distinct set of computational architectures and techniques. AI has become a magic word for entrepreneurs to attract startup capital for dubious schemes, an incantation deployed by managers to instantly achieve the status of future-forward leaders.
In a mere two letters, it conjures a vision of automated factories and robotic overlords, a utopia of leisure or a dystopia of servitude, depending on your point of view. It is not just technology, but a powerful vision of how society should function and what our future should look like.
In this sense, AI doesn’t need to work for it to work. The accuracy of a large language model may be doubtful, the productivity of an AI office assistant may be claimed rather than demonstrated, but this bundle of technologies, companies and claims can still alter the terrain of journalism, education, healthcare, service work and our broader sociocultural landscape.
Bender is a linguistics professor at the University of Washington, who has become a prominent technology critic. Hanna is a sociologist and former employee of Google, who is now the director of research at the Distributed AI Research Institute. After teaming up to mock AI boosters in their popular podcast, Mystery AI Hype Theater 3000, they have distilled their insights into a book written for a general audience. They meet the unstoppable force of AI hype with immovable scepticism.
Step one in this program is grasping how AI models work. Bender and Hanna do an excellent job of decoding technical terms and unpacking the “black box” of machine learning for lay people.
Driving this wedge between hype and reality, between assertions and operations, is a recurring theme across the pages of The AI Con, and one that should gradually erode readers’ trust in the tech industry. The book outlines the strategic deceptions employed by powerful corporations to reduce friction and accumulate capital. If the barrage of examples tends to blur together, the sense of technical bullshit lingers.
What is intelligence? A famous and highly cited paper co-written by Bender asserts that large language models are simply “stochastic parrots”, drawing on training data to predict which set of tokens (i.e. words) is most likely to follow the prompt given by a user. Harvesting millions of crawled websites, the model can regurgitate “the moon” after “the cow jumped over”, albeit in much more sophisticated variants.
Rather than actually understanding a concept in all its social, cultural and political contexts, large language models carry out pattern matching: an illusion of thinking.
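To make the “stochastic parrot” idea concrete, here is a toy sketch: a bigram model that continues a prompt with the word that most often followed it in a tiny training text. It is absurdly simpler than a large language model, but the underlying principle – predicting continuations from patterns in the data, with no understanding – is the same:

```python
from collections import Counter, defaultdict

# Toy "stochastic parrot": record which word most often follows each word
# in a tiny corpus, then continue a prompt by repeated lookup. Nothing is
# "understood" - the output is pure co-occurrence statistics.
corpus = "hey diddle diddle the cow jumped over the moon the moon was full".split()

follows: defaultdict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def continue_prompt(prompt: str, n_words: int = 2) -> str:
    words = prompt.split()
    for _ in range(n_words):
        counts = follows.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(continue_prompt("the cow jumped over"))  # "the cow jumped over the moon"
```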
But I would suggest that, in many domains, a simulation of thinking is sufficient, as it is met halfway by those engaging with it. Users project agency onto models via the well-known Eliza effect, imparting intelligence to the simulation.
Management are pinning their hopes on this simulation. They view automation as a way to streamline their organisations and not be “left behind”. This powerful vision of early adopters vs extinct dinosaurs is one we see repeatedly with the advent of new technologies – and one that benefits the tech industry.
In this sense, poking holes in the “intelligence” of artificial intelligence is a losing move, missing the social and financial investment that wants this technology to work. “Start with AI for every task. No matter how small, try using an AI tool first,” commanded Duolingo’s chief engineering officer in a recent message to all employees. Duolingo has joined Fiverr, Shopify, IBM and a slew of other companies proclaiming their “AI first” approach.
The AI Con is strongest when it looks beyond or around the technologies to the ecosystem surrounding them, a perspective I have also argued is immensely helpful. By understanding the corporations, actors, business models and stakeholders involved in a model’s production, we can evaluate where it comes from, its purpose, its strengths and weaknesses, and what all this might mean downstream for its possible uses and implications. “Who benefits from this technology, who is harmed, and what recourse do they have?” is a solid starting point, Bender and Hanna suggest.
These basic but important questions extract us from the weeds of technical debate – how does AI function, how accurate or “good” is it really, how can we possibly understand this complexity as non-engineers? – and give us a critical perspective. They place the onus on industry to explain, rather than users to adapt or be rendered superfluous.
We don’t need to be able to explain technical concepts like backpropagation or diffusion to grasp that AI technologies can undermine fair work, perpetuate racial and gender stereotypes, and exacerbate environmental crises. The hype around AI means to distract us from these concrete effects, to trivialise them and thus encourage us to ignore them.
As Bender and Hanna explain, AI boosters and AI doomers are really two sides of the same coin. Conjuring up nightmare scenarios of self-replicating AI terminating humanity and claiming sentient machines will usher us into a posthuman paradise are, in the end, the same move. Both place a religious-like faith in the capabilities of technology, which dominates debate, allowing tech companies to retain control of AI’s future development.
The risk of AI is not potential doom in the future, à la the nuclear threat during the Cold War, but the quieter and more significant harm to real people in the present. The authors explain that AI is more like a panopticon “that allows a single prison warden to keep track of hundreds of prisoners at once”, or the “surveillance dragnets that track marginalised groups in the West”, or a “toxic waste, salting the earth of a Superfund site”, or a “scabbing worker, crossing the picket line at the behest of an employer who wants to signal to the picketers that they are disposable. The totality of systems sold as AI are these things, rolled into one.”
A decade ago, with another “game-changing” technology, author Ian Bogost observed that
rather than utopia or dystopia, we usually end up with something less dramatic yet more disappointing. Robots neither serve human masters nor destroy us in a dramatic genocide, but slowly dismantle our livelihoods while sparing our lives.
The pattern repeats. As AI matures (to some degree) and is adopted by organisations, it moves from innovation to infrastructure, from magic to mechanism. Grand promises never materialise. Instead, society endures a tougher, bleaker future. Workers feel more pressure; surveillance is normalised; truth is muddied with post-truth; the marginal become more vulnerable; the planet gets hotter.
Technology, in this sense, is a shapeshifter: the outward form constantly changes, yet the inner logic remains the same. It exploits labour and nature, extracts value, centralises wealth, and protects the power and status of the already-powerful.
Co-opting critique
In The New Spirit of Capitalism, sociologists Luc Boltanski and Eve Chiapello demonstrate how capitalism has mutated over time, folding critiques back into its DNA.
After enduring a series of blows around alienation and automation in the 1960s, capitalism moved from a hierarchical Fordist mode of production to a more flexible form of self-management over the next two decades. It began to favour “just in time” production, done in smaller teams, that (ostensibly) embraced the creativity and ingenuity of each individual. Neoliberalism offered “freedom”, but at a price. Organisations adapted; concessions were made; critique was defused.
AI continues this form of co-option. Indeed, the current moment can be described as the end of the first wave of critical AI. In the last five years, tech titans have released a series of bigger and “better” models, with both the public and scholars focusing largely on generative and “foundation” models: ChatGPT, StableDiffusion, Midjourney, Gemini, DeepSeek, and so on.
Scholars have heavily criticised aspects of these models – my own work has explored truth claims, generative hate, ethics washing and other issues. Much work focused on bias: the way in which training data reproduces gender stereotypes, racial inequality, religious bigotry, western epistemologies, and so on.
Much of this work is excellent and seems to have filtered into the public consciousness, based on conversations I’ve had at workshops and events. However, its flagging of such issues allows tech companies to practise issue resolving. If the accuracy of a facial-recognition system is lower with Black faces, add more Black faces to the training set. If the model is accused of English dominance, fork out some money to produce data on “low-resource” languages.
Companies like Anthropic now regularly carry out “red teaming” exercises designed to highlight hidden biases in models. Companies then “fix” or mitigate these issues. But due to the massive size of the data sets, these tend to be band-aid solutions, superficial rather than structural tweaks.
For instance, soon after launching, AI image generators were under pressure for not being “diverse” enough. In response, OpenAI invented a technique to “more accurately reflect the diversity of the world’s population”. Researchers discovered this technique was simply tacking on additional hidden prompts (e.g. “Asian”, “Black”) to user prompts. Google’s Gemini model also seems to have adopted this, which resulted in a backlash when images of Vikings or Nazis had South Asian or Native American features.
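The mechanics researchers described amount to little more than string concatenation. A minimal sketch of the general idea follows; the descriptor list and wording are purely illustrative assumptions, not OpenAI’s or Google’s actual code:

```python
import random

# Illustrative sketch of "diversity via hidden prompts": silently append a
# demographic descriptor to the user's prompt before it reaches the image
# model. Descriptors and wording here are hypothetical, for illustration.
HIDDEN_DESCRIPTORS = ["Asian", "Black", "Hispanic", "white"]

def augment_prompt(user_prompt: str) -> str:
    return f"{user_prompt}, {random.choice(HIDDEN_DESCRIPTORS)}"

print(augment_prompt("a portrait of a Viking"))
# e.g. "a portrait of a Viking, Asian"
```

Appended indiscriminately, such a patch produces exactly the backlash described above: the hidden descriptor lands on every request, whether or not it makes sense for the subject.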
The point here is not whether AI models are racist or historically inaccurate or “woke”, but that models are political and never disinterested. Harder questions about how culture is made computational, or what kind of truths we want as society, are never broached and therefore never worked through systematically.
Such questions are certainly broader and less “pointy” than bias, but also less amenable to being translated into a problem for a coder to resolve.
What next?
How, then, should those outside the academy respond to AI? The past few years have seen a flurry of workshops, seminars and professional development initiatives. These range from “gee whiz” tours of AI features for the workplace, to sober discussions of risks and ethics, to hastily organised all-hands meetings debating how to respond now, and next month, and the month after that.
Bender and Hanna wrap up their book with their own responses. Many of these, like their questions about how models work and who benefits, are simple but fundamental, offering a strong starting point for organisational engagement.
For the technosceptical duo, refusal is also clearly an option, though individuals will obviously have vastly different degrees of agency when it comes to opting out of models and pushing back on adoption strategies. Refusal of AI, as with many technologies that have come before it, often relies to some extent on privilege. The six-figure consultant or coder will have discretion that the gig worker or service worker cannot exercise without penalties or punishments.
If refusal is fraught at the individual level, it seems more viable and sustainable at a cultural level. Bender and Hanna suggest generative AI be responded to with mockery: companies who employ it should be derided as cheap or tacky.
The cultural backlash against AI is already in full swing. Soundtracks on YouTube are increasingly labelled “No AI”. Artists have launched campaigns and hashtags, stressing their creations are “100% human-made”.
These moves are attempts to establish a cultural consensus that AI-generated material is derivative and exploitative. And yet, if these moves offer some hope, they are swimming against the swift current of enshittification. AI slop means faster and cheaper content creation, and the technical and financial logic of online platforms – virality, engagement, monetisation – will always create a race to the bottom.
The extent to which the vision offered by big tech will be accepted, how far AI technologies will be integrated or mandated, how much individuals and communities will push back against them – these are still open questions. In many ways, Bender and Hanna successfully demonstrate that AI is a con. It fails at productivity and intelligence, while the hype launders a series of transformations that harm workers, exacerbate inequality and damage the environment.
Yet such consequences have accompanied previous technologies – fossil fuels, private cars, factory automation – and hardly dented their uptake and transformation of society. So while praise goes to Bender and Hanna for a book that shows “how to fight big tech’s hype and create the future we want”, the issue of AI resonates, for me, with Karl Marx’s observation that people “make their own history, but they do not make it just as they please”.
Luke Munn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Last week, one of the dark web’s most prominent drug marketplaces – Archetyp – was shut down in an international, multi-agency law enforcement operation following years of investigations. It was touted as a major policing win and was accompanied by a slick cyberpunk-themed video.
But those of us who have studied this space for years weren’t surprised. Archetyp may have been the most secure dark web market. But shutdowns like this have become a recurring feature of the dark web. And they are usually not a significant turning point.
The durability of these markets tells us that if policing responses keep following the same playbook, they will keep getting the same results. And by focusing so heavily on these hidden platforms, authorities are neglecting the growing digital harms in the spaces we all use.
One of the most popular dark web markets
Dark web markets mirror mainstream e-commerce platforms – think Amazon meets cybercrime. These are encrypted marketplaces accessed via the Tor Browser, a privacy-focused browser that hides users’ IP addresses. Buyers use cryptocurrency and escrow systems (third-party payment systems which hold funds until the transaction is complete) to anonymously purchase illicit drugs.
Usually these products are sent to the buyer by post, and the money is transferred to the seller through the escrow system.
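To make the mechanics concrete, here is a minimal sketch of the escrow logic in Python. It is a toy illustration of the concept, not any real market’s code (real systems add cryptography, dispute resolution and multisignature wallets); it also shows why “exit scams” work, since whoever operates the escrow controls all funds still locked in it.

```python
# Toy escrow flow: funds are held by a third party and released to the
# seller only once the buyer confirms delivery. Illustrative only.
class Escrow:
    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = "FUNDED"        # buyer's money is locked with the escrow

    def confirm_delivery(self) -> str:
        if self.state != "FUNDED":
            raise ValueError("nothing to release")
        self.state = "RELEASED"      # money goes to the seller
        return f"{self.amount} released to {self.seller}"

    def refund(self) -> str:
        if self.state != "FUNDED":
            raise ValueError("nothing to refund")
        self.state = "REFUNDED"      # e.g. the seller never ships
        return f"{self.amount} refunded to {self.buyer}"

deal = Escrow("buyer123", "vendor456", 150.0)
print(deal.confirm_delivery())       # "150.0 released to vendor456"
```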
Archetyp launched in May 2020 and quickly grew to become one of the most popular dark web markets with an estimated total transaction volume of €250 million (A$446 million). It had more than 600,000 users worldwide and 17,000 listings consisting mainly of illicit drugs including MDMA, cocaine and methamphetamine.
Compared with its predecessors, Archetyp imposed stricter security requirements on its users. These included an advanced encryption program known as “Pretty Good Privacy” and a cryptocurrency called Monero. Unlike Bitcoin, which records every payment on a public ledger, Monero conceals all transaction details by default, making them nearly impossible to trace.
Despite the fact Archetyp had clearly raised the bar on security on the dark web, Operation Deep Sentinel – a collaborative effort between law enforcement agencies in six countries supported by Europol and Eurojust – took down the market. Its front page has now been replaced by a law enforcement banner.
While these publicised take-downs feel effective, evidence has shown such interventions only have short-term impacts and the dark web ecosystem will quickly adapt.
A persistent trade
These shutdowns aren’t new. Silk Road, AlphaBay, WallStreet and Monopoly Market are all familiar names in the digital graveyard of the dark web. Before these marketplaces were shut down, they sold a range of illegal products, from drugs to firearms.
Yet still, the trade persists. New markets emerge and old users return. In some cases, established sellers on closed-down markets are welcomed onto new markets as digital “refugees” and have joining fees waived.
What current policing strategies neglect is that dark web markets are not confined to the storefronts that are the popular target of crackdowns. They are communities stretched across dark and surface web forums, which develop shared tutorials and help one another adapt to any new changes. Closures bind users together and foster a shared resilience and collective experience in navigating these environments.
Law enforcement shutdowns are also only one type of disruption that dark web communities face. Dark web market users routinely face voluntary closures (the gradual retirement of a market), exit scams (sudden closures of markets where any money in escrow is taken), or even scheduled maintenance of these markets.
Ultimately, this kind of disruption to accessibility is not a unique event. In fact, it is routine for individuals participating in these dark web communities, par for the course of engaging in the markets.
This ability to thrive amid disruption reflects how dark web market users have become experts at adapting to risks, managing setbacks and rebuilding quickly.
The other emerging issue is that current policing efforts treat dark web markets as the core threat, which risks missing the wider landscape of digital harms. Illicit drug sales, for example, are promoted on social media, where platform features such as recommendation systems afford new means of illicit drug supply.
This is all alongside the countless cases of celebrities and social media influencers caught up in crypto pump-and-dump schemes, where hype is used to artificially inflate the price of a token before the creators sell off their holdings and leave investors with worthless tokens.
This shows that while the dark web gets all the attention, it’s far from the internet’s biggest problem.
Archetyp’s takedown might make headlines, but it won’t stop the trade of illicit drugs on the dark web. It should force us to think about where harm is really happening online and whether current strategies are looking in the wrong direction.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
An artist’s rendition of the newly discovered fish, Sphyragnathus tyche. (C. Wilson), CC BY
In 2015, two members of the Blue Beach Fossil Museum in Nova Scotia found a long, curved fossil jaw, bristling with teeth. Sonja Wood, the museum’s owner, and Chris Mansky, the museum’s curator, found the fossil in a creek after Wood had a hunch.
The fossil they found belonged to a fish that had died 350 million years ago, its bony husk spanning nearly a metre on the lake bed. The large fish had lived in waters thick with rival fish, including giants several times its size. It had hooked teeth at the tip of its long jaw that it would use to trap elusive prey and fangs at the back to pierce it and break it down to eat.
Blue Beach Fossil Museum curator Chris Mansky below the fossil cliffs. (C. Wilson), CC BY
Birth of the modern vertebrate world
The modern vertebrate world is defined by the dominance of three groups: the cartilaginous fishes or chondrichthyans (including sharks, rays and chimaeras), the lobe-finned fishes or sarcopterygians (including tetrapods and rare lungfishes and coelacanths), and the ray-finned fishes or actinopterygians (including everything from sturgeon to tuna). Only a few jawless fishes round out the picture.
Armoured jawless fishes had dwindled by the Late Devonian Period (419.2 million to 358.9 million years ago), but other groups – the placoderms, acanthodians and fishy sarcopterygians among them – were still diverse. Actinopterygians, by contrast, were still restricted to a few species with similar body shapes.
By the immediately succeeding early Carboniferous times, everything had changed. The placoderms were gone, the number of species of fishy sarcopterygians and acanthodians had cratered, and actinopterygians and chondrichthyans were flourishing in their place.
A shortnose chimaera, belonging to the chondrichthyan group of vertebrates. (Shutterstock)
A sea change
Blue Beach has helped build our understanding of how this happened. Studies describing its tetrapods and actinopterygians have shown the persistence of Devonian-style forms in the Carboniferous Period.
Whereas the abrupt end-Devonian decline of the placoderms, acanthodians and fishy sarcopterygians can be explained by a mass extinction, it now appears that multiple types of actinopterygians and tetrapods survived to be preserved at Blue Beach. This makes a big difference to the overall story: Devonian-style tetrapods and actinopterygians survive and contribute to the evolution of these groups into the Carboniferous Period.
Comparing the jawbones of Sphyragnathus, Austelliscus and Tegeolepis. (C. Wilson), CC BY
The Blue Beach fossil was an actinopterygian, and we wondered what it could tell us about this issue. Comparison was difficult. Two actinopterygians with long jaws and large fangs were known from the preceding Devonian Period (Austelliscus ferox and Tegeolepis clarki), but the newly found jaw had more extreme curvature and a different arrangement of teeth. Its largest fangs are at the back of the jaw, whereas the largest fangs of Austelliscus and Tegeolepis are at the front.
These differences were significant enough that we created a new genus and species: Sphyragnathus tyche. And, in view of the debate on actinopterygian diversification, we made a prediction: that the differences in anatomy between Sphyragnathus and Devonian actinopterygians represented different adaptations for feeding.
Front fangs
To test this prediction, we compared Sphyragnathus, Austelliscus and Tegeolepis to living actinopterygians. In modern actinopterygians, this difference in anatomy reflects a difference in function: front-fanged fishes capture prey with their front teeth and grip it with their back teeth, while back-fanged fishes use their back teeth to pierce it.
Since we couldn’t observe the fossil fish in action, we analyzed the stress their teeth would experience if we applied force. The back teeth of Sphyragnathus handled force with low stress, making them suited for a role in piercing prey, but the back teeth of Austelliscus and Tegeolepis turned low forces into significantly higher stress, making them best suited for gripping.
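The study itself used more detailed methods, but the intuition can be sketched with a simple cantilever-beam idealisation of a tooth, with all numbers invented for illustration: for the same applied force, a short, stout tooth experiences far lower bending stress at its base than a long, slender one.

```python
# Toy cantilever model of tooth stress (illustrative only; not the
# study's actual analysis). Bending stress at the base of a rectangular
# cantilever of length L, width b and depth h: sigma = 6*F*L / (b*h**2).
def base_stress(force_n: float, length_mm: float,
                width_mm: float, depth_mm: float) -> float:
    """Bending stress in MPa (N and mm give N/mm^2 = MPa)."""
    return 6 * force_n * length_mm / (width_mm * depth_mm**2)

# The same 10 N force applied to two idealised teeth:
stout_back_fang = base_stress(10, length_mm=4, width_mm=2, depth_mm=2)
slender_front_fang = base_stress(10, length_mm=8, width_mm=1, depth_mm=1)

print(stout_back_fang)     # 30.0 MPa  -> low stress, suited to piercing
print(slender_front_fang)  # 480.0 MPa -> high stress under the same load
```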
Substantial work remains — only the jaw of Sphyragnathus is preserved, so the “locomotion-first” hypothesis remains untested. But this represents the challenge and promise of paleontology: get enough tantalizing glimpses into the past and you can begin to piece together a history.
Conrad Daniel Mackenzie Wilson receives funding from the Natural Sciences and Engineering Research Council of Canada, the Ontario Student Assistance Program, and the Society of Vertebrate Paleontology.
Source: The Conversation – USA – By Donald Heflin, Executive Director of the Edward R. Murrow Center and Senior Fellow of Diplomatic Practice, The Fletcher School, Tufts University
President Donald Trump speaks to reporters outside the White House on June 24, 2025, in Washington, less than 12 hours after announcing a ceasefire between Israel and Iran. Chip Somodevilla/Getty Images
“We basically have two countries that have been fighting so long and so hard that they don’t know what the f–k they’re doing,” an angry and frustrated Trump told reporters outside the White House on June 24.
Amy Lieberman, a politics and society editor at The Conversation U.S., spoke with former Ambassador Donald Heflin, an American career diplomat who serves as the executive director of the Edward R. Murrow Center at the Fletcher School, Tufts University, to understand how ceasefires typically work – and how the Israel-Iran deal stacks up against other agreements to end wars.
An excavator removes debris from a residential building that was destroyed in Israel’s June 13, 2025, airstrike on Tehran, Iran. Majid Saeedi/Getty Images
How do ceasefire deals typically happen?
There are classes taught on how to negotiate ceasefires, but it is ad hoc with each situation.
For example, in one scenario, one of the warring parties wants a ceasefire and has decided that the conflict isn’t going well. The second party might not want a ceasefire, but could agree that it is getting tired or the risks are too high, and agrees to work something out.
The next scenario, which leads to more success, is when both parties want a ceasefire. They decide that the loss of life and money has gone too far for both sides. One of the parties approaches the other through intermediaries to say it wants a ceasefire, and the other warring party agrees.
In a third situation – which is what we are seeing with the Iran-Israel deal – the outside world imposes a ceasefire. Trump likely told both Israel and Iran: “Look, it’s enough. This is too dangerous for the rest of the world. We don’t care what you think. Time for a ceasefire.”
The U.S. has done this in the Middle East before, like after the Yom Kippur War in 1973 between Israel and a coalition of Arab countries led by Egypt and Syria. Israel was achieving big military victories, but the risk was pretty great for the world. The U.S. came in and said, “That’s enough, stop it now.” And it worked.
Does the US bring the warring parties to a table in this kind of situation, or simply pressure the countries to stop fighting?
It is more of the U.S. saying, “We are done.” When the U.S. does something like this, it is often going to have backup from the European Union and other countries like Qatar, saying, “The Americans are right. It is time for a ceasefire.”
It appears that this Israel-Iran deal does not have specific conditions attached to it. Is that typical of a ceasefire deal?
This deal doesn’t seem to have any specific details attached to it. Ceasefires work better when they have that. Lasting ceasefires need to address the concerns of the warring parties and give each side some of what it wants.
For instance, in the Ukraine and Russia war, we have not seen either one of those countries push for a ceasefire. Part of the problem is Crimea and eastern Ukraine, sections of land in Ukraine that Russia has annexed and claims as its own. Russia would be happy with a deal that puts it in charge of Crimea and eastern Ukraine, but Ukraine won’t agree to that. The question of who controls specific areas of land has to be addressed in this conflict; otherwise, the ceasefire isn’t going to last.
Search and rescue efforts continue in a building in Beersheba, Israel, hit by a ballistic missile fired from Iran shortly before the ceasefire announced by U.S. President Donald Trump came into effect on June 24, 2025. Mostafa Alkharouf/Anadolu via Getty Images
Who is responsible for ensuring that both sides uphold a ceasefire?
Security guarantees are an important part of negotiating and maintaining long-term ceasefires. Big countries like the U.S. could say that if a warring party violates a ceasefire agreement, they are going to punish them.
The U.S. hasn’t always delivered on its past security guarantees, however, and that has created a problem for ceasefires in the future.
The further away you get from Europe, the less interested the West is in wars. But in those kinds of disputes, United Nations and other international peacekeeping troops can be sent in. Sometimes, that can work brilliantly in one place, as with the Multinational Force and Observers, the international peacekeeping force stationed between Israel and Egypt that helps maintain peace between those countries. But you can copy it to another place and it just doesn’t work as well.
How does this ceasefire fit within the history of other ceasefires?
It’s too early to tell. What matters is how the details get fleshed out.
Ideally, you can get representatives of the Israeli and Iranian governments to sit around a conference table to reach a detailed agreement. The Israelis might say, “We have got to have some kind of assurances that Iran is not going to use a nuclear weapon.” And the Iranians could say, “Assassinations of our military generals and scientists has got to stop.” That kind of conversation and agreement is what is missing, thus far, in this process.
Why is it so common for ceasefire deals to fail?
Some ceasefire deals don’t get to the underlying conditions of what really caused the problem and what made people start shooting this time around. If you don’t get to the core issues of a conflict, you are putting a Band-Aid on the situation. Putting a Band-Aid on someone when they are bleeding is a good move, but you ultimately might need more than that to stop the bleeding.
The outside world might be pretty happy with a ceasefire deal that seems to stop the fighting, but if the details are not ironed out, the experts would say, “This isn’t going to last.”
Donald Heflin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Many consumers – especially gen Z and millennials – use buy-now-pay-later (BNPL) to split or defer payments. The types of purchases made with BNPL can range from groceries and takeaway deliveries to luxury items.
Nearly 40% of regular BNPL users consider shopping a leisure activity. Easily accessing such credit could increase consumption in this group. It is, therefore, unsurprising that the UK BNPL market is projected to triple from 2021 levels by 2030.
With timely repayments, this short-term credit option is free from interest and fees. As an unregulated service, BNPL requires minimal financial checks, meaning most purchases are swiftly approved.
A buyer can acquire items quickly without paying the full amount upfront – the BNPL provider pays the retailer for the goods and recoups the amount from the buyer through instalments.
So how do BNPL providers make their money? While they may charge customers late fees and account costs, their primary revenue comes from the retailer: a percentage of each BNPL transaction plus a service fee. This business model is standard for payment services.
But retailers often pay much more for BNPL transactions – sometimes three times more than traditional credit card processing. So to ensure they make a profit, BNPL providers deftly encourage consumers to shop with retailers that use their services.
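Illustrative arithmetic makes the business model clear. The fee rates below are assumptions chosen for the example, not any provider’s published pricing.

```python
# Worked example of the BNPL model with invented fee rates: the provider
# pays the retailer up front, collects instalments from the buyer, and
# earns a merchant fee well above a typical card-processing fee.
price = 120.00
instalments = 4

bnpl_merchant_fee = 0.05      # assumed 5% per transaction (illustrative)
card_processing_fee = 0.015   # assumed 1.5% for comparison (illustrative)

per_instalment = price / instalments
provider_revenue = price * bnpl_merchant_fee
retailer_receives = price - provider_revenue

print(f"Buyer pays {instalments} x £{per_instalment:.2f}")          # 4 x £30.00
print(f"Provider earns £{provider_revenue:.2f} from the retailer")  # £6.00
print(f"versus £{price * card_processing_fee:.2f} in card fees")    # £1.80
print(f"Retailer nets £{retailer_receives:.2f}")                    # £114.00
```

On these assumed rates, the retailer pays more than three times what card processing would cost, which is the margin that funds the interest-free offer to the consumer.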
BNPL is a form of embedded finance – meaning that it seamlessly integrates payments into retailer sites. More than half of retailers are seeing better conversion (more people going on to buy after browsing) when they offer BNPL. This also allows many retailers to expand their market, as BNPL makes products accessible to more consumers.
But there’s a catch. With higher BNPL fees, nearly one in three retailers pass these costs on to customers through higher product prices at the checkout. So consumers face higher prices, even as BNPL is marketed on affordability.
A marriage made in heaven?
In this scenario, BNPL acts only as a credit product. But in reality it is more than that. Several providers have created shopping platforms promoting retailers and offering easy repayment management.
This combination of easy funds, appealing shopping experiences and technology-enabled repayment distinguishes BNPL. Our research indicates that BNPL could reshape retail landscapes by weakening competition.
Many BNPL providers offer user-friendly websites and apps, exceeding traditional financial service expectations and influencing key psychological determinants of BNPL use, such as viewing it as a way to save money or being psychologically distanced from the act of borrowing.
As revealed in our most recent study, these platforms are visually appealing, highlight various brands and offer targeted discounts. BNPL is easy to navigate, expands budgets and provides access to credit to those who might otherwise struggle. While BNPL appears to democratise credit, its opaque nature can also present pitfalls.
The package can promote consumer spending, debt and over-consumption. Consequently, there has been a rise in late fees. More than half of BNPL users have incurred a fee, one in three have missed a payment and three in four are at risk of needing debt advice. Others have borrowed to repay BNPL debt.
This escalates when consumers have multiple agreements across providers, complicating debt management. Many BNPL users feel vulnerable, weighing long-term savings against marketing that encourages spending. Their ability to manage this vulnerability affects their financial health, wellbeing and self-image.
As concerns about BNPL debt rise, regulators in countries such as the UK are addressing its financial service aspects. However, they often overlook providers’ techniques for targeting consumers and supporting their shopping habits.
Potential regulation focuses on financial attributes, including affordability checks, but neglects the technological mechanisms that keep customers using BNPL.
Our research suggests that BNPL’s success rests on its effective use of technology, particularly artificial intelligence and its algorithms. They streamline the loan process, enable repayments to be tailored to each consumer, help shoppers find what they’re looking for and identify retailers, brands and products that a user might like. BNPL providers are technology-based retail platforms as much as financial institutions.
To protect consumers, legislation like that proposed in the UK must address the technological heart of BNPL and the risks of algorithmic marketing when designing retail sites. These risks could include targeted retailer and product promotions that nudge buying behaviour, or building a customer’s reliance on delaying payments.
Proposed regulation focuses on the individual credit agreement between a user and provider. This overlooks cumulative BNPL spending and its persistence. What’s needed is a holistic approach considering that consumers often enter multiple agreements at once. This affects shopping habits, budgeting and repayment behaviour.
Only by addressing this will consumers be appropriately protected. But rethinking BNPL will also mean thinking again about who might be a vulnerable consumer. Traditional demographic factors fail to capture BNPL users’ psycho-social characteristics – things like materialism, impulsiveness and financial literacy. These are more influential than demographic markers on their usage and repayment behaviour.
Regulators need to understand who is using BNPL and why. Only then will they appreciate BNPL’s full scope and market impact and be able to enable consumers to have a healthy relationship with credit.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
This spring has been the driest in the UK since 1893. May’s rainfall was 43% lower than the long-term average. Fish rescues have already taken place in Shropshire as rivers dried up. Low water levels have made it difficult for boats to navigate along some canals.
Years of drainage, overgrazing and peatland degradation have turned much of the UK’s uplands into fast-draining systems. Rainfall that once infiltrated slowly now rushes off hillsides, filling rivers quickly, before vanishing just as fast.
Even after a year of exceptional rain and flooding, the soils and ecosystems that should be buffering us against drought are depleted. This recent spell of dry weather has exposed just how fragile the system has become.
The UK government reconvened the national drought group – a coalition of its most senior decision-makers, the Environment Agency, water companies, plus key farming and environmental groups – on June 5 to address growing concerns, with reservoir levels at 77% of capacity nationally.
Water availability remains under pressure across much of England. Haweswater and Thirlmere in the Lake District, along with sources in the northwest Pennines, supply much of the northwest and are currently at around 50% of capacity. Normally, they would be around 75% full. In Yorkshire, reservoir levels are currently around 60%.
The reservoir at Anglezarke in Lancashire is drying out. Neil Entwistle, CC BY-NC-ND
But landscapes can be restored in ways that reduce both flood risk and the effects of drought. At Smithills Estate near Bolton, the Mersey Forest (Cheshire and Merseyside’s community forest), conservation charity Woodland Trust and the Environment Agency have spent the last decade restoring 1,700 hectares of upland.
They have blocked old drainage channels, rewetted peat bogs, planted trees, improved soil structure and adapted farming. These changes (often referred to as natural flood management) allow the land to hold water longer and release it slowly, sustaining river flows during dry periods, conserving water and reducing the risk of floods.
Restoring rivers
We both grew up in the shadow of the moorlands around Rivington and Smithills in Bolton. We built our careers restoring rivers and their catchments and want to prevent “water-stressed” situations where water demand exceeds the available supply. We continue to study the implications and resilience of natural flood management here in the UK and overseas.
At Smithills, restored bogs act like sponges, soaking up rain and releasing it gradually. Newly planted woodland supports biodiversity, encourages water infiltration and provides shade, which reduces evaporation. Natural flood management has slowed water down across the catchment, reducing peak flows during storms by 27.3% and, by storing and slowly releasing water, boosting river flows during dry spells by 27.1%.
Tree trunks slow down the flow of water. Neil Entwistle, CC BY-NC-ND
Tree trunks laid across the gullies have kept areas of Smithills wet throughout spring, creating valuable habitat and supporting water resilience in the landscape. We’re working with partners to monitor natural flood management benefits and expand restoration, while also exploring new questions.
These include how the structures influence greenhouse gas emissions through wetting and drying cycles, affect sediment capture and storage, and how their function changes over time. This research is helping to shape how nature-based solutions are understood, valued and adopted more widely.
Mitigation (tackling the root causes) and adaptation (adjusting systems and behaviours) to water stresses require landowners, water companies, local authorities, regulators, environmental groups and communities to work together to deliver shared outcomes.
But this effort needs to be matched by changes in how land is managed. If the landscape continues to shed water rapidly, reservoirs will struggle to recover even when rain does arrive. We need to slow the flow of water and rejuvenate lost natural processes at large scales through restoration.
Farmers are grazing cattle on the heath. Neil Entwistle, CC BY-NC-ND
The UK will face water shortages within the next decade unless urgent action is taken. The recent Independent Water Commission, set up by the UK government to recommend a major overhaul of the water sector’s planning, regulation and infrastructure, highlights the importance of nature-based solutions, such as restoring natural processes like river flow and wetland function, alongside natural capital investment.
This involves putting money and resources into the protection, restoration or enhancement of nature, to secure long-term benefits such as clean air, water purification or flood protection.
Nature-based solutions can be scaled up quickly, plus they benefit people and the environment. Local communities can also get involved in meaningful restoration work. At Smithills, volunteers plant trees and help monitor the benefits of natural flood management, including changes in water quality, water levels and biodiversity. Farmers are exploring regenerative grazing.
Schools use the estate for environmental learning. This is not only about resilience – it is about reconnecting people with the natural landscapes that surround them.
To avoid routine hosepipe bans, protect biodiversity and secure food and water supply into the future, land needs to be at the centre of the UK’s drought strategy. Restoring bogs, woodlands and soils is not a luxury. It is essential infrastructure in a changing climate.
Neil Entwistle has received previous funding from the British Council, Universities UK and NERC for work related to river restoration and climate resilience. He also works for a boutique fund manager to fund and deploy solutions to some of the most pressing nature-related challenges our economy faces today.
The UK government aims to cut energy bills for large businesses by up to a quarter over four years, thanks to a £2 billion investment within its new industrial strategy. The aim is to make British manufacturers of steel, cars, chemicals, glass and other industrial sectors more competitive with foreign firms.
UK businesses pay some of the highest energy prices in Europe. Under the new scheme, roughly 7,000 energy-intensive businesses will be exempt from paying green levies on their electricity bills. These levies raise funds to support the deployment of renewable energy and to enact energy-efficiency measures like the insulation of low-income households.
The exemption should make it a bit easier for British companies to switch from fossil fuels to electricity by making the latter cheaper – an important step in the decarbonisation of the economy to tackle climate change. And it may lower costs enough to bring them within orbit of prices paid elsewhere in Europe.
However, heavy industry in the UK is already largely shielded from many of the levies applied to the average energy bill. The British Industry Supercharger scheme, which since April 2024 has exempted energy-intensive industries from renewable energy policy costs and provided discounted network charges, is set to save British manufacturers between £320 million and £410 million in 2025 alone.
The supercharger scheme fully exempts eligible firms from paying several costs linked to encouraging renewable energy investment and production. Industrial energy users covered under this scheme also enjoy a 60% reduction in network charges, compared with businesses outside the scheme.
Modelling conducted before the government’s announcement suggested that, if the major green levies on electricity were removed, average non-domestic electricity bills could fall by around 15%.
While significant, this reduction is unlikely to fully resolve the competitiveness challenges facing most businesses, as even discounted energy prices would remain high by international standards.
There are other limitations with the strategy. To start, more could be done to encourage firms to switch from fossil fuels to electricity by not just cutting electricity levies but shifting some onto gas bills.
The cost of expanding and upgrading the grid to support more electrification and renewables is another concern. These investments in power lines and wind farms will be essential, but they won’t come cheap. Reducing the contribution made by big businesses to these costs means the burden for these essential upgrades will fall on smaller businesses and households.
There are several options for addressing these challenges, however. One is to make energy demand more flexible, by financially incentivising businesses to use electricity when its supply from renewable sources is generally greater.
Another way to cut network costs for businesses is to offer grid connection arrangements with a less secure electricity supply. These arrangements include allowing the network operator to reduce maximum capacity during times of grid congestion, and sharing a connection with several other businesses.
Most importantly, the UK needs to move away from a system where the cost of gas sets the price of electricity most of the time, even though less than half of the country’s electricity now comes from gas. This can be achieved by expanding renewable energy storage (in the form of grid-scale batteries for example), so that grid operators are less reliant on gas power plants to fill gaps in electricity supply from wind and solar.
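The mechanism at issue is “merit order” marginal pricing: generators are dispatched from cheapest to most expensive, and the last unit needed to meet demand sets the price paid to all of them. The toy sketch below, with invented costs and capacities, shows why gas so often ends up as that price-setting marginal unit, and why storage that displaces it would pull prices down.

```python
# Toy merit-order dispatch: stack generators by marginal cost and dispatch
# until demand is met; the clearing price is the marginal unit's cost.
# All costs and capacities are invented for illustration.
generators = [
    ("wind",    25_000,  5.0),   # name, capacity (MW), marginal cost (£/MWh)
    ("solar",   10_000,  5.0),
    ("nuclear",  6_000, 20.0),
    ("gas",     30_000, 90.0),
]

def clearing_price(demand_mw: float) -> float:
    supplied = 0.0
    for name, capacity, cost in sorted(generators, key=lambda g: g[2]):
        supplied += capacity
        if supplied >= demand_mw:
            return cost   # the marginal generator sets the price for all
    raise ValueError("demand exceeds total capacity")

print(clearing_price(45_000))  # gas is marginal -> everyone is paid £90/MWh
print(clearing_price(34_000))  # renewables cover demand -> £5/MWh
```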
Reform to Britain’s energy market and its pricing structure would make a real difference too, though this will also require significant investment in grid infrastructure and careful regulatory change.
While the government’s priority is energy savings for larger businesses, small and medium-sized enterprises (SMEs) typically pay the highest rates for their energy. This is despite most smaller firms already being exempt from green levies.
Energy-intensive sectors, such as hospitality and retail, remain highly vulnerable to energy costs. Average non-domestic electricity prices increased by over 75% between 2021 and 2024, while gas prices more than doubled. This has contributed to a surge in business failures: in June 2024, company insolvencies were 17% higher than a year earlier, reaching the third highest monthly total since 2000.
Unfortunately, support for SMEs is heading in the wrong direction. The government funded a pilot energy advice service in the West Midlands, but its June spending review did not include funding to expand support for energy efficiency or renewable installations to SMEs nationwide. This leaves millions of smaller businesses exposed to high energy prices, without help to cut costs or emissions.
The government’s new strategy may help some of the UK’s largest manufacturers compete internationally. But without targeted support for smaller firms, the benefits could be unevenly shared. The UK’s wider economy will continue to struggle with high energy costs and business failures as a result.
Source: The Conversation – UK – By James Fitzgerald, Associate Professor of Terrorism Studies, Dublin City University
American pop star Lady Gaga delivered a free concert to over 2.1 million revellers on Copacabana beach in the Brazilian city of Rio de Janeiro in May. Those attuned to security concerns saw a policing and public safety nightmare.
And shortly after the concert, Rio de Janeiro’s civil police secretary, Felipe Curi, announced that the worst realisation of this nightmare had almost come to pass. An improvised bomb attack targeting fans had been thwarted thanks to police intelligence.
A loose group of conspirators from across Brazil, gelled across chat apps and other social media by anti-LGBTQ+ sentiments, planned to murder civilians. The intention was to send a political message about resisting what they see as “indecency” and “social decadence”.
Given the setting, volume of media coverage and possibility of a panicked stampede, Brazil had surely avoided the worst terrorist attack in its history.
For an attack to qualify as “terrorism”, it must be carried out for explicitly political purposes – motives akin to reshaping society violently or agitating for self-determination through force.
Yet, a month after the thwarted Copacabana attack, the main conversation about terrorism in Brazil is focused on mistaken efforts to label criminal groups as terrorists.
In late May, Brazil’s Congress fast tracked a bill that would broaden the definition of terrorism to include the actions of criminal organisations and militias. This is on the basis that their routine practices of “imposing territorial control” are designed to spread “social or widespread terror”. The bill is overly vague and extremely dangerous.
Brazilian organised crime
Equating organised crime and the violence it produces with “terrorism” is somewhat understandable. Organised gangs in Brazil, such as Comando Vermelho (CV) and Primeiro Comando da Capital (PCC), control vast expanses of territory, and civilians ultimately pay the price.
However, as endemic as organised crime is in Brazil, these groups strive for self-enrichment. Their violence is used solely to protect or enhance that goal. Neither CV nor the PCC has any political motive that would qualify their actions as terrorism.
The government already has legal ways to deal with criminal groups, but it has been hard to achieve lasting, positive results using these methods.
Should the actions of criminal organisations be reclassified as terrorism, a new suite of measures will become available to the state’s repressive apparatus. This will be true for the current government and future administrations.
New measures to fight terrorism are practically guaranteed to erode democratic and procedural norms. Armed with a remit to eradicate terrorism, states have repeatedly shown that they exacerbate the very cycles of violence they aim to erase.
French-Algerian philosopher Jacques Derrida identified the essence of this dilemma in 2003. In an interview reflecting on the 9/11 attacks on the US, Derrida said that the primary threat of terrorism was not just in the violence itself, but in how societies respond to it.
The US’s disastrous “war on terror”, for example, led to a consequential wave of violence worldwide. It is estimated to have killed over 500,000 civilians in Iraq, Afghanistan and Pakistan. And western countries that joined the fray have suffered jihadist attacks in return.
Governments also adopted new measures to deal with security issues inside their own countries. Potential terrorists were apprehended through surveillance, with the new goal of counterterrorism being to intervene before violence occurs.
States of emergency, which significantly curtail civil liberties, were routinely imposed in the aftermath of high-profile terrorist attacks. This included a state of emergency after the November 2015 attacks in Paris that gave the authorities power to search any premises without judicial oversight.
The implementation of this logic continues today. At the time of writing, denunciations of Israel’s assault on Gaza continue to be spuriously tied to support for “terrorism”.
Hamas is a terrorist organisation. But that should not see Palestinian civilians – nor supporters of their rights – labelled as potential terrorists. Yet student protesters in the US have been threatened with deportation, financial ruin and even imprisonment.
The term “terrorism” contains within it a power to dress state repression as a proportionate response to emergency. In El Salvador, we have seen how counterterrorism is being applied as an emergency means to solve the country’s organised crime problem.
Nayib Bukele’s government has sent countless criminals to the Terrorism Confinement Centre mega-prison in Tecoluca. It has also condemned many innocent civilians to a parallel fate, with little-to-no chance of redress or due process.
The tragic consequences of state crackdowns against those spuriously labelled as “terrorists” linger in the historical memory of Brazil. This new bill moves to the Senate at a time of renewed cultural reckoning with the consequences of Brazil’s repressive campaigns under the military dictatorship of 1964 to 1985.
Brazil should recognise its fortune in never having truly adopted the discourse of the war on terror. Now, it should not adopt an evolved discourse of counterterrorism to address the very serious – but very separate – problem of organised crime.
In the name of order and progress, and with an eye towards civilians who would ultimately pay the price, this bill cannot be allowed to become law.
James Fitzgerald does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Phil Tomlinson, Professor of Industrial Strategy, Co-Director Centre for Governance, Regulation and Industrial Strategy (CGR&IS), University of Bath
Brexit, COVID, the war in Ukraine and now Trump’s tariffs have all highlighted how vulnerable life in the UK is to disruptions in trade. Everyday items that people rely on can be subject to major shortages, delays and price rises, due to something as simple as a ship getting stuck in a canal.
This is because the UK is hugely reliant on other countries to provide much of what it needs. Medical supplies, cars, electronics and fruit are just a few of Britain’s favourite things that it tends to buy in from elsewhere.
Global supply chains deliver lower prices and wider choice to consumers but they are also often highly complex. In the car industry for example, components may move within and between companies and cross national boundaries many times, before ending up in the final assembled vehicle. This can make them vulnerable.
In response to the disruption of recent years, Chancellor Rachel Reeves has long been arguing for what she calls “securonomics” – investing in domestic energy sources and resilient networks. So perhaps it was no surprise that the British government’s new industrial strategy plans emphasise the importance of supply chain security.
A new industrial competitiveness scheme for example, is designed to cut energy costs for the UK’s most energy intensive firms, which manufacture things like steel, ceramics and glass. This should help domestic supply capacity.
A reported £600 million has also been allocated to develop the UK’s logistics industry. And there is a proposal for a “national supply chain centre” to identify weaknesses, enhance domestic capability and build strategic international partnerships. Vulnerabilities and dependencies will also be more closely monitored.
Another focus will be to diversify critical supply chains by reducing the UK’s dependence on single supplier nations (such as China for rare earth elements or semiconductors). One option should be strengthening alliances with friendly nations (known as “friendshoring”) with the aim of embedding supply chains in places that can be relied upon.
The recently announced trade deals with the US and India, and signs of greater cooperation with the EU do offer some promise in this area. Trade deals help with supply chain cooperation, but could go further and include resilience initiatives (such as creating joint stockpiles of things like critical minerals) to reduce disruption in the future.
On the domestic front, the UK could still do more to incentivise “reshoring” (bringing some manufacturing or production of goods back to the UK). Reversing decades of decline in these sectors would be challenging, and require a long-term investment in domestic capacity and skills. But it could also deliver a boost to jobs and growth, potentially in parts of the UK which need it most.
Supply chains in these industries (and others, such as healthcare) can be vulnerable to cyberattacks and economic coercion from malicious groups and hostile foreign states. So enhancing cybersecurity in logistics and infrastructure will also be critical.
This will mean better protection for ports, customs systems and logistics software. There is some limited additional funding on offer for this, but more will be required, which in turn will open up new opportunities for firms in the cyber industry. Indeed, a “cyber cluster” of businesses is already emerging in central England from the government defence and technology campus at Porton Down in Wiltshire across to GCHQ – the national centre for intelligence and security – in Gloucestershire.
But with still much to do, overall Reeves has been right to stress the importance of supply chains. They are crucial to people’s jobs and homes, the medicines they need and the food they eat. And supply chain security is not just an economic issue. It is a strategic imperative for safeguarding the UK, its businesses and the welfare of its citizens.
The tone of the new industrial strategy reflects Reeves’s “securonomics” rhetoric. But how far this goes in actually strengthening supply chains and boosting their resilience remains open to question, especially in the context of limited resources and a chancellor keen to build a reputation for fiscal prudence.
Phil Tomlinson receives funding from the Innovation and Research Caucus (IRC).
David Bailey receives funding from the ESRC’s UK in a Changing Europe programme.
Paddy Bradley is affiliated with the National Innovation Centre for Rural Enterprise based at Newcastle University.
He is Chair of TransWilts Community Interest Company which aims to increase public use of trains and buses in the Wiltshire area.
He is Chair of Governors of Wiltshire College and University Centre.
Source: The Conversation – UK – By David Hastings Dunn, Professor of International Politics in the Department of Political Science and International Studies, University of Birmingham
We still don’t know the extent to which Iran’s stock of enriched uranium and the capability to use it have been destroyed. But leaving aside such practical considerations, the US bombing raid also constituted an attack on the prevailing international legal order.
In some ways, the US actions echo the 1981 Israeli strike on Osirak when the Israeli Air Force attacked and partially destroyed Iraq’s Osirak nuclear reactor, killing ten Iraqi soldiers and one French technician.
However, the US attack can be seen as more serious because it has been launched in a far more fragile geopolitical environment. Moreover, the state violating the legal rules is the erstwhile guardian of the legal order – the USA.
The attacks appear to be the logical follow through of Trump’s withdrawal from the joint comprehensive plan of action (JCPOA) in 2018. This was the Obama-era agreement that significantly limited Iran’s enrichment of nuclear material. For Trump, that negotiated deal was imperfect, as it relied on ongoing Iranian restraint. His decision to unleash US bombers was designed to end the nascent Iranian nuclear threat once and for all.
But such unilateral actions rarely result in such black and white results. And this situation shows every indication of being no different. It is for this reason that negotiated solutions and agreed legal frameworks are generally regarded as better long-term solutions than military force.
A significant inhibition on the use of force to remove nuclear threats has been its lack of justification under international law. When the administration of George W Bush decided to launch its invasion of Iraq in 2003, the US, UK and Australian governments that spearheaded the invasion relied on the express legal justification that Iraq was already in breach of existing UN security council resolutions that required it to be disarmed of all weapons of mass destruction (WMD).
For his part, Trump relied on the argument that Iran’s nuclear facilities already posed an imminent threat to US security. This argument had been undermined by none other than Trump’s director of national intelligence, Tulsi Gabbard, just weeks previously.
Gabbard testified before Congress in March that the US “continues to assess that Iran is not building a nuclear weapon and Supreme Leader Khamenei has not authorised the nuclear weapons programme he suspended in 2003”.
Trump, who has a habit of ignoring his intelligence community, dismissed Gabbard’s assessment, saying: “I don’t care what she said. I think they’re very close to having it”.
No legal justification
One thing that is striking about the June 22 US bombing campaign is the cursory attention given to any substantive legal justification. This stands in distinct contrast to the attempts made by Bush – however much they strained the law to breaking point – to justify his 2003 use of force.
The US ambassador to the United Nations, Dorothy Camille Shea, made only the most limited of references to the legality of the action in her speech to the UN security council a day after the US strikes.
In our book, Drones, Force, and Law, we show that the defining mark of an international society is that states recognise the need to give an account of their behaviour in terms of the accepted legal rules.
Even when policymakers know that they are breaking established interpretations of the law, they rarely admit this publicly. They seek to offer a legal justification – however strained and implausible – that is in conformity with the rules.
If a state openly admitted that it was violating the law, justifying its conduct only in terms of its own values and beliefs, it would be treating others with contempt. It would, to quote the respected Australian international relations theorist Hedley Bull, “place in jeopardy all the settled expectations that states have about one another’s behaviour”.
This is exactly what Trump is doing by not seeking to justify the US’s use of force expressly in legal terms. It invites others to mount a broader assault on international law itself, as something that is both fragile and hypocritical in the hands of the powerful.
Unintended consequences
The US has justified its attack as aimed at preventing Iran from developing a nuclear weapon. But a perverse consequence of the attack is that it is likely to further erode the norm against proliferation. There are two key arguments here.
The first is that all three Iranian facilities attacked were, before Israel’s initial attack on Iran on June 12, under International Atomic Energy Agency (IAEA) safeguards. So, by attacking these installations, the US – like Israel four decades ago at Osirak – was signalling that it had no confidence in the multilateral mechanisms of non-proliferation. It was essentially saying that it had to rely on unilateral action.
The second is that a strike aimed at preventing Iran from acquiring nuclear weapons may instead push it – and others – to accelerate weaponisation efforts. These US attacks may confirm for many the lesson drawn earlier from Iraq, and subsequently from Libya and Ukraine: states without nuclear weapons are vulnerable to regime change or military action.
If this is the lesson that is drawn by those who live in dangerous neighbourhoods and who are increasingly worried about their security, then the US action could serve as a further spur to nuclear proliferation.
Trump has shown a worrying propensity to ignore legal constraints on his power both domestically and internationally. This action, less than six months into his administration, is an alarming harbinger of his contempt for the internationally agreed legal rules restricting the use of force.
David Hastings Dunn has previously received funding from the ESRC, the Gerda Henkel Foundation and the Open Democracy Foundation, and has been both a NATO and a Fulbright Fellow.
Nicholas Wheeler has previously received funding from the Economic and Social Research Council and the Open Society Foundations.
If you’ve ever shared your home with more than one cat, you’ll know how different their personalities can be. One might chirp for food, purr loudly on your lap and greet visitors at the door. Another might prefer quiet observation from a distance.
So why do some cats become chatty companions while others seem more reserved?
A recent study led by wildlife researcher Yume Okamoto and their colleagues at Kyoto University suggests that part of the answer may lie in cat genes.
Cat owners from across Japan were asked to complete a questionnaire about their cat (the Feline Behavioural Assessment and Research Questionnaire) and to take a cheek swab from their pet to provide a DNA sample. The survey included questions about a range of behaviours, including purring and vocalisations directed at people.
The researchers in the recent Japanese study focused on the cats’ androgen receptor (AR) gene, located on the X chromosome. This gene, an essential part of vertebrate biology, helps regulate the body’s response to hormones such as testosterone and contains a section where a short DNA sequence is repeated a variable number of times.
The most ancient form of AR appeared in the common ancestor of all jawed vertebrates, over 450 million years ago, and it controls the formation of male reproductive organs, secondary sexual characteristics and reproductive behaviour. The number of repeats in the gene alters how responsive it is: shorter repeats make the receptor more sensitive to androgens. In other species, including humans and dogs, shorter repeats in the AR gene have been linked with increased aggression and extraversion.
Among 280 spayed or neutered cats, those with the short AR gene variant purred more often. Males with the variant also scored higher for directed vocalisations such as meowing to be fed or let out. Females with the same genotype, however, were more aggressive towards strangers. Meanwhile, cats with the longer, less active version of the gene tended to be quieter. This variant was more common in pedigree breeds, which are typically bred for docility.
Domestication is generally thought to have increased vocal behaviour in cats, so it may seem odd that the version of the gene linked to increased communication and assertiveness is the one also found in wild species such as lynx.
But this study doesn’t tell a straightforward story of domestication selecting for sociable traits. Instead, it points to a more complex picture, one in which certain ancestral traits, such as aggression, may still be useful, especially in high-stress or resource-scarce domestic environments.
Some animals spend a lot of time around humans not because they were bred as companions or livestock, but because they are attracted by our resources. Urban gulls offer an interesting example of how close proximity to humans doesn’t always make animals more docile. In cities, herring gulls and lesser black-backed gulls (both often referred to as seagulls) have become bolder and more aggressive.
Researchers at Liverpool John Moores University found that urban gulls were less fearful of humans and more prone to squabbling than their rural counterparts. In urban areas, where food is highly contested, being assertive gets results, which suggests that life alongside humans can sometimes favour more confrontational behaviour. Little wonder, then, that gulls are vilified in the UK press each breeding season as urban villains, swooping down to snatch lunches and chase pedestrians.
The parallels with cats raise broader questions about how environment and genes shape behaviour. Okamoto and colleagues’ findings may reflect a trade-off: traits linked to the short AR variant, such as greater vocalisation and assertiveness, might help a cat win human attention in uncertain or competitive settings, but the same traits may also manifest as aggression. Domestication, in other words, can produce a mix of desirable and challenging characteristics.
It’s worth bearing in mind that this kind of variation between individuals is fundamental to the evolution of species. Without variation in behaviour, species would struggle to adapt to changing environments. For cats, this means there may be no single ideal temperament, but rather a range of traits that prove useful under different domestic conditions.
From cats to gulls, life alongside humans doesn’t always produce gentler animals. Sometimes, a little pushiness pays off.
Grace Carroll does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.