The most striking feature of the Australian economy in the 21st century has been the exceptionally long period of fairly steady, though not rapid, economic growth.
The deep recession of 1989–91, and the painfully slow recovery that followed, led most observers to assume another recession was inevitable sooner or later.
And nearly everywhere in the developed world, the Global Financial Crisis of 2007–08 did lead to recessions comparable in length and severity to the Great Depression of the 1930s.
Through a combination of good luck and good management, Australia avoided recession, at least as measured by the commonly used criterion of two successive quarters of negative GDP growth.
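For readers who want to see that criterion in action, here is a minimal sketch (ours, not the author's) of the two-quarters rule applied to invented growth figures – they are illustrative, not real ABS data:

```python
# A minimal sketch of the "technical recession" criterion: two successive
# quarters of negative GDP growth. Growth figures are invented for
# illustration, not real ABS data.

def technical_recession_quarters(growth_rates):
    """Return indices of quarters where this quarter and the previous
    one both show negative quarter-on-quarter growth."""
    return [
        i for i in range(1, len(growth_rates))
        if growth_rates[i - 1] < 0 and growth_rates[i] < 0
    ]

# A single negative quarter (index 2) does not trigger the rule;
# two in a row (indices 5 and 6) do.
growth = [0.6, 0.4, -0.3, 0.5, 0.7, -0.2, -0.1, 0.3]
print(technical_recession_quarters(growth))  # -> [6]
```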
Recessions cause unemployment to rise in the short run. Even after recessions end, the economy often remains on a permanently lower growth path.
Good management – and good luck
The crucial example of good management was the use of expansionary fiscal policy in response to both the financial crisis and the COVID pandemic. Governments supported households with cash payments as well as increasing their own spending.
The most important piece of good luck was the rise of China and its appetite for Australian mineral exports, most notably iron ore.
This demand removed the concerns about trade deficits that had driven policy in the 1990s, and has continued to provide an important source of export income. Mining is also an important source of government revenue, though this is often overstated.
Still more fortunately, the Chinese response to the Global Financial Crisis, like that in Australia, was one of massive fiscal stimulus. The result was that both domestic demand and export demand were sustained through the crisis.
The shift to an information economy
The other big change, shared with other developed countries, has been the replacement of the 20th century industrial economy with an economy dominated by information and information-intensive services.
The change in the industrial makeup of the economy can be seen in occupational data.
In the 20th century, professional and managerial workers were a rarefied elite. Now they are the largest single occupational group at nearly 40% of all workers. Clerical, sales and other service workers account for 33% and manual workers (trades, labourers, drivers and so on) for only 28%.
The results are evident in the labour market. First, the decline in the relative share of the male-dominated manual occupations has been reflected in a gradual convergence in the labour force participation rates of men (declining) and women (increasing).
Suddenly, work from home was possible
Much more striking than this gradual trend was the (literally) overnight shift to remote work that took place with the arrival of COVID lockdowns.
Despite the absence of any preparation, it turned out the great majority of information work could be done anywhere workers could find a desk and an internet connection.
Despite strenuous efforts by managers, remote or hybrid work has remained common among information workers.
CEOs regularly demand a return to full-time office work. But few if any have been prepared to pay the wage premium that would be required to retain their most valuable (and mobile) employees without the flexibility of hybrid or remote work.
The employment miracle
The confluence of all these trends has produced an outcome that seemed unimaginable in the year 2000: a sustained period of near-full employment – that is, a situation in which almost anyone who wants a job can get one.
The unemployment rate has dropped from 6.8% in 2000 to around 4%. While that is higher than in the post-war boom of the 1950s and 1960s, some gap is probably inevitable given the greater diversity of both the workforce and the range of jobs available.
Matching workers to jobs was relatively easy in an industrial economy where large factories employed thousands of workers. It’s much harder in an information economy where job categories include “Instagram influencer” and “search engine optimiser”.
As we progress through 2025, it is possible all this may change rapidly, for better or for worse.
The chaos injected into the global economy by the Trump Administration will radically reshape patterns of trade.
Meanwhile the rise of artificial intelligence holds out the promise of greatly increased productivity – but also the threat of massive job destruction. Economists, at least, will be busy for quite a while to come.
John Quiggin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The world had its eyes on Sydney in 2000. On January 1, a million people lined the harbour to ring in the new millennium (though some said it was actually the final year of the old one).
US television reporters called it “the biggest party in Australian history”. Bill Gates, chairman of Microsoft, whose corporation seemed to represent the coming age, was among those watching on.
Sydney offered not only a world-leading party, but also a litmus test for the much-feared Y2K bug, which threatened to knock planes out of the sky and bring the global economy to a halt. Australia and New Zealand were said to be the “tripwire for the world’s computer systems”.
It was fine in the end, although plenty of work had in fact been undertaken behind the scenes to make Australia’s systems more millennium-proof than they might have been.
This was arguably the defining feature of Australia in the year 2000: a confident display for the world concealing a lot of angst and uncertainty. Australia was the “oldest continent on Earth”, the US broadcasters told their viewers, but it was “much more of an Asian nation”, and much closer to the rest of the world “thanks to technology”.
Those confident claims would probably have surprised many Australians. Theirs was an old country trying to keep up with a new, interconnected world, and also a relatively young one trying to reconcile itself with the ancient cultures that its settler forebears had dispossessed.
A curated Australia
In September, the world’s sporting and political elite, followed by a train of journalists, arrived in Sydney for the 2000 Olympic Games. It had been years in the making, and every level of government was involved. There were no fewer than 47,000 volunteers.
There was something for everyone in the well-curated opening ceremony. The event opened with the crack of a stockman’s whip and a fleet of flag-waving bushmen on horseback. There were highly sanitised displays of European arrival, pastoral settlement and a tribute to an armour-clad colonial Victorian bushranger that must have baffled those viewers watching from abroad who had not seen a Sidney Nolan painting before.
Ancient stories and new cultural sensibilities were on display too. There were stylised performances of the Dreaming, striking First Nations dances and the distinctive sounds of the didgeridoo. A section entitled “Arrivals” recognised the importance of migration in the nation’s story.
A young Aboriginal sprinter, Cathy Freeman, lit the cauldron in what became one of the iconic images of the year. The cauldron’s hydraulics unfortunately got stuck as it ascended, and the flame was mere seconds from snuffing out in what could have been a global embarrassment. But big ambitions incur big risks.
This global performance of Australian-ness was arrestingly simple: that of a nation confident in its own diversity and capable of catering to everyone’s tastes.
Even the musical selections seemed to reconcile the needs of the youth (with performances from a young Vanessa Amorosi and even younger Nikki Webster), and the more mature (represented by John Farnham and Olivia Newton-John).
Australia’s athletes had their best ever showing with 58 medals, including Freeman’s own gold.
Not quite comfortable, not quite relaxed
The Olympics masked as much as they revealed.
In 2000, many white Australians still weren’t sure if theirs was, or should be, a multicultural society.
The reactionary Pauline Hanson was out of parliament for the time being, but her One Nation Party had won 7.5% of the vote in New South Wales in the March 1999 state election, and nearly 23% of the vote in Queensland the year before.
Eight weeks before millennium day, Australians had roundly rejected two referendum proposals: one to become a republic, the other to insert a constitutional preamble that, among other things, recognised Indigenous Australians as “the nation’s first people”.
But whether Hanson liked it or not, her lifetime had coincided with great demographic and social change.
In 1976, roughly 1.8% of the population said they were born in Asia or the Middle East. In the 2001 census, 1.6% of the population were born in China or Vietnam alone, and many more were the descendants of migrants from these places.
The Aboriginal and Torres Strait Islander population had more than doubled over the same period, while those identifying as Christian decreased from nearly 79% in 1976 to 56% in 2001.
This increasingly diverse Australia claimed to be on a journey to “reconciliation”. That process had been sorely tested during the nasty debates about land rights and the Stolen Generations.
Corroboree 2000, held on May 27 in Sydney, saw the Council for Aboriginal Reconciliation and the nation’s political leaders present their visions for the next phase of national healing. The leaders symbolically left their handprints on a “reconciliation canvas”.
The following day, 250,000 Australians walked across the Sydney Harbour Bridge in a moving display of togetherness. John Howard, the prime minister, declined to participate.
But his treasurer, Peter Costello, made a point of showing up for a similar event in Melbourne that December, leading Victorian Liberals and another 200,000 or so Australians.
Their different approaches showed that the past was still a troubling present. Howard rebuffed suggestions of a treaty between Indigenous and settler Australians and maintained his refusal to apologise on behalf of the Commonwealth to the Stolen Generations, though all the states had done so by this time.
The idea of such an apology was not as popular then as it seemed later on. The prime minister was sensitive to the fact that his was “an unpopular view with a lot of people”, but an opinion poll in The Australian newspaper showed a majority of voters were opposed to a national apology.
Two survivors of the Stolen Generations, Peter Gunner and Lorna Cubillo, sued the Commonwealth for damages in 2000, giving their opponents the chance to challenge the legitimacy of their experiences. None of this looked like a nation that was as “comfortable and relaxed” as Howard had hoped it would be under his watch.
Border politics
Australian collective memory often gravitates toward 2001, the year of the Tampa affair and the September 11 terrorist attacks in New York.
But Australia’s border was already highly politicised in 2000.
In January, a boat arrived from Indonesia carrying 54 Christians fleeing religious conflict. They spent ten weeks at Port Hedland Immigration Detention facility, from which 39 went back to Indonesia and only 15 moved on to Adelaide to build new lives.
Port Hedland and other detention centres made the news for all the wrong reasons. There were riots, hunger strikes and multiple breakouts. Authorities responded with upgraded security perimeters, character checks, and strip searches without warrants.
Frustrated refugees set fire to South Australia’s Woomera facility, which former prime minister Malcolm Fraser publicly condemned as a “hell-hole”.
In an end-of-year reflection for The Age newspaper, Gary Tippet said there had been a “touch of mean-spiritedness” about the handling of it all. Chris Wallace rightly suggests 2000 was a crucial moment in the “march towards an absolute offshore, extraterritorial approach” to refugees in Australia.
In the intervening quarter-century, Australian officials have made mean-spiritedness an art form at the border and on the seas.
First-rate democracy, third-rate economy
Compared to the many legal challenges that came out of the US presidential contest in November 2000, Australia’s elections looked pretty smooth and sensible. The US seemed to have a backward democracy grafted onto its world-leading, information-age economy.
Australia looked the opposite: a first-rate democracy with what looked increasingly like a “branch-office economy”.
Reformers had tried for 20 years to make Australia efficient and competitive, but as one editorial in The Australian Financial Review explained, the country still suffered from its “old economy image”.
Certainly, Australia still sold its minerals and farm products to the world in exchange for quality cars and cutting-edge computers.
With global capitalists still enthralled by the tech boom (though it was soon to become the “tech wreck”), they had little need for the Aussie dollar.
The currency’s value declined through the year to just 50 US cents, and it would fall further in the following months. On its own, this mattered little, but a quarter of negative growth at the end of the year meant, as Paul Kelly later wrote, an “election-year recession” seemed a “real threat”.
In the meantime, the much-debated Goods and Services Tax took effect around midnight on June 30 (a few hours later for businesses trading through the night).
The 10% consumption tax was a big deal. Costello said in his memoir the “prices of three billion products were to change all at the same time”.
The measure was politically brave, but soon became unpopular, helping raise petrol prices and alienate small business owners.
The punters were pretty confident the Howard government was heading for defeat in 2001. They were wrong.
Between the old and new
The pace of social change accelerated from 2000.
In the 2021 census, 2.6% of the population said they were born in India, and a further 3.2% in China or Vietnam. The Aboriginal and Torres Strait Islander population had more than doubled over two decades, making up 3.2% of the total population in 2021.
People increasingly related to their economy differently, too. Half of the workforce had been unionised in the 1980s, but coverage fell to roughly a quarter in 2000 and just 12.5% in 2022.
These and other changes make our politics look different from that of 25 years ago. Nailbiter elections are now more common than thumping majorities and attitudes toward the once-feared “minority government” have softened.
For all that, many of the challenges of 2000 are still with us.
Many Australians are less tolerant of overt racism than they once were, but the 2023 Voice referendum and our offshore detention regime remind us that race still matters in this country.
Kevin Rudd apologised to the Stolen Generations in 2008, but Treaty and Truth-Telling are left unresolved.
And for all our talk about human capital and the digital economy, resources make up a much higher share of our total export mix today than in 2000.
A quarter-century on, Australia is still caught between the old and the new.
Dr Joshua Black is a Postdoctoral Research Fellow at The Australia Institute.
In the remnant rainforests of coastal far-north Queensland, bushwalkers may be lucky enough to catch a glimpse of a diminutive marsupial that’s the last living representative of its family.
The musky rat-kangaroo (Hypsiprymnodon moschatus) weighs only 500 grams and looks a bit like a potoroo. It’s part of a lineage that extends back to before kangaroos evolved their distinctive hopping gait.
Unlike their bigger relatives, muskies can be seen out and about during the day, foraging in the forest litter for fruits, fungi and invertebrates.
As the only living macropodoid (the group that includes kangaroos, wallabies, potoroos and bettongs) that doesn’t hop, they can provide a crucial insight into how and when this iconic form of locomotion evolved in Australia.
Our study, published in Australian Mammalogy today, aimed to observe muskies in their native habitat in order to better understand how they move.
Muskies can shed light on the evolution of kangaroo hops, but they haven’t been studied in detail. Amy Tschirn
Why kangaroos are special
If we look around the world, hopping animals are quite rare. Hopping evolved once in macropodoids, four times in rodents, and probably once in an extinct group of South American marsupials known as argyrolagids.
However, the vast majority of animals that hop are really small. The only hopping animals with body masses over 500 grams are kangaroos. And Australia used to have a lot more kangaroo species, many of them quite large.
Despite the abundance of fossil kangaroos, we still don’t really know why they evolved their hopping gait, especially given it only really becomes more efficient at body masses over five kilograms. Hypotheses range from predator escape, to energy preservation, to the opening of vegetation as Australia shifted to a drier climate.
Muskies can sometimes be seen foraging for fallen fruit in the leaf litter in the dense rainforests of far northern Queensland. Aaron Camens
Why muskies are key in roo evolution
Muskies are the last living member of the Hypsiprymnodontidae, a macropodoid family that branched off early in kangaroo evolution. For this reason, it is thought muskies may move in a similar way to early kangaroo ancestors.
Studies on kangaroo evolution will often mention locomotion in muskies, but only in passing. And only a single, brief, first-hand description of locomotor behaviour in muskies has actually been published, in 1982. The authors observed that muskies moved their hindlimbs together in a bound and that all four limbs were used, even at fast speeds.
So, we set out to answer the question: can H. moschatus hop? And if not, what form of locomotion does it use?
Using high-speed video recordings, we studied the sequence in which muskies place their four feet on the ground, and the relative timing and duration of each footfall.
The musky rat-kangaroo (Hypsiprymnodon moschatus) is the only macropodoid not to hop; instead, it bounds over obstacles on the forest floor. Amy Tschirn
Through this gait analysis, we determined that muskies predominantly use what is called a “bound” or “half-bound” gait. Bounding gaits are characterised by the hindfeet moving together in synchrony – just like when bipedal kangaroos hop. In the case of muskies, the forefeet (or “hands”) also generally move together in close synchrony.
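To make the idea concrete, here is a simplified sketch (our own illustration, not the study's actual analysis code) of how hindfoot synchrony might separate a bound from a gallop-like gait. The timestamps and the 10% threshold are assumptions for illustration:

```python
def classify_hindlimb_gait(left_hind, right_hind, stride_duration,
                           sync_threshold=0.10):
    """Classify one stride from hindfoot touchdown times (in seconds).

    If the two hindfeet land within `sync_threshold` of a stride cycle
    of each other, the stride is bound-like; otherwise the footfalls
    are staggered, as in a gallop."""
    phase_lag = abs(left_hind - right_hind) / stride_duration
    if phase_lag <= sync_threshold:
        return "bound-like (hindfeet in synchrony)"
    return "staggered (gallop-like)"

# Hypothetical touchdown times from one high-speed video stride:
print(classify_hindlimb_gait(0.02, 0.03, stride_duration=0.40))  # bound-like: 2.5% lag
print(classify_hindlimb_gait(0.02, 0.18, stride_duration=0.40))  # staggered: 40% lag
```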
No other marsupial that moves on all fours is known to use this distinctive style of movement to the same extent as muskies. Rather, other species tend to use a combination of the half-bound and some form of galloping (the gait that horses, cats and dogs use) or hopping.
From all fours to hopping
We were also able to confirm that tantalisingly brief observation from the 1980s: even when travelling at high speeds, muskies always use quadrupedal gaits, never rearing up on just their back legs.
They are, therefore, the only living kangaroo that doesn’t hop.
Combined with further investigation of their anatomy, these observations help us get closer to understanding how and why kangaroos adopted their distinctive bipedal hopping behaviours.
These results also signal a potential pathway to how bipedal hopping evolved in kangaroos. Perhaps it started with an ancestor that moved about on all fours like other marsupials, such as brush-tail possums, then an animal that bounded like the muskies, and finally evolved into the iconic hopping kangaroos we see in Australia today.
However, we are no clearer on how the remarkable energy economy of kangaroo movement evolved, or why hopping kangaroos got so much bigger than hopping rodents.
The next part of the research needs to focus on that and will be informed by key fossil discoveries from early periods in kangaroo evolution.
There’s more research to be done, but understanding musky gait in detail is a great first step. Amy Tschirn
Amy Tschirn received funding from an Australian Government Research Training Program Scholarship and an Australian Research Council Discovery Grant (to G.J.P) during this project.
Aaron Camens and Peter Bishop do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Siobhan Mclernon, Senior Lecturer, Adult Nursing and co-lead, Ageing, Acute and Long Term Conditions. Member of Health and Well Being Research Center, London South Bank University
As a nurse working in neurocritical care, I witnessed the sudden and devastating effects of stroke on survivors and their carers.
Following my nursing career, I became a researcher specialising in stroke. Knowledge of stroke risk factors in the general public is poor, so stroke prevention is a priority for public health.
Stroke is a leading cause of death and disability in England – yet it is largely preventable. It’s often considered an older person’s illness but, although stroke risk does increase with age, it can happen at any time of life. In fact, stroke incidence is increasing among adults below the age of 55 years.
Stroke risk factors that tend to be more common among older people – such as high blood pressure (hypertension), high cholesterol, obesity, diabetes, smoking, physical inactivity and poor diet – are increasingly found in younger people. Other lifestyle risks include heavy alcohol consumption or binge drinking and recreational drugs such as amphetamines, cocaine and heroin.
Some risk factors are not modifiable such as age, sex, ethnicity, family history of stroke, genetics and certain inherited conditions. Women, for example, are particularly susceptible to strokes – and women of all ages are more likely than men to die from a stroke.
Stroke risks unique to women include pregnancy and some contraceptive pills (especially for smokers), as well as endometriosis, premature ovarian failure (before 40 years of age), early-onset menopause (before 45 years of age) and oestrogen for transgender women.
Some risk factors are social rather than biological, however. Studies have found that people with a lower income and education level are at a higher risk of having a stroke. This is due to a combination of factors. Unhealthy lifestyle habits, such as smoking, heavier drinking and lower physical activity levels are more common in people with lower incomes.
However, research also shows that people with lower socioeconomic status are less likely to receive good quality healthcare than people with higher incomes.
But, regardless of biological or social risk factors, there are things you can do – right now – to reduce your risk of having a stroke.
Essential eight
1. Stop smoking Smokers are more than twice as likely to have a stroke as non-smokers. Smoking damages blood vessel walls, raises blood pressure and heart rate, and reduces oxygen levels. It also causes blood to become sticky, further increasing the risk of blood clots that can block blood vessels and cause a stroke.
2. Keep blood pressure in check High blood pressure damages the walls of blood vessels, making them weaker and more prone to rupture or blockage. It can also cause blood clots to form, which can then travel to the brain and block blood flow, leading to a stroke. If you’re over 18 years of age, get your blood pressure checked regularly so, if you do show signs of developing high blood pressure, you can nip it in the bud and make appropriate changes to your lifestyle to help reduce your risk of stroke.
3. Keep an eye on your cholesterol According to the UK Stroke Association your risk of a stroke is nearly three and a half times higher if you have both high cholesterol and high blood pressure. To lower cholesterol, aim to keep saturated fat – found in fatty meats, butter, cheese, and full-fat dairy – below 7% of your daily calories, stay active and maintain a healthy weight.
4. Watch your blood sugar High blood glucose levels are linked to an increased risk of stroke. This is because high blood sugar damages blood vessels, which can lead to blood clots that travel to the brain. To reduce blood glucose levels, try to take regular exercise, eat a balanced diet rich in fibre, drink enough water, maintain a healthy weight, and try to manage stress.
5. Maintain a healthy weight Being overweight is one of the main risk factors for stroke. It is associated with almost one in five strokes, and increases your stroke risk by 22%. Being obese raises that risk by 64%. Carrying too much weight increases your risk of high blood pressure, heart disease, high cholesterol and type 2 diabetes, which all contribute to higher stroke risk.
6. Follow a Mediterranean diet One way to eat a fibre-rich balanced diet and maintain a healthy weight is to follow a Mediterranean diet. This has been shown to reduce the risk of stroke, especially when supplemented with nuts and olive oil.
7. Sleep well Try to get seven to nine hours of sleep daily. Too little sleep can lead to high blood pressure, one of the most important modifiable risk factors for stroke. Too much sleep, however, is also associated with increased stroke risk, so try to stay as active as possible so you can sleep as well as possible.
8. Stay active The NHS recommends avoiding prolonged sedentary behaviour and aiming for at least 150 minutes of moderate intensity activity or 75 minutes of vigorous intensity activity a week, spread evenly over four to five days (or every day). Do strengthening activities on at least two days a week. One way to tally the weekly target is sketched below.
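Here is that small sketch. The 2:1 moderate-to-vigorous equivalence used below is a common convention in activity guidelines, assumed here for illustration rather than quoted from the NHS:

```python
def meets_weekly_activity_target(moderate_minutes, vigorous_minutes):
    """True if the week reaches the equivalent of 150 moderate minutes,
    counting each vigorous minute as two moderate minutes (assumed 2:1)."""
    return moderate_minutes + 2 * vigorous_minutes >= 150

print(meets_weekly_activity_target(120, 20))  # True  (160 equivalent minutes)
print(meets_weekly_activity_target(60, 30))   # False (120 equivalent minutes)
```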
The good news is that while the effects of stroke can be devastating and life-changing, it is largely preventable. Adopting these eight simple lifestyle changes can help to reduce stroke risk and optimise both heart and brain health.
Siobhan Mclernon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Canada’s Parliament has been unanimous in its response to Donald Trump’s annexation talk: “Canada is not for sale.” But Canada’s head of state, King Charles, has remained largely silent on the matter — until recently.
Over the last several weeks, observers have started to pick up on subtle signs of support for Canadians from the King. But many people have no doubt been wondering why there’s not been a direct statement of support from King Charles.
The answer to that question isn’t as simple as many people might think.
King of Canada
Since 1689, Britain has been a constitutional monarchy. The sovereign is the head of state, but the prime minister leads the government. As such, the King can’t interfere with politics. He is supposed to remain neutral and be the embodiment of the nation.
This crucial separation between palace and Parliament was solidified in Canada and throughout the Commonwealth in 1931 with the Statute of Westminster. In 1953, Canada’s Royal Style and Titles Act separated the British Crown from those of the other Commonwealth realms. Queen Elizabeth became the first sovereign ever to be called Queen of Canada.
As a constitutional monarch, King Charles is bound by parliamentary limitations on his authority. He cannot act without taking advice from the prime ministers in his various realms.
This means King Charles can’t make a political statement about the ongoing tensions between Canada and the U.S. without the green light from Ottawa. When asked about the situation in January, a palace official said simply that this is “not something we would comment on.”
As one such observer put it: “For Canadians disappointed that King Charles has not commented on President Trump’s threats to annex Canada: in his capacity as King of Canada, he can only act on the advice of his Canadian first minister, i.e. Justin Trudeau.”
Or, at this moment, Mark Carney.
Signs of support
The King met with Trudeau at Sandringham, the royal family’s private estate in Norfolk, England, on March 3. This meeting seems to have prompted a series of symbolic gestures demonstrating the monarchy’s solidarity with Canadians.
The next day, the King conducted an inspection of the British aircraft carrier HMS Prince of Wales in his capacity as head of the Armed Forces. Canadian medals and honours adorned his naval dress uniform during the inspection.
A week later, the King planted a red maple tree at Buckingham Palace to honour Queen Elizabeth’s commitment to the preservation of forests and the bonds among Commonwealth nations.
On March 12, the King met with representatives from the Canadian Senate.
He presented a ceremonial sword to Gregory Peters, the Usher of the Black Rod (one of the Senate’s chief protocol officers). Raymonde Gagné, the speaker of the Senate, was also present for that meeting.
And on March 17, the King met with Carney as part of the new prime minister’s whirlwind diplomatic tour of western Europe.
Some observers even pointed to the Princess of Wales’s red dress at the Commonwealth Day Service of Celebration on March 10 as yet another nod of recognition for Canada.
Soft power and the Royal Family
These sorts of gestures are examples of what is known as “soft power.” Unlike the hard power of military and economic force used by governments, soft power describes any number of ways that people or groups can influence others through culture, personal diplomacy and even fashion.
One of the best known forms of the monarchy wielding soft power is through the use of state visits. At the British prime minister’s request, world leaders are invited to London by the sovereign. The red carpet is rolled out for them, they’re wined and dined in lavish dinners at Buckingham Palace and they often make a speech to Parliament.
These state visits are a way for the Royal Family to use their soft power to positively influence diplomatic relations.
In February, British Prime Minister Keir Starmer presented Trump with an invitation from the King for a second state visit to the U.K. So far, no date for the trip has been announced, but the King’s meetings with Trudeau and Ukraine’s Volodymyr Zelenskyy reportedly irritated Trump.
It remains to be seen how King Charles navigates his constitutional role as both king of the United Kingdom and of Canada. Will Trump’s state visit only be about British interests? Or will Charles use it as a chance to address the concerns of his Canadian subjects?
Justin Vovk received funding from the Social Sciences and Humanities Research Council of Canada. Justin Vovk is an advisory board member for the Institute of the Study of the Crown in Canada.
Most of the United States’ major climate regulations are underpinned by one important document: It’s called the endangerment finding, and it concludes that greenhouse gas emissions are a threat to human health and welfare.
The Trump administration is vowing to eliminate it.
Environmental Protection Agency Administrator Lee Zeldin referred to the 2009 endangerment finding as the “holy grail of the climate religion” when he announced on March 12, 2025, that he would reconsider the finding and all U.S. climate regulations and actions that rely on it. That would include rules to control planet-warming emissions of greenhouse gases like carbon dioxide and methane from power plants, vehicles and oil and gas operations.
But revoking the endangerment finding isn’t a simple task. And doing so could have unintended consequences for the very industries Trump is trying to help.
EPA Administrator Lee Zeldin announces plans to reconsider more than 30 climate regulations.
As a law professor, I have tracked federal climate regulations and the lawsuits and court rulings that have followed them over the past 25 years. To understand the challenges, let’s look at the endangerment finding’s origins and Zeldin’s options.
Origin and limits of the endangerment finding
In 2007, the U.S. Supreme Court ruled in Massachusetts v. EPA that six greenhouse gases are pollutants under the Clean Air Act and that the EPA has a duty under the same law to determine whether they pose a danger to public health or welfare.
The court also ruled that once the EPA made an endangerment finding, the agency would have a mandatory duty under the Clean Air Act to regulate all sources that contribute to the danger.
The court emphasized that the endangerment finding was a scientific determination and rejected a laundry list of policy arguments made by the George W. Bush administration for why the government preferred to use nonregulatory approaches to reduce emissions. The court said the only question was whether sufficient scientific evidence exists to determine whether greenhouse gases are harmful.
The finding was challenged and upheld in 2012 by the U.S. Court of Appeals for the District of Columbia Circuit. In that case, Coalition for Responsible Regulation v. EPA, the court found that the “body of scientific evidence marshaled by the EPA in support of the endangerment finding is substantial.” The Supreme Court declined to review the decision. The endangerment finding was updated and confirmed by the EPA in 2015 and 2016.
Challenging the endangerment finding
The scientific basis for the endangerment finding is stronger today than it was in 2009.
The Intergovernmental Panel on Climate Change’s latest assessment report, involving hundreds of scientists and thousands of studies from around the world, concluded that the scientific evidence for warming of the climate system is “unequivocal” and that greenhouse gases from human activities are causing it.
According to the National Climate Assessment released in 2023, the effects of human-caused climate change are already “far-reaching and worsening across every region of the United States.”
Summer temperatures have climbed in much of the U.S. and the world as greenhouse gas emissions have risen. Fifth National Climate Assessment
During President Donald Trump’s first term, then-EPA Administrator Scott Pruitt considered repealing the endangerment finding but ultimately decided against it. In fact, he relied on it in proposing the Affordable Clean Energy Rule to replace President Barack Obama’s Clean Power Plan for regulating emissions for coal-fired power plants.
What happens if the EPA revokes the endangerment finding?
For the Trump administration to now revoke that finding, Zeldin must first recruit new members of the EPA’s Science Advisory Board to replace those dismissed by the Trump administration. Congress created the board in 1978 to provide independent, unbiased scientific advice to the EPA administrator, and it has consistently supported the 2009 endangerment finding.
Zeldin must then initiate rulemaking in compliance with the Administrative Procedure Act, provide the opportunity for public comment and respond to comments that are likely to be voluminous. This process could take several months if done properly.
If Zeldin then decides to revoke the endangerment finding, lawsuits will immediately challenge the move.
Even if Zeldin is able to revoke the finding, that does not automatically repeal all the rules that rely on it. Each of those rules must go through separate rulemaking processes that will also take months.
Zeldin could simply refuse to enforce the rules on the books while he reconsiders the endangerment finding.
However, a blanket policy abdicating any enforcement responsibility could be challenged in lawsuits as arbitrary and capricious. Further, the regulated industries would be taking a chance if they delayed complying with regulations only to find the endangerment finding and climate laws still in place.
Zeldin has also offered arguments against the finding itself. His first argument is that the 2009 endangerment finding did not consider costs. However, that argument was rejected by the D.C. Circuit Court in Coalition for Responsible Regulation v. EPA. Cost becomes relevant once the EPA considers new regulations – after the endangerment finding.
Moreover, in a unanimous 2001 decision, the Supreme Court in Whitman v. American Trucking Associations held that the EPA cannot consider cost in setting air quality standards.
A repeal could backfire
Repealing the endangerment finding could also backfire on the fossil fuel industry.
States and cities have filed dozens of lawsuits against the major oil companies. The industry’s strongest argument has been that these cases are preempted by federal law. In AEP v. Connecticut in 2011, the Supreme Court ruled that the Clean Air Act “displaced” federal common law, barring state claims for remedies related to damages from climate change.
However, if the endangerment finding is repealed, then there is arguably no basis for federal preemption, and these state lawsuits would have legal grounds. Prominent industry lawyers have warned the EPA about this and urged it to focus instead on changing individual regulations. The industry is concerned enough that it may try to get Congress to grant it immunity from climate lawsuits.
To the extent that Zeldin is counting on the conservative Supreme Court to back him up, he may be disappointed.
In 2024, the court overturned the Chevron doctrine, which required courts to defer to agencies’ reasonable interpretations when laws were ambiguous. That means Zeldin’s reinterpretation of the statute is not entitled to deference. Nor can he count on the court overturning its Massachusetts v. EPA ruling to free him to disregard science for policy reasons.
Patrick Parenteau does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Sarah Stroup, Professor of Political Science; Director, Conflict Transformation Collaborative, Middlebury
The U.S. and U.K. used to be major funders of global immunization programs for children. AP Photo/Sunday Alamba, File
The Trump administration’s dismantling of the United States Agency for International Development is unconstitutional, a federal judge ruled on March 18, 2025. The court order to pause the agency’s shuttering came days after Secretary of State Marco Rubio said that 83% of its programs had been cut.
USAID was created in 1961 as the lead agency for U.S. international development. Until recently, it funded health and humanitarian aid programs in more than 130 countries. Despite the administration’s claim of cost-cutting, USAID was a relatively small and economical operation. Its US$40 billion budget accounted for just 0.7% of annual federal spending. Congress also required regular reporting and evaluations of USAID, helping to ensure substantial oversight of how it spent taxpayer dollars.
Both the U.S. and British foreign aid programs have long prompted heated debates over the proper relationship between development, diplomacy and national security. The two countries have long been among the top five providers of development assistance worldwide, and both USAID and Britain’s Department for International Development (DFID) have played leading roles in the development community.
Countries give foreign aid for both altruistic and self-interested reasons. Treating global diseases and addressing civil conflicts is a way for wealthy Western governments to limit threats that could destabilize their countries, as well as the rest of the world. It also burnishes their reputation and encourages cooperation with other governments.
Scholars from across the political spectrum and around the world have questioned the general efficacy of foreign assistance, arguing that these programs are designed to serve the interests of donors, not the needs of recipients. Other development experts contend that foreign aid programs, while imperfect, have still made meaningful progress in improving health, education and freedoms.
Britain’s DFID was created in 1997 as a standalone, Cabinet-level department deliberately insulated from partisan politics. It quickly developed a reputation as a model donor, even among skeptics of international aid.
For example, a staffer at the international medical charity Doctors without Borders told me in a 2006 interview that he had scoffed at the idea of a politics-free aid agency.
Yet, he said, he had found DFID “relatively easier to work with” than other donors.
“I have never heard of someone being told, as a result of accepting DFID funds, what to do, either explicitly or behind closed doors,” he told me.
But its good reputation could not protect DFID. At the height of the COVID-19 pandemic in 2020, then-Prime Minister Boris Johnson announced that DFID would merge with the Foreign Office, Britain’s equivalent of the State Department, to create a new government agency. By uniting aid and diplomacy, Johnson said, the new Foreign, Commonwealth and Development Office would get “maximum value for the British taxpayer”, and he cited the economic impact of COVID to justify his decision.
Foreign aid dropped sharply after the merger, from 0.7% of Britain’s gross national income to 0.5% – a cut of about US$6 billion.
Development professionals decried Johnson’s merger, arguing it could not have happened at a worse time, with the pandemic heightening the need for global health funding. And coming shortly after Brexit, Britain’s withdrawal from the European Union, DFID’s demise further called into question Britain’s commitment to global cooperation.
Less money, less impact
Five years later, it’s not clear that dismantling DFID has made British foreign aid more efficient or effective, as Johnson pledged.
“We have seen evidence of where a more integrated approach has improved the organisation’s ability to respond to international crises and events, which has led to a better result,” reads one 2025 report by the U.K.’s National Audit Office.
Yet, the auditors add, the British government has spent at least £24.7 million – US$32 million – to merge its aid and diplomacy offices, and it failed to track these costs. Nor did the leaders of the merger set out a clear vision for its new purpose.
From the outset, DFID had invested substantially in building expertise in global development, particularly in conflict-ridden states. In 2001, for example, it spent almost 5% of its budget – an unusually high amount – on research and policy analysis to design and assess its programs.
Given the “development expertise that was lost with the merger,” the U.K. government can no longer conduct “the kind of rigorous, long-term focus necessary to make a real impact,” said the Center for Global Development in a recent report.
A 2022 study suggests that DFID’s dismantling was a fundamentally political move, “divorced from substantive analysis of policy or inter-institution relationships.”
Britain’s new Prime Minister Keir Starmer, of the leftist Labour Party, initially promised to boost British foreign aid. But in early March 2025, he backtracked, announcing instead a further cut to foreign aid.
By 2027, the U.K. government will spend just 0.3% of gross national income on overseas aid. That’s roughly $11 billion less than before the merger in 2019.
‘Clear and easy target’
USAID’s budget was much larger than DFID’s, and the administration apparently wants not to streamline U.S. foreign aid but to halt it almost entirely. If this effort succeeds, it will have even more severe effects worldwide, at least in the immediate term.
Development professionals tend to see independent government agencies such as USAID and DFID as better able to prioritize the needs of the poor because their programming is run separately from partisan policies.
Yet standalone agencies are also more visible – and so more vulnerable to political targeting.
DFID was a clear and easy target when Johnson began his pandemic-era budget-slashing. USAID is now suffering a similar fate.
Sarah Stroup does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Speaking on BBC One’s Sunday With Laura Kuenssberg, Wes Streeting, the UK health secretary, expressed concerns that some mental health conditions were overdiagnosed. The Conversation asked two experts to comment on Streeting’s claim. Is the health secretary right?
Mental distress is under-diagnosed – but over-medicalised
Susan McPherson, Professor in Psychology and Sociology, University of Essex
A year ago, the UK’s then prime minister, the Conservative Rishi Sunak, announced that “sick note culture” had gone too far. His work and pensions secretary, Mel Stride, claimed “mental health culture” had gone too far as well.
These statements merged concerns about the affordability of disability benefits with ideas about the overdiagnosis of mental illness. They appeared to be a response to a report from the Resolution Foundation, a thinktank.
The report said that people in their 20s were more likely to be out of work than people in their 40s. The report attributed this to an increase in young people reporting mental distress (from 24% in 2000 to 34% in 2024).
A year on, the UK now has a Labour government. Wes Streeting, the secretary of state for health and social care, is facing criticism for appearing to echo conservative tropes. In an interview about government plans to reduce benefits for disabled people, he agreed that overdiagnosis accounts for an increase in people on benefits due to mental illness. This appears to mirror those media stereotypes about work-shy millennials.
If that is what Streeting meant, then the evidence is not on his side. Ten years ago, a UK national survey of psychiatric symptoms found that a third of people whose psychological symptoms were severe enough to merit a diagnosis did not have one.
More recent research using the UK Longitudinal Household Study grouped people according to whether or not they have a psychiatric diagnosis and whether or not they have psychological symptoms severe enough to merit one. The study found 12 times as many people in the “undiagnosed distress” category (severe symptoms but no diagnosis) as in the overdiagnosed category.
The study also identified significant inequalities. People living with a disability had nearly three times the risk of undiagnosed distress compared with people without a disability.
Women had 1.5 times the risk of undiagnosed distress compared with men. Lesbian, gay or bisexual people were 1.4 times more likely to have undiagnosed distress compared with heterosexual people. People aged 16-24 had the highest risk compared with all other age groups.
This all suggests inequalities in undiagnosed distress are a much bigger problem than overdiagnosis in the UK. Given that many forms of support in the UK depend on having a diagnosis, undiagnosed distress probably means people are not getting the support they need.
However, Streeting also said that too many people “just aren’t getting the support they need. So if you can get that support to people much earlier, then you can help people to either stay in work or get back to work.”
Given this nod towards prevention and the importance of non-medical support, it is conceivable that Streeting’s sentiment may have been about “over-medicalisation” of mental distress rather than overdiagnosis. The difference is important.
The term “diagnosis” reflects a medical model of mental illness. Many would agree that the medical idea of “diagnose and treat” does not serve people with mental distress well. This is because there is a lot of evidence suggesting the underlying causes of mental distress are social, economic, environmental or a result of past trauma.
If Streeting had said “over-medicalised”, he would have been in tune with a growing global concern about over-medicalisation and over-use of medication to treat mental distress, a position advocated by the UN and the World Health Organization.
Despite UK guidelines recommending psychological treatments as first-line interventions for depression, antidepressant prescribing has risen 46% over the last seven years, with over 85 million prescriptions in 2022-23. This has come alongside an increase in long-term use of psychiatric medication with no reduction in mental distress at the population level. If Streeting had said “over-medicalised”, the evidence would have been on his side.
A mental health diagnosis is just a label – and usually an unhelpful one
Joanna Moncrieff, Professor of Critical and Social Psychiatry, UCL
There has been a dramatic escalation in the number of people seeking treatment for mental health problems in recent years. In the year from April 2023 to March 2024, 3.8 million people were in contact with mental health services in England alone – 40% higher than before the COVID pandemic. The figures include 1 million children. One in five 16-year-old girls is in contact with services.
The statistics reveal a tendency to over-medicalise a variety of human problems that was supercharged by the pandemic and is likely to result in harmful effects on physical and mental health.
What many people don’t realise about a mental health diagnosis is that it is nothing like the diagnosis of a physical condition. It doesn’t name an underlying biological state or process that can explain the symptoms someone is experiencing, as it does when someone gets a diagnosis of cancer or rheumatoid arthritis, for example.
A mental health diagnosis doesn’t explain anything. It is simply a label that can be applied to a certain set of problems. The process by which this label is conferred is not scientific or objective and is influenced by commercial, professional and political interests.
In most situations, giving people with mental health problems a diagnostic label is unhelpful. It convinces people they have a biological defect, it leads to ineffective and often harmful medical treatment, and most of the time, it misses the actual problems.
Because getting a diagnosis implies you have a medical condition, it misleads people into thinking that they have an underlying biological abnormality, such as a chemical imbalance, even though there is no good evidence that mental disorders are caused by underlying brain or bodily dysfunctions. Research has shown this makes people pessimistic about their chances of recovery and less likely to improve.
Being diagnosed often leads to being prescribed a psychiatric drug, such as an antidepressant. About 8.7 million people in England now take an antidepressant, half of them on a long-term basis.
Prescriptions for other drugs, such as stimulants (prescribed for a diagnosis of ADHD), are also rising fast, even leading to medication shortages. Yet the evidence that any of these drugs improve people’s wellbeing or ability to function is minimal. Moreover, like all substances that alter our normal biological make-up, particularly those that interfere with brain function, they cause side-effects and health risks.
Antidepressants can cause severe and prolonged withdrawal symptoms, sexual dysfunction (which may persist) and emotional numbing or apathy, among other unwanted effects. Stimulants can cause cardiovascular problems and neurological conditions. The widespread, unwarranted prescribing of these drugs will adversely affect the health of the population.
Giving people a diagnosis can also obscure the nature of the person’s underlying problems and prevent these from being addressed.
Mental health problems are often meaningful reactions to stressful circumstances, such as financial, housing and relationship problems and experiences of abuse, trauma, loneliness and lack of meaning. Reducing over-medicalisation doesn’t necessarily mean fewer services. What we need is different services that provide appropriate support for people’s actual problems, not treatment for medical labels.
We also need ways to excuse people from responsibilities when necessary, without making them feel like they have to take on a “sick” role that implies they are forever ill and helpless.
Much of today’s employment is poorly paid, insecure, boring, exploitative and pressurising. It shouldn’t surprise us that some people find it hard to endure. We need to improve working conditions for everyone, but we also need to support people who find these conditions especially challenging, without having to label them as sick.
Joanna Moncrieff is or has been a co-investigator on grants funded by the UK’s National Institute of Health Research and the Australian government Medical Research Future Fund for studies exploring methods of antidepressant discontinuation. She is co-chair of the Critical Psychiatry Network, an informal and unfunded group of psychiatrists.
Susan McPherson receives funding from NIHR Applied Research Collaboration East of England. She is affiliated with the Labour Party.
A graph I saw in high school appeared to show the Earth breathing.
It was a graph that plotted carbon dioxide in the atmosphere over the course of the 20th century and into the 21st. CO₂ had risen steadily, and then more rapidly, but it hadn’t gone up in a straight line. Each year it had fallen sharply before rising to a new peak, increasing over time in an upwards zig-zag.
What explained this annual, temporary fall in CO₂, the gas that is overwhelmingly responsible for climate change? The answer was photosynthesis, my physics teacher explained – the miracle by which plants turn light and CO₂ into food.
This is how our planet has regulated atmospheric carbon for longer than our species has existed. Fossil fuels are disrupting this equilibrium in several ways.
Spring is dawning in the northern hemisphere, where most of the planet’s green land is situated. Trees are unfurling leaves that will soak up carbon in the air and turn it into new bark, roots and branches. On a global scale, it’s like a gigantic inhalation of carbon. In autumn, when trees shed their leaves, Earth will exhale again.
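The shape of that curve is easy to reproduce. Below is a minimal toy model – the numbers are made up but plausible, not real measurements – in which a steady long-term rise plus a seasonal dip driven by northern-hemisphere photosynthesis yields exactly the upwards zig-zag described above.

```python
# Toy model of the atmospheric CO2 curve: long-term rise + seasonal
# "breathing". All values are illustrative, not observational data.
import numpy as np

months = np.arange(120)                              # ten years of monthly steps
trend = 370.0 + 2.0 * (months / 12.0)                # steady rise of about 2 ppm a year
seasonal = -3.0 * np.sin(2 * np.pi * months / 12.0)  # ~6 ppm swing from photosynthesis
co2_ppm = trend + seasonal

print(co2_ppm[:13].round(1))  # one year of values: climbs, dips, recovers higher
```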
The air we all breathe is increasingly polluted by fossil fuels. That includes products of fossil fuels, like plastic, which is now so ubiquitous that research suggests simply breathing can introduce microscopic fragments into your brain.
Something similar is happening in plants – and it could have global consequences.
Plants are losing their appetite
“Microplastics are hindering photosynthesis, the process by which plants convert energy from the sun into the fruit and vegetables we eat,” says Denis J. Murphy, an emeritus professor of biotechnology at the University of South Wales.
These are the conclusions of a recent study by researchers in China, Germany and the US. Murphy wasn’t involved, but his own research on plant cells – which the tiniest microplastics can infiltrate, damaging the structures involved in photosynthesis – has him worried.
“Given the potential (albeit speculative) risk to global food production, more priority should be given to rigorous scientific research of microplastics and their effects on both crops and the marine life that supports fish and seafood stocks,” he says.
Not so long ago, people wondered if our fossil fuel habit might actually benefit plant photosynthesis. After all, plants eat CO₂. Flooding the atmosphere with more of it each year could only whet their appetites, right?
“The amount of CO₂ used by photosynthesis and stored in vegetation and soils has grown over the past 50 years, and now absorbs at least a quarter of human emissions in an average year,” say ecologists Amanda Cavanagh (University of Essex) and Caitlin Moore (University of Western Australia).
Most of this extra carbon absorption has come from crops and young trees, the pair say, less from mature forests where a lot of the world’s carbon is stored. Cavanagh and Moore say this carbon pump is slowing down, as the other necessary ingredients for photosynthesis – soil nutrients and water – have fallen or stayed the same.
Microplastics could slow the rate at which plants remove carbon further. And then there are the effects of climate change, like drought, fires and floods, which will intensify as long as we continue burning fossil fuels.
After monitoring forests and shrublands in Australia for 20 years, Moore and a team of six colleagues concluded that these ecosystems are at risk of losing their ability to bounce back, and continue absorbing carbon, after successive climate disasters.
We may have done plenty to reduce global photosynthesis, but a team of scientists at the University of Oxford and the Fraunhofer Society in Germany is trying to turn things around. How? By hacking plants to help them get more out of the process.
“You would be forgiven for thinking nature has perfected the art of turning sunlight into sugar,” say Jonathan Menary, Sebastian Fuller and Stefan Schillberg.
“But that isn’t exactly true. If you struggle with life goals, it might reassure you to know even plants haven’t yet reached their full potential.”
The team say that plants tend to convert less than 5% of sunlight into new tissue – often as little as 1%. That’s because of a mistake plants regularly make, in which an enzyme involved in photosynthesis (rubisco) latches on to oxygen instead of CO₂.
Cyanobacteria are Earth’s most ancient photosynthesisers. Menary, Fuller and Schillberg say these microscopic organisms could possess useful genes for better sunlight management that might benefit crops like rice and potato plants. Another technique involves helping plants recover more quickly from high light exposure.
More efficient photosynthesis, with the help of gene editing and other tools, is not “a silver bullet”, the team stress. Certainly not while fossil fuels continue to drown our green planet in carbon it cannot metabolise.
However, this work is likely to prove useful as farmers seek to grow more in an increasingly volatile environment, while sparing enough land for nature.
“This research is about making sure we can grow enough food to feed ourselves,” the team say.
Source: The Conversation – UK – By Natalya Chernyshova, Senior Lecturer in Modern European History, Queen Mary University of London
Germany’s ex-chancellor, Angela Merkel, and France’s former president, François Hollande, were key to brokering the Minsk agreements. Sodel Vladyslav / Shutterstock
The Russian president, Vladimir Putin, has agreed to pause attacks on Ukrainian energy infrastructure for 30 days following a phone call with his American counterpart, Donald Trump. On social media, Trump said the call was “very good and productive” and came “with an understanding that we will be working quickly to have a complete ceasefire”.
This optimism is misplaced. The White House did not mention that Putin issued additional conditions for a ceasefire. The Kremlin demands that Ukraine be effectively disarmed, leaving it defenceless against a Russian takeover. Such terms would be unacceptable to Ukraine and its European partners.
At this juncture, Trump and his negotiators would do well to ponder why previous attempts to restrain Russia and secure a lasting peace for Ukraine did not succeed.
This war did not start when shells began to rain on Kyiv in February 2022. Russia had already been waging an undeclared war on its neighbour for nearly eight years in eastern Ukraine’s Donbas, where pro-Russian proxy forces have been stoking up trouble in the border regions of Luhansk and Donetsk.
Attempts to end the fighting there were made in September 2014 and February 2015, when Russia and Ukraine signed ceasefire agreements during negotiations in Minsk, Belarus.
Both sets of Minsk agreements proved to be non-starters. The fighting in the region rumbled on until it culminated in Moscow’s full-scale invasion of Ukraine in 2022. The accords stored up problems for the future.
Russia-backed separatists have controlled the south-eastern Ukrainian regions of Donetsk and Luhansk since 2015. Viacheslav Lopatin / Shutterstock
Minsk-1 and Minsk-2
The first Minsk protocols were signed in 2014 by Russia, Ukraine, separatists from Donbas and representatives from the Organization for Security and Co-operation in Europe (OSCE). The agreement provided for an immediate ceasefire monitored by the OSCE, the withdrawal of “foreign mercenaries” from Ukraine and the establishment of a demilitarised buffer zone.
But Moscow also insisted that Kyiv grant temporary “special status” to the Donetsk and Luhansk People’s Republics, the two separatist regions in Donbas. Instead of helping Ukraine regain control over its eastern territories, the agreement allowed the Russia-backed rebels to hold local elections and legalised them as a party to the conflict.
The ceasefire collapsed within days of signing. The provisions that sought to demarcate the lines of the conflict and give Ukraine back control over its eastern border were not observed by the rebels, and fighting intensified during the winter.
With the death toll rising, the leaders of France and Germany rushed to broker a fresh round of negotiations in February 2015. The resulting accords, which were known as Minsk-2, also failed to bring peace.
Russia and its proxy militants in Donbas immediately and repeatedly violated its terms. Astonishingly, Minsk-2 did not even mention Russia, despite Russia having signed the protocols. Moscow continued to deny its involvement in eastern Ukraine, while stepping up armed assistance to the rebels.
Kyiv was saddled with peace terms that were impossible to implement unless Ukraine was prepared to throw away its sovereignty. Minsk-2 stipulated that the “special status” of the eastern separatist regions was to become permanent, and that the Ukrainian constitution was to be amended to allow for “decentralisation” of power from Kyiv to the rebel regions.
These regions were to be granted autonomy in financial matters, responsibility for their stretch of the border with Russia, and the right to conclude foreign agreements and hold referenda. To undercut Ukrainian independence further, a neutrality clause inserted into its constitution would effectively bar the country’s entry into Nato.
Understandably, no one in Kyiv rushed to implement these self-destructive terms. In an interview with German magazine Der Spiegel in 2023, Volodymyr Zelensky said that when he became Ukraine’s president in 2019 and examined Minsk-2, he “did not recognise any desire in the agreements to allow Ukraine its independence”.
Russia-backed separatists in Sloviansk, a city in Donetsk Oblast, in 2014. Fotokon / Shutterstock
Zelensky’s comment points to the fundamental flaw of the Minsk-2 agreement. Its western brokers failed to recognise that Russian war aims were irreconcilable with Ukrainian sovereignty. Moscow’s objective from the start was to use Donbas to destabilise the government in Kyiv and gain control over Ukraine.
Western peacemakers searched for a compromise, but the Kremlin used Minsk-2 to advance its goals. As Duncan Allan of the Chatham House research institute noted in 2020: “Russia sees the Minsk agreements as tools with which to break Ukraine’s sovereignty.” The war in Donbas raged on and, by 2020, had claimed 14,000 lives, with 1.5 million people becoming refugees.
Germany’s ex-chancellor, Angela Merkel, a key broker, subsequently defended the Minsk agreements. She said they bought Kyiv time to arm itself against Russia. It was a costly purchase. Minsk-2 froze the conflict in one locality rather than ended it. And it encouraged Russia, paving the way for a full-scale invasion.
Emphasising Ukrainian sovereignty
The existential differences between Ukraine and Russia that plagued the Minsk agreements remain today. Ukraine has demonstrated its resolve to defend its sovereignty, while Russia’s invasion in 2022 testifies to its determination to squash Ukrainian resolve. The timing of the attack so close to the seventh anniversary of Minsk-2 adds grim emphasis to that point.
This clash of objectives must be addressed head-on in any peace negotiations. The only way to secure lasting peace in Europe is to avoid rewarding the aggressor and punishing its victim.
The Kremlin has already openly declared that it sees Trump-led brokerage as the west’s acknowledgement of Russian strategic superiority. It needs to be disabused of this notion. As argued by Nataliya Bugayova, a fellow at the Institute for the Study of War, the war is not lost yet. Russia is far from invulnerable, and it can be made to accept defeat.
But for any agreement to be effective, there can be no ambiguity or middle ground on the subject of Ukrainian sovereignty. It must be protected and backed by security guarantees.
So far, the Trump administration has shown little understanding of this. But ten years down the line from Minsk-2, Europeans have finally grasped it.
Finland’s president, Alexander Stubb, told reporters on March 19 that Ukraine must “absolutely” not lose sovereignty and territory. And, on the day Trump and Putin had their discussion, Germany’s parliament voted for a massive boost in defence spending – another indicator that Europeans are no longer taking Putin on trust.
Natalya Chernyshova received funding from the British Academy during 2020-2022.
Even before the recent protest by a group of well-known musicians at the UK government’s plans to allow AI companies to use copyright-protected work for training, disquiet around artists’ rights was already growing.
In early February, an open letter from artists around the world called on Christie’s auction house to cancel a sale of art created with the assistance of generative AI (GenAI). This is a form of artificial intelligence that creates content – including text, images, or music – based on the patterns learned from colossal data sets.
Without giving specific examples, the letter suggested that many of the works included in the sale, which was entitled “Augmented Intelligence”, were “known to be trained on copyrighted work without a licence”, and argued that such sales further “incentivises AI companies’ mass theft of human artists’ work”.
Consider Dall-E, Midjourney and Stable Diffusion, all of which use text prompts to generate images and are trained on data sets harvested from online sources. The letter raised significant issues about the nature of artistic creativity and how the legal concepts of “fair use” and originality apply in such cases.
These are complex debates, encompassing perennial misgivings about machine automation, intellectual property (IP), and the cherished ideal that ingenuity and originality remain the sole preserve of humanity.
How to think from within GenAI
The impact of AI on the creative industries has become a major issue in the UK and elsewhere, so much so that we are faced with an existential question: how do we understand the evolving impact of AI on human creativity today?
The scope of this enquiry reveals a simple fact: we need to develop more accessible and inclusive ways to think from within AI image processing models. This is exactly what my latest research, produced in collaboration with the acclaimed artist and photographer Trevor Paglen, proposes.
How, this research asks, do we better understand the mechanisms behind the collation and labelling of the data sets that are used to train AI? And how, in turn, can we create new ways of understanding the extent to which AI image-production models inform our experience of the world?
It is, I argue, through the development of interdisciplinary research methods that draw upon the arts and humanities that we can critically engage with these concerns.
Although the open letter addressed to Christie’s alluded to these topics, it did not, perhaps unsurprisingly, observe the degree to which some of the more prominent artists in the Augmented Intelligence sale had actively engaged in providing visual methods and insights into how GenAI functions.
It is notable that Holly Herndon and Mat Dryhurst’s work xhairymutantx scrutinises how the data sets used in AI models of image production both define and transform images. For example, if you type “Holly Herndon” into Midjourney, it will produce images based on data sets derived from Herndon’s online presence.
To draw attention to, and simultaneously disrupt, this process, the artists generated their own data sets of images and labelled them “Holly Herndon”. The images in these data sets had been previously manipulated to emphasise certain qualities associated with Herndon (her red hair, for example). Once fed back into the AI image processing model, the ensuing images of “Holly Herndon” became ever more outlandish and exaggerated.
This clearly shows that AI image processing is a highly inconsistent and selective procedure that can be manipulated with ease.
In his work Machine Hallucinations – ISS Dreams, a reflection on aerial photography, artist and data visualisation pioneer Refik Anadol used a data set of 1.2 million images collated by the International Space Station (ISS), alongside other satellite images of Earth, to produce an AI-generated composition.
Employing generative adversarial networks (GANs) – an AI model that trains neural networks to recognise, classify and, crucially, generate new images – Anadol effectively produced a unique landscape that changes over time and never seems to repeat itself.
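For readers wondering what a GAN actually does under the hood, the sketch below shows the adversarial training loop in miniature: a generator network learning to mimic a simple one-dimensional data distribution while a discriminator learns to tell real samples from generated ones. It is a toy illustration written in Python with the PyTorch library, not Anadol’s actual pipeline, and the network sizes and data are invented for the example.

```python
# Minimal GAN sketch (toy example, not Anadol's system): a generator
# and a discriminator are trained against each other on 1-D data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0  # "real" data drawn from N(2, 0.5)
    fake = G(torch.randn(64, 8))           # generated samples from random noise

    # Discriminator update: label real samples 1 and generated samples 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call the fakes real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Image-generating GANs replace these tiny networks with deep convolutional ones trained on millions of images, but the adversarial logic – and its tendency to produce plausible rather than faithful outputs – is the same.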
In both these examples, artists are not simply engaging in either “mass theft” or using AI models that have been trained on large data sets to mechanically produce images. They are explicitly drawing attention to how the data sets used to train AI can be both strategically engineered and actively disrupted.
In our recent book (to which I contributed as editor and author), Trevor Paglen, whose work was not in the Christie’s sale, reveals how data sets regularly produce disquieting, hallucinatory allegories of our world.
Given that GANs are trained on specific data sets and do not experience the world as such, they often produce hallucinatory and uncanny versions of it. Although often considered to be a fault or a glitch in the system, the event of hallucination, as Paglen demonstrates, is nevertheless central to GenAI.
In images such as Rainbow, which was produced using a data set created and labelled by Paglen, we see a ghostly image of our world that discloses the inner, latent mechanics of image production in GANs.
Paglen’s practice, alongside that of Dryhurst, Herndon and Anadol, draws a clear distinction between those artists who casually use AI to generate yet more images and those who critically investigate the operative logic of AI. The latter approach is precisely what is needed when it comes to thinking through GenAI and rendering it more accountable as a technology that has evolved to define significant aspects of our lives.
If we allow that the internal workings of AI are opaque to users and programmers alike, it is all the more crucial that we explore how art practices – and the humanities more broadly – can encourage us to think from within these unaccountable systems. In doing so we could significantly improve levels of understanding and engagement with a technology that is defining the future and our relationship to it.
Anthony Downey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
If you are trying to stop smoking, you may have heard of nicotine patches or gum to help reduce cravings. But how about nicotine pouches? These small, tobacco-free sachets contain a powder made up of nicotine, flavourings and other additives, and are placed between the upper lip and gum to release a nicotine buzz without the damage to lungs.
Nicotine pouches were first introduced to the UK market in 2019. Common brands in the UK include ZYN, Velo and Nordic Spirit. Nicotine pouches are similar to snus – loose tobacco in a pouch that is used in the same way as nicotine pouches. Although snus has been used for many years in Scandinavia, it was banned in the UK in 1992. Today’s generation of nicotine pouches are marketed as a way to get the benefits of nicotine without the harmful effects of cigarettes or vapes.
So, are they a helpful tool for those trying to kick the habit?
Nicotine replacement therapy
Nicotine replacement therapy (NRT) is available to buy over-the-counter in the UK. Common brands include Nicorette and Niquitin. NRT comes in different forms such as patches, lozenges and chewing gum. Nicotine pouches haven’t been approved for use as NRT – so why are they becoming a popular alternative to smoking and vaping?
Pouches are heavily marketed on social media and, unlike NRTs, they’re readily available from supermarkets and shops from as little as £5 per box. Social media influencers are sponsored to promote nicotine pouches as “clean”, discreet and convenient. They come in a wide range of flavours, from cinnamon to citrus, which attracts younger consumers.
Recent research found that approximately 1% of adults and 1.2% of young people aged 11-18 reported currently using nicotine pouches. However, over 5% of adults and more than 3% of youths said they had used these pouches at some point. Although these are relatively low figures, data shows nicotine pouches are becoming increasingly popular in the UK and US.
Rather than being regulated as medicines, nicotine pouches are governed by the General Product Safety Regulations, which means they are not regulated as stringently as NRT. Companies producing NRTs must apply for a marketing licence because medicinal products have to undergo extensive testing to show they are safe and effective. This is not the case for nicotine pouches.
‘Healthy’ nicotine?
Nicotine acts on receptors in the brain, releasing chemical messengers including the “happy hormone” dopamine. These chemical messengers are responsible for the pleasurable feelings and addictive behaviour that people often experience when using tobacco or nicotine products. The faster a drug is absorbed and activates brain receptors, the higher the addiction potential.
Research shows that nicotine is released more slowly from pouches compared to cigarettes, so it may be less addictive than cigarettes. However, pouches can also vary in the amount of nicotine they contain – evidence shows some have very high levels, higher than cigarettes and NRT.
Pouches can be marketed as a “clean” form of nicotine consumption – but, although they are smoke-free, they can contain other chemical ingredients such as pH adjusters like sodium carbonate, which allow nicotine to be absorbed in the mouth more easily. Pouches do not contain tobacco, which contains many chemicals and cancer-causing agents. However, nicotine on its own can still be harmful.
Common side effects of nicotine pouch use include nausea, vomiting, headaches and heart palpitations. Nicotine causes the body to release chemicals such as adrenaline and noradrenaline. Studies show increased levels of these can raise heart rate, blood pressure and the heart’s need for oxygen.
Animal studies suggest that nicotine use during teenage years can cause long-term changes in the brain and behaviour as well as an increased likelihood of using other drugs, lower attention levels and mood problems.
Young people have more nicotine receptors in the areas of the brain related to reward. This makes nicotine’s effects stronger in teenagers than in adults.
Currently there is not enough evidence to confirm nicotine pouches are harmful to oral health but dentists are concerned about their potential effects. Last year, a review found that oral side effects include dry mouth, sore mouth, blisters on the gums and sometimes changes in the gum area – such as receding gumline – where the pouches were placed. This is similar to side effects of oral NRT. Unlike NRT, which is normally used for a three-month course, pouches may be used for longer – potentially raising the risk of side effects.
Belgium and the Netherlands have banned nicotine pouches because of the potential risks. In the UK, the new Tobacco and Vapes bill will allow the government to regulate the use of nicotine pouches so that they can only be sold to people aged 18 and older. Advertising will be banned and the content and branding regulated.
This could be a welcome move for those concerned that nicotine pouch brands are targeting young people who’ve never smoked. But, for current smokers looking for a product to help them quit, it might be wise to opt for the regulated NRTs – even if the flavours aren’t as appealing.
Dipa Kamdar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Siobhan Mclernon, Senior Lecturer, Adult Nursing and co-lead, Ageing, Acute and Long Term Conditions. Member of Health and Well Being Research Center, London South Bank University
As a nurse working in neurocritical care, I witnessed the sudden and devastating effects of stroke on survivors and their carers.
Following my nursing career, I became a researcher specialising in stroke. Public knowledge of stroke risk factors is poor, which makes stroke prevention a public health priority.
Stroke is a leading cause of death and disability in England – yet it is largely preventable. It’s often considered an older person’s illness but, although stroke risk does increase with age, it can happen at any time of life. In fact, stroke incidence is increasing among adults below the age of 55 years.
Stroke risk factors that tend to be more common among older people – such as high blood pressure (hypertension), high cholesterol, obesity, diabetes, smoking, physical inactivity and poor diet – are increasingly found in younger people. Other lifestyle risks include heavy alcohol consumption or binge drinking and recreational drugs such as amphetamines, cocaine and heroin.
Some risk factors are not modifiable such as age, sex, ethnicity, family history of stroke, genetics and certain inherited conditions. Women, for example, are particularly susceptible to strokes – and women of all ages are more likely than men to die from a stroke.
Stroke risks unique to women include pregnancy and some contraceptive pills (especially for smokers), as well as endometriosis, premature ovarian failure (before 40 years of age), early-onset menopause (before 45 years of age) and oestrogen for transgender women.
Some risk factors are social rather than biological, however. Studies have found that people with a lower income and education level are at a higher risk of having a stroke. This is due to a combination of factors. Unhealthy lifestyle habits, such as smoking, heavier drinking and lower physical activity levels are more common in people with lower incomes.
However, research also shows that people with lower socioeconomic status are less likely to receive good quality healthcare than people with higher incomes.
But, regardless of biological or social risk factors, there are things you can do – right now – to reduce your risk of having a stroke.
Essential eight
1. Stop smoking. Smokers are more than twice as likely to have a stroke as non-smokers. Smoking damages blood vessel walls, increases blood pressure and heart rate, and reduces oxygen levels. It also makes blood sticky, further increasing the risk of blood clots that can block blood vessels and cause a stroke.
2. Keep blood pressure in check. High blood pressure damages the walls of blood vessels, making them weaker and more prone to rupture or blockage. It can also cause blood clots to form, which can then travel to the brain and block blood flow, leading to a stroke. If you’re over 18, get your blood pressure checked regularly so that, if you do show signs of developing high blood pressure, you can nip it in the bud and make appropriate lifestyle changes to reduce your stroke risk.
3. Keep an eye on your cholesterol. According to the UK Stroke Association, your risk of a stroke is nearly three and a half times higher if you have both high cholesterol and high blood pressure. To lower cholesterol, aim to keep saturated fat – found in fatty meats, butter, cheese and full-fat dairy – below 7% of your daily calories (a rough conversion of that guideline into grams is sketched after this list), stay active and maintain a healthy weight.
4. Watch your blood sugar. High blood glucose levels are linked to an increased risk of stroke. This is because high blood sugar damages blood vessels, which can lead to blood clots that travel to the brain. To reduce blood glucose levels, take regular exercise, eat a balanced diet rich in fibre, drink enough water, maintain a healthy weight and try to manage stress.
5. Maintain a healthy weight. Being overweight is one of the main risk factors for stroke. It is associated with almost one in five strokes and increases your stroke risk by 22%; being obese raises that risk by 64%. Carrying too much weight increases your risk of high blood pressure, heart disease, high cholesterol and type 2 diabetes, all of which contribute to higher stroke risk.
6. Follow a Mediterranean diet. One way to eat a fibre-rich balanced diet and maintain a healthy weight is to follow a Mediterranean diet. This has been shown to reduce the risk of stroke, especially when supplemented with nuts and olive oil.
7. Sleep well. Aim for seven to nine hours of sleep a night. Too little sleep can lead to high blood pressure, one of the most important modifiable risk factors for stroke. Too much sleep, however, is also associated with increased stroke risk, so stay as active as possible during the day to help you sleep well at night.
8. Stay active. The NHS recommends avoiding prolonged sedentary behaviour and aiming for at least 150 minutes of moderate-intensity activity, or 75 minutes of vigorous-intensity activity, a week – spread evenly over four to five days, or every day. Do strengthening activities on at least two days a week.
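As flagged in point three, here is a rough conversion of the saturated-fat guideline into everyday units. It is a back-of-the-envelope sketch that assumes a 2,000 kcal daily diet and the standard value of 9 kcal per gram of fat; your own calorie needs may differ.

```python
# Rough conversion: "saturated fat below 7% of daily calories" in grams.
# Assumes a 2,000 kcal/day diet and 9 kcal per gram of fat (standard values).
daily_kcal = 2000
sat_fat_kcal_cap = 0.07 * daily_kcal       # 140 kcal from saturated fat
sat_fat_grams_cap = sat_fat_kcal_cap / 9   # about 15.6 g per day
print(f"Saturated fat cap: ~{sat_fat_grams_cap:.0f} g/day")
```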
The good news is that while the effects of stroke can be devastating and life-changing, it is largely preventable. Adopting these eight simple lifestyle changes can help to reduce stroke risk and optimise both heart and brain health.
Siobhan Mclernon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A longstanding question in evolutionary biology is how sexual selection influences the evolution of entire genomes. Sexual selection is where individuals with certain traits have higher reproductive success, leading to the spread of those traits throughout a species.
A study by me and my colleagues at the Milner Centre for Evolution has uncovered a significant link between the difference in body size between males and females – known as sexual size dimorphism (SSD) – and genetic changes in mammals. These findings provide new insights into how sexual selection shapes the structure and function of the genome.
Sexual selection is a powerful evolutionary force that influences reproductive traits. It typically acts through mate choice (intersexual selection) and competition among individuals of the same sex (intrasexual selection). Over time, these constant pressures shape genome architecture, driving rapid evolution in genes associated with reproductive success.
This may affect the voice, body size, plumage or other features of a species over time. In fact, such pressures may be behind a rise in height in male humans compared with females.
Recent work highlights how sexual selection contributes to changes in the genetic blueprint (genome) and genes actively used (transcriptome).
Many sexually dimorphic traits arise through sex-specific differences in gene expression. This allows a single shared genome to produce distinct male and female types.
Males and females differing in body size is a common outcome of sexual selection. Examples include the southern elephant seal (Mirounga leonina), domestic ferret (Mustela putorius furo) and northern fur seal (Callorhinus ursinus), where males are more than 250% heavier than females. In contrast, species such as the Natal long-fingered bat (Miniopterus natalensis), humans and the common wombat (Vombatus ursinus) show lower SSD, with males weighing less than 50% more than females.
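To make those percentages concrete, the arithmetic behind them is simple: SSD can be expressed as how much heavier the average male is than the average female. The masses below are rough illustrative values, not the study’s data.

```python
# Sexual size dimorphism as the percentage by which average male mass
# exceeds average female mass. Masses are rough illustrative values.
def ssd_percent(male_mass_kg: float, female_mass_kg: float) -> float:
    return (male_mass_kg - female_mass_kg) / female_mass_kg * 100

print(ssd_percent(3000, 800))  # elephant seal-like case: 275% (high SSD)
print(ssd_percent(80, 70))     # human-like case: ~14% (low SSD)
```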
Male Sumatran orangutans (left) are much larger than female ones (right). Wikipedia, CC BY-SA
A large difference often correlates with intense male-male competition, leading to the evolution of traits that enhance reproductive success, such as large body size. However, while the impact of this difference on physical traits is well documented, its influence on genome evolution has remained largely unexplored.
Sense of smell versus brain size
We analysed groups of related genes called gene families across 124 mammalian species. Our study provides compelling evidence that SSD is associated with major shifts in the sizes of such families.
Specifically, species with high SSD have an expansion of gene families linked to sense of smell. At the same time, their gene families related to brain development tend to contract.
This suggests that in species with strong male competition, investment in traits that aid in reproductive success, such as olfactory cues for mate recognition, is prioritised over cognitive development.
Conversely, species with low SSD show an expansion of brain-related gene families. This pattern suggests that in these mammals, natural selection may favour cognitive abilities and complex social behaviours rather than traits driven by sexual competition.
Sexual conflict, where selection acts in opposing directions in males and females, plays an important role in genome evolution. This may involve males evolving brighter colours and more elaborate features, as seen in peacocks (Pavo cristatus) and guppies (Poecilia reticulata). While these traits enhance male success by attracting females, they might also increase the risk of being spotted by predators.
Many sex differences arise due to selection acting differently on shared genetic material, creating evolutionary tension. This can lead to sex-biased gene expression, allowing genes to function differently in males and females. This is the case for genes controlling bright colouration in guppies, for example.
Studies have suggested that genes under strong sexual selection tend to evolve rapidly, particularly those associated with male reproductive traits, such as body size or colour. Additionally, genomic features, such as the duplication of genes, can help the evolution of sex-specific traits, helping to alleviate conflicts between the sexes.
Our findings support these ideas by demonstrating that SSD influences gene family evolution, shaping molecular pathways critical for sexual and cognitive development.
Evolutionary give and take
Sexual selection does not act in isolation. It interacts with other evolutionary forces, such as natural selection and ecological pressures, to shape diversity. For example, larger body size in males may confer advantages in physical competition. But it can also increase metabolic demands and the risk of being caught by predators.
Similarly, large brains and complex social structures may be favoured in species where cognitive abilities play a role in reproductive success, such as humans. But this comes at the cost of slower development and greater energy expenditure.
This interplay between sexual selection and other evolutionary pressures highlights the complexity of genome evolution. Traits that provide reproductive advantages may not always align with those that enhance survival. This leads to give-and-take situations that shape species diversity over time.
By examining the genetic underpinnings of SSD, our study provides new perspectives on how these situations play out at the molecular level. Our findings ultimately refine our understanding of how sexual selection influences genome evolution among mammals.
Future research should explore in depth how these genomic changes influence behaviour and cognitive abilities in different species. These findings will open exciting new avenues for research, helping to answer fundamental questions about how evolution shapes biodiversity at the genetic level.
Benjamin Padilla-Morales does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Christina Philippou, Associate Professor in Accounting and Sport Finance, University of Portsmouth
The women’s rugby side Gloucester-Hartpury have had a pretty good season. On March 16 they won their third Premiership Women’s Rugby Championship in a row, beating Saracens 31-19 in the final.
But the sport as a whole is enjoying an impressive run too. Fellow Premiership side Harlequins broke the world attendance record for a women’s rugby club game at the Allianz Stadium (Twickenham) in December 2024, with a crowd of 18,055. And ticket sales for the Women’s Rugby World Cup in August (hosted by England) have already broken records.
There has also been a surge in commercial interest. Research I was involved in suggests that rugby is following a trend seen in other women’s sports, including football and basketball, where brands previously not associated with sport are finally joining the party. The skincare brand Clinique is now a key sponsor of Premiership Women’s Rugby (PWR), for example.
And despite issues with financial sustainability across rugby union clubs generally, some clubs are showing a clear appetite for commercial growth. Leicester Tigers’ women’s side, for example, is currently seeking a “principal partner” to sign up to a “six-figure annual commitment” of investment and sponsorship – in return for naming rights of a planned new stadium.
Broadcasting interest (and income) has increased too. PWR and TNT Sports have a multi-year deal to show live matches, while BBC Sport had live access to four key games this year, starting with Harlequins against Bristol Bears in February and ending with the PWR final. For the national teams, the 2025 Women’s Six Nations tournament will also be shown on the BBC.
Overall then, women’s rugby in England is winning more coverage, higher attendances, and greater involvement from commercial brands just in time for the World Cup. And the effects are already visible for the tournament, with “unprecedented demand” for tickets an early indicator of financial success. A number of matches already have limited availability.
That said, any large sporting event carries risks, and research shows that the aftermath (for sporting involvement) can be disappointing and the effects on the domestic game limited. A proper legacy depends on the support of national governing bodies.
Star power
So women’s rugby still faces barriers. But, without wishing to place further weight on her shoulders, it has a not-so-secret weapon: a player who has elevated the sport to new levels in a very short space of time.
Ilona Maher, 28, has 3.5 million followers on TikTok, more than any other rugby player in the world, of any gender. She represented the US rugby sevens national team at the Paris Olympics (they came third) and her appearance on the US dance competition show Dancing With the Stars (where she finished in second place) made her even more famous. Next on her list is playing for her country in this year’s World Cup.
To do so, she needed to bolster her experience in the 15-a-side game – so ended up signing for PWR side Bristol Bears.
This was a commercially shrewd deal for both sides. Maher is getting semi-professional experience, and Bristol Bears have already seen a financial boost. They doubled their attendance record (to 9,240) on Maher’s debut weekend in January 2025, having moved venue to accommodate the surge in ticket sales. The club is also selling more merchandise.
Nor is it just Bristol Bears which have benefited from the Ilona Maher effect. Interest in the league as a whole has increased, both in the UK and abroad, bringing new audiences to the sport just in time for the international competition.
Those audiences can hopefully look forward to an entertaining and exciting World Cup in England this summer. And if the current momentum behind the sport continues, a bright future for women’s rugby.
Christina Philippou does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Francesco Grillo, Academic Fellow, Department of Social and Political Sciences, Bocconi University
How much would it really cost the European Union to defend itself against aggression? In the immediate term, that question, of course, makes us think of Russia, but we can no longer exclude multiple other possibilities, including the potential need to defend territory – say, Greenland – from a former ally.
How much would it cost to defend Europe if we added in the need to defend the UK, Norway, Turkey or even Canada – and any other Nato country willing to pool resources to fill the void left by US disengagement? Is there an intelligent way to avoid painful trade-offs between this and, say, spending on healthcare or education?
It looks like EU institutions are finally “doing something” (as former Italian prime minister Mario Draghi recently asked them to do). They may even break the taboo of raising common debt in order to increase spending on joint defence procurements.
Yet, it also seems they are about to launch a plan that could change the very nature of the European Union without even tackling the question of its financial feasibility. The answer to how joint defence can be paid for certainly doesn’t come from the plan that the European Commission has unveiled on “rearming Europe”. At the very last line of that statement, a figure of €800 billion is posited, but it is not clear how the sum was calculated and quite a few critical qualifications are missing.
The debate over how much it costs to prevent a war (which is a very different notion from fighting one), has been dominated by what I would call “the fallacy of the percentage of GDP”.
In 2014 (at the time of Russia’s annexation of Crimea), the leaders of Nato countries agreed to spend at least 2% of their GDP on defence (specifying that retirement benefits to veterans should be included). Yet by 2022, the overall ratio for Nato defence spending had, in fact, shrunk from 2.58% of GDP to 2.51% (thanks to the sharp reduction in the percentage of GDP contributed by the US). And, according to the European Defence Agency, the EU is spending around €279 billion, which is 1.6% of its GDP. Most likely, the €800 billion figure that European Commission president Ursula von der Leyen was citing in her communique is simply an estimate of how much would be raised by lifting that spending to 2% of GDP in each of the next ten years.
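That reading is easy to check using only the figures quoted above (€279 billion equal to 1.6% of EU GDP). It is a sketch, not an official costing, but it lands in the right ballpark.

```python
# Back-of-the-envelope check of the €800 billion reading, using only the
# figures quoted above: EU defence spending of €279bn = 1.6% of GDP.
current_spend_bn = 279
current_share = 0.016
target_share = 0.02
years = 10

eu_gdp_bn = current_spend_bn / current_share                    # ~€17,400bn implied GDP
extra_per_year_bn = (target_share - current_share) * eu_gdp_bn  # ~€70bn extra a year
print(f"Ten-year total: ~€{extra_per_year_bn * years:,.0f}bn")  # ~€700bn, close to €800bn
```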
Politicians sometimes need to make back-of-the-envelope calculations, but I would argue that this one points to a much broader problem. Europe hasn’t yet bothered to develop a strategy for how this additional money would be spent. A proper strategy should, in fact, start from three key technical considerations, to which I would add a no less important political one.
1. Spending smart is better than spending big
Technologies (including AI) are radically changing the equation. The conflicts in Ukraine and Gaza demonstrate that cheap drones are now the key to modern warfare – not super expensive F35 strike fighters. Why spend billions designing, building and maintaining 2,500 F35s when a drone the size of a mobile phone can cross enemy lines unnoticed?
In a world in which data is a weapon, and a large-scale attack can be mounted by taking remote control of pagers, what generals call “supremacy” doesn’t necessarily belong to the biggest spender.
Israel’s military budget is one-third that of Saudi Arabia, yet it dominates the Middle East because its perpetual state of conflict forces innovation. Russia spends less than half as much as the 27 EU member states combined, but it has much more experience in hacking other countries’ infrastructure. The EU spends as much as China, but China invests more than twice as much in research and development and is the world’s largest exporter of drones as a result.
2. Spending together is better value
The European parliament estimates that merging the 27 member states’ defence budgets would free up €56 billion (which is a third of what the defence bonds proposed by the Commission would raise).
Yet the trend is to spend more alone than together. According to the European Defence Agency, the bloc has more than doubled its expenditure on new digital technologies; yet the percentage of that going into joint projects between member states fell from 11% before Ukraine’s invasion to 6.5% in 2023.
Any common defence would also have to rely on “buying European” as much as possible. The F35 fighter jet is another good example here. Denmark agreed to buy 27 of them (to the tune of around €3 billion), with a plan to station four in Greenland. The problem is that, according to Wolfgang Ischinger, the former chairman of the Munich Security Conference, the US could remotely disable them so that they cannot even take off. Again, Europe is not walking the walk. The share of equipment that European nations import from the US has increased massively in the last five years.
A new era for the union
Defence is probably the most important issue when talking about the Europe of the future. It provides a concrete opportunity to fill a technological gap out of necessity. Spending on defence in the interests of self-protection may have longer-term benefits beyond the military arena. It has often been the case that military research leads to major breakthroughs that can be applied in public services. Who knows: military innovations with drone or AI technology on today’s battlefields could lead to beneficial uses in peacetime.
The historic opportunity to transform the way we protect ourselves may even force a radical rethinking of not just the EU treaties but of the nature of the EU. The idea of the “coalition of the willing” may, indeed, push Europe towards an alliance which does not include some of its members (such as Hungary) but does include non-members like the UK, Norway and even Turkey. New arrangements will need to be pragmatically flexible.
Europeans need much more strategy; what we now largely have are rhetorical announcements with little substance. And we need much more democracy. After all, defence is one of the defining dimensions of the state. Having a common defence policy in Europe could make people feel more like European citizens. But that cannot happen without engaging citizens in an intelligent debate.
Francesco Grillo is affiliated with the think tank Vision.
Microplastics are hindering photosynthesis, the process by which plants convert energy from the sun into the fruit and vegetables we eat. This threatens massive losses in crop and seafood production over the coming decades that could mean food shortages for hundreds of millions of people.
So concludes an alarming new study. The authors combined more than 3,000 observations of the effects of microplastics on plants from 157 separate scientific reports, and then extrapolated the results using machine learning, in which computer models are trained to spot patterns in data.
Microplastic exposure, they found, reduces photosynthesis in land plants and marine and freshwater algae by 7% to 12%. The authors calculated that this could eventually reduce yields of staple crops such as rice, wheat and maize by between 4% and 14%.
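To give a sense of what “extrapolated the results using machine learning” means in practice, here is a heavily simplified sketch of that kind of pipeline. The data, features and effect sizes below are invented for illustration; the study’s actual variables and model are not reproduced here.

```python
# Simplified sketch of a meta-analysis extrapolation: pool many published
# observations, fit a model, then predict effects at unobserved exposures.
# All data, features and coefficients here are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 3000  # roughly the number of observations pooled in the study

log_dose = rng.uniform(-2, 3, n)         # hypothetical log10 exposure level
particle_um = rng.uniform(0.1, 5000, n)  # hypothetical particle size (microns)
X = np.column_stack([log_dose, particle_um])
y = 3 + 2.5 * log_dose + rng.normal(0, 2, n)  # invented % drop in photosynthesis

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[2.0, 100.0]]))  # predicted % reduction at a new exposure level
```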
How realistic is this scenario? While the new study does not fully support such dramatic conclusions, it does draw attention to the possible future risks from microplastics.
The complexities of microplastics
Plastics are useful and versatile products. But they are also difficult to recycle and, during 2025 alone, will probably account for 360 million tonnes of solid waste.
More insidious are the trillions of tiny fragments these plastic products break up into, now found everywhere from the deep sea to your brain. These microplastics are less than 5mm in size and some of them are as small as 1 micron (micrometre), meaning that 10,000 of them could easily fit inside an average plant or animal cell.
More microplastics are formed as larger plastic waste breaks down in the environment. Chayanuphol/Shutterstock
Scientists have estimated that about 11 million tonnes of these microplastics, including 51 trillion individual particles, are released into the ocean each year.
Researchers increasingly use AI models to analyse complex datasets. The results can often be useful. My colleagues and I used similar methods to analyse massive molecular datasets and determine the chemical composition of palm oil in different regions of the tropics.
In that case, it was difficult to analyse one group of compounds across a relatively small geographic region. The risks of misleading conclusions are many times greater when trying to analyse microplastics and their different effects globally, as in this new study.
Indeed, the authors of the new study sought to answer questions that are orders of magnitude more complex, involving vast quantities of microplastics in the entirety of the global biosphere. Other scientists have expressed concern about the limited data used by the current model, which could lead to overspeculation about the possible consequences for food supplies.
Despite these concerns, the new study is useful for highlighting the growing body of scientific data on the deleterious effects of microplastics, found in ecosystems from the Arctic to the Amazon. Over the past 20 years, evidence of the potential risk of microplastics has steadily accumulated.
More research is needed
The main conclusions of the new study are based on extrapolations that may not apply on a global scale. The reality is that there are many thousands of types of microplastics, which differ significantly in their chemical composition, size, environmental distribution and biological effects. The new study did not discriminate between them, making it difficult to study their effects on individual processes within human or plant health.
Larger microplastics accumulate in the soil while much smaller microplastics can be present in the air and can be directly absorbed into plant cells. In some cases, the smaller microplastics can damage the cellular bodies, called chloroplasts, involved in photosynthesis.
Previous studies have shown that exposing some algae to microplastics can reduce photosynthesis and increase stress, leading to cell damage similar to the effects of ageing in people. Other studies on crop plants such as tobacco have concluded that the effects of microplastics on photosynthesis vary with the type and dose, exposure duration and plant species. In other words, there is no single approach for comparing the effects on plants as different as a lettuce and an apple tree.
Given the potential (albeit speculative) risk to global food production, more priority should be given to rigorous scientific research of microplastics and their effects on both crops and the marine life that supports fish and seafood stocks.
The World Economic Forum has labelled microplastics as a top ten threat and recommends urgent action. In its latest analysis, it also reported that the average person could ingest between 78,000 and 211,000 of these particles each year. It is estimated that the emission of microplastic particles is likely to more than double in the next 15 years, possibly over 40 million tonnes annually.
Despite growing concern among scientists and civil society, several of the larger public bodies involved in microplastics research in the US and Europe are considering radical cuts to both environmental research funding and regulatory oversight.
While poorly understood, the threat of microplastics could rival other serious threats, including climate change and biodiversity loss.
Denis J. Murphy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The global trade landscape is shifting, and not in the way free traders had hoped. For decades, the belief that economic openness could foster peace and stability reigned supreme. Trade, it was argued, could transform authoritarian regimes into more peaceful players. But Russia’s invasion of Ukraine has shattered this way of thinking. Rather than mourning the end of a multilateralism based on states’ commitments to jointly agreed trade rules, we should see it as a necessary adjustment to a world where economic security takes precedence over market efficiency, and resilience over cost minimization.
The World Trade Organization (WTO), which has constrained protectionism since its inception in 1995, is no longer the linchpin of global trade it once was. Multilateral trade talks have stagnated, and the WTO’s dispute settlement system is in paralysis. The US, once a champion of rules-based trade, now finds strategic advantage in a world where power dynamics outweigh legal frameworks. Years of negotiations on agriculture and fisheries subsidies have yielded little progress, underscoring the difficulty of reaching consensus among increasingly divergent national interests.
Consider the Uruguay Round negotiations in the 1990s that led to the establishment of the WTO – a rare moment when 123 countries found common ground on liberalizing trade in goods, services and intellectual property. That success stemmed from a broad agenda that offered enough variety to create win-win scenarios for all. Today, narrow negotiation agendas make compromise far harder to achieve.
Free trade agreements are emerging less frequently: the average number of new trade agreements per year since 2020 is less than half the average of the previous decade. Meanwhile, protectionist measures have proliferated: there were about five times as many in 2023 as in 2015. Regardless of US President Donald Trump’s tariff frenzy, governments are erecting trade barriers and adopting policies that favour domestic industries, driven by the need to secure critical supply chains.
The trend is clear: trade liberalization is no longer the top priority for most countries. Instead, security concerns are reshaping trade policy, echoing the arguments of the 18th-century philosopher Adam Smith. In The Wealth of Nations, Smith argued that national defence is more valuable than economic wealth. (“Defence,” he wrote, “is of much more importance than opulence”). This idea feels particularly relevant today. In a world of geopolitical conflict, trade is often yielding to strategic concerns.
The United Nations, despite its mission to maintain peace, has struggled to prevent conflict. If international law cannot deter aggression, economic policy must step in.
Security-driven trade
For the EU, this translates into using its trade policy instruments, especially vis-à-vis China, on the basis of a careful dependency analysis that identifies strategic commodities and products. As the European Commission sets self-sufficiency benchmarks for green technologies following the bloc’s Net-Zero Industry Act, it errs if it sees the substitution of domestic products for imports as the right way to reduce dependencies. In most cases, reducing import concentration will require diversifying suppliers rather than European self-production.
Security-driven trade requires shifting away from fragile multilateralism toward more selective, regional alliances. These “trade clubs” would align economic interests with shared security priorities. The EU’s strengthening ties with the South American Mercosur states, a group of non-hegemonic countries reliant on open trade, exemplify this approach. Intensifying trade with targeted countries could be the best response to Trump’s tariffs, avoiding the lose-lose outcome of tit-for-tat tariff wars. The goal of autonomy from an unpredictable US offers a good framework for crafting new bilateral relationships.
Another example is the idea of a “climate club”, which policy-makers have discussed for some time. Climate clubs would consist of countries that agree on joint strategies to reduce carbon emissions while fostering energy security and protecting their economies from competitors without adequate carbon pricing.
The challenge is to distinguish between “legitimate” and “illegitimate” security claims. The latter refer to countries’ growing abuse of the national security card to justify trade policies. WTO dispute settlement panels ruled against the “self-judging” character of national security claims, hence subjecting them to legal scrutiny, but this “rule of law” approach has only heightened rejection of the WTO system on the US side. To limit abuse, the EU should seek alignment with the US on issues of common concern, such as responding to industrial overcapacity or preventing technology leaks. A joint approach could avert nationalist unilateralism.
A new focus for the WTO
Some worry this shift away from multilateralism could disadvantage poorer nations, leaving them vulnerable to the whims of powerful ones. However, regional trade alliances can empower smaller states. For example, the African Continental Free Trade Area (AfCFTA) gives African nations collective bargaining power they might lack individually. Since its inception with 22 signatories, AfCFTA has grown to include 48 countries, enhancing the continent’s influence in global trade.
Abandoning multilateralism doesn’t mean sidelining the WTO entirely. Instead, the WTO can refocus on smaller, “plurilateral” agreements among like-minded countries. This “coalition of the willing” approach has already proven effective in areas like e-commerce and investment facilitation. The WTO can remain a forum for building consensus, but its future lies in fostering flexible partnerships rather than pursuing grand, all-encompassing trade deals. In a fragmented world, these smaller agreements could yield the most meaningful progress. Nascent but promising plurilateral efforts are under way to tackle fossil fuel subsidies and environmentally sustainable plastics trade.
The golden age of global free trade may be over, but that doesn’t spell disaster. As nations grapple with security challenges, trade policy must evolve to reflect new priorities. Strategic alliances, diversified supply chains and targeted trade agreements will shape the future of global commerce. Rather than lament the decline of multilateralism, we should embrace this shift as a necessary response to a more volatile world. In doing so, we can craft a trade policy that prioritizes resilience and security, safeguarding both economic stability and national interests.
Armin Steinbach does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.
In early 2024, the federal government imposed a two-year cap on new study permits. (Shutterstock)
For decades, international students have contributed to Canada’s research enterprise, workforce development and economic growth.
Now, as Canada navigates strained relations and an escalating trade war with its largest economic partner, it’s important that policymakers stop overlooking international education, which could be a critical factor in bolstering Canada’s resilience.
Unlike volatile trade agreements and fragile supply chains, international education provides a stable, long-term economic and social advantage.
By 2022, international education’s contribution to the Canadian economy had grown to $37.3 billion. This represented just over 23 per cent of Canada’s total service exports and around five per cent of total merchandise exports. The economic contributions from international education outpaced those from other industries, such as softwood lumber and auto parts.
International students also serve as vital ambassadors — diversifying trade connections and expanding Canada’s global reach.
Despite their undeniable value, recent policy shifts risk undermining Canada’s position as a top destination for global talent. In early 2024, the federal government imposed a two-year cap on new study permits. The cap meant approximately 360,000 study permits would be approved in 2024 — a decrease of 35 per cent from the previous year.
However, institutions fell well below the imposed cap. This wasn’t due to a lack of demand but because of the rushed, poorly managed roll-out that amplified disruption beyond expectations. In fall 2024, the number of permits granted was on track to drop by 45 per cent compared to the previous year.
While a cap may have been necessary to moderate the sector’s growth, its rollout created uncertainty for institutions and students. This damaged Canada’s reputation for high-quality education. The impact to our global standing as a top destination for international students will take years to repair.
The government plans to cap student visa approvals at 427,000 by 2026. (Shutterstock)
This policy shift is especially concerning given Canada’s ongoing innovation and productivity challenges. A recent report from U15 research institutions shows Canada lags behind its peers in the Organization for Economic Co-operation and Development (OECD). It’s mainly falling behind in research and development intensity, private sector innovation and technology adoption.
In 2022, Canada’s research and development spending stood at just under two per cent of GDP. This is well below the OECD average of around three per cent.
Many small and medium-sized businesses rely on university partnerships for research and development. Cutting international graduate student numbers disrupts these collaborations — hindering innovation at a time when Canada can least afford it.
With unemployment at around six-and-a-half per cent and youth unemployment at 13.6 per cent, concerns about job competition are valid. Yet newcomers and international students face significant barriers in finding jobs in their fields.
In 2024, the unemployment rate for recent immigrants reached 11 per cent, nearly double the rate for Canadian-born workers. Despite holding advanced degrees, two-thirds of foreign-trained professionals remain underemployed. This may be due to employers undervaluing international credentials and prioritizing “Canadian experience.”
This trend extends to international student graduates who remain less likely than their Canadian peers to find jobs that match their level of education. In 2023, just over 36 per cent of international graduates with a bachelor’s degree secured roles requiring a university-level qualification, compared to just under 59 per cent of Canadian graduates. International student graduates also earn significantly lower salaries, despite having similar levels of job satisfaction.
International student graduates face barriers in finding employment. (Shutterstock)
Like many newcomers, I personally faced this Canadian experience barrier when I entered the workforce over 15 years ago as a permanent resident. Despite my education, multilingual abilities and professional skills, I submitted hundreds of applications and secured only a handful of interviews before landing my first opportunity. This frustrating, unnecessary and economically wasteful struggle remains just as prevalent today.
These barriers not only limit individual potential but also weaken Canada’s ability to harness the talent it attracts.
Addressing systemic issues
International students are more than workers — they’re entrepreneurs, innovators and future job creators.
For instance, as of 2022, nearly 180 of the U.S.’s billion-dollar companies were founded by former international students. Each of these companies created an average of 800 jobs, and together they made up nearly a quarter of all U.S. billion-dollar companies.
Canada risks losing similarly bright minds to more welcoming countries if clear pathways for them to stay, contribute and build businesses aren’t established. This would cost the country both talent and billions in economic potential.
If Canada is serious about building a stronger, more competitive economy, it must address the systemic issues that stand in the way of international student success.
This includes modernizing credential recognition so employers can fairly assess international experience and qualifications; expanding co-op programs, internships and mentorships so international students can gain relevant Canadian experience before graduation; and protecting students from misinformation and questionable recruitment practices.
Employers need to be educated about immigration pathways to reduce hiring hesitancy. The government also must create a stable and predictable immigration policy framework to give businesses confidence in hiring international graduates.
As Canada continues to face labour shortages and growing economic and political volatility, international education remains a strategic asset. It fuels research, diversifies trading partners, supports innovation and supplies the workforce Canada needs for long-term prosperity.
The future of Canada’s economy depends on its ability to attract and retain the thinkers, creators, and innovators who will define the next generation of progress. At this critical moment, Canada must decide if it will invest in the talent that fuels innovation, or close the door on opportunity.
Isaac Garcia-Sitton is affiliated with the Canadian Bureau for International Education (CBIE), the Council of Ontario Universities (COU), and the Council of International Schools (CIS)
Emergency alerts may amplify distress in people who already have anxiety. (Shutterstock)
When there’s a disaster, it’s helpful to know what’s going on — and know whether you’re truly at risk. But as essential as emergency alert systems are, they can leave many of us feeling anxious — even when the alert may be a false alarm or test.
This is because emergency alerts, whether real or tests, can activate the same neural circuits involved in real danger. This can trigger stress, confusion and anxiety.
Our nervous systems are constantly processing information from both our bodies and our environment, trying to distinguish between warnings that demand action and those that can be safely ignored.
But over time, the stress associated with being on constant alert can have lasting effects on mental health. Chronic stress can contribute to the risk of developing anxiety disorders and depression, and even physical disorders such as heart disease. This is especially true for people who live in war-torn or natural disaster-prone areas.
In people who already have anxiety, being unable to distinguish between real and perceived threats can be particularly debilitating. This can amplify their distress, making it difficult to navigate a world filled with both real and perceived threats.
Similarly, neurological conditions such as migraines, Parkinson’s disease and Alzheimer’s disease can be exacerbated by chronic stress responses. This can lead to a worsening of symptoms and lower quality of life.
The constant barrage of information we’re exposed to — from daily news alerts to “doomscrolling” on social media — highlights a broader challenge we all face: learning to navigate a world increasingly filled with real and perceived threats that can further exacerbate anxiety.
The body’s interoceptive system — the brain’s ability to sense and interpret internal physiological signals — plays a crucial role in determining which environmental signals warrant our attention.
This system helps us detect when our heart is racing from actual danger, versus when it’s simply responding to stress or uncertainty. But when interoception is disrupted, as it often is during heightened anxiety states, distinguishing between true and false alarms becomes increasingly difficult.
Nervous system support
Thankfully, there are things we can do to help better support our nervous systems in making these critical distinctions.
It’s helpful to be conscious and deliberate about what we expose ourselves to in our internal and external environment. Creating a daily schedule with set times for exercise, sleep and social connection can be effective. Practising mind-body approaches such as mindfulness, breath work, yoga and tai chi might also help to facilitate an inward focus. Sustaining this inward focus can help reset our interoceptive system.
Spending time with friends and sharing your concerns with them can also be helpful when dealing with perceived threats. This can also enhance social connection, which can buffer stress. It can be very comforting to feel connected to others who are experiencing a similar trauma. Limiting time with people who increase your anxiety is also key.
Stepping away from information streams might also help. Finding ways to temporarily turn off or physically separate from digital devices such as laptops, cellphones and smart-watches for set periods of time can effectively facilitate a break from media. This can allow our minds to settle and reset our attention on priorities that are meaningful to us.
A novel strategy that has recently been studied for reducing anxiety and resetting the interoceptive nervous system is flotation tank immersion, also known as float therapy or flotation-REST. This involves lying in a shallow bath of warm water filled with concentrated levels of Epsom salt. When combined with reduced visual and auditory stimulation, this is thought to enhance the body’s interoceptive signals.
Float therapy may be helpful for mental health. (Shutterstock)
Ultimately, understanding the brain’s role in processing internal and external threats is vital to improving our mental and physical wellbeing.
Using our interoceptive nervous system as a way of developing resilience involves learning to be proactive rather than reactive. It helps to sense when our body is registering the preliminary cues of anxiety or stress, before they mount into full-blown distress. Not reacting to these cues, and consciously and deliberately choosing alternative actions, can help to unwind the anxiety behind them. This may even help us avoid an episode of panic.
Being more in tune with our nervous system can help us better equip ourselves to face the challenges ahead — whether they’re true threats or false alarms.
Sahib Khalsa receives funding from the National Institute of Mental Health. He is an associate editor of several journals, including Biological Psychology and JMIR Mental Health. He is a board member of several nonprofit organizations, including the International Society for Contemplative Research and the Float Research Collective; these are non-compensated positions.
Indu Subramanian does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A fresh round of Israeli airstrikes on Gaza, which has killed more than 400 Palestinians, has destroyed any hope that the ceasefire negotiated in January would hold. A statement from the child rights group Defence for Children Palestine said that 174 children had been killed in the bombing, adding: “Today is one of the deadliest days for Palestinian Children in history.”
The renewed bombing follows repeated violations of the ceasefire terms by Israel and comes days after a report commissioned by the United Nations said Israel is “deliberately inflicting conditions of life calculated to bring about the physical destruction of Palestinians as a group”. The March 13 report from the UN Independent International Commission of Inquiry on the Occupied Palestinian Territory examines what it calls Israel’s “systematic use of sexual, reproductive and other forms of gender-based violence since 7 October 2023”.
The report alleges deliberate acts have been aimed against mothers and children, including the destruction of Gaza’s main fertility clinic, Basma IVF clinic, which it said amounted to “a genocidal act under the Rome Statute and Genocide Convention”. It concluded that “this was done with the intent to destroy the Palestinians in Gaza as a group, in whole or in part, and that this is the only inference that could reasonably be drawn from the acts in question”.
The International Court of Justice (ICJ) has yet to rule on a case brought by South Africa in December 2023 accusing Israel of committing genocide in Gaza. In January 2024 it issued a ruling saying that Palestinians in Gaza had “plausible rights to protection from genocide” and set out provisional measures that Israel should follow to prevent genocide. There is no evidence that Israel has heeded this advice.
Addressing the UN human rights committee in October 2024, special rapporteur Francesca Albanese said she believed it is important to “call a genocide as a genocide”. While noting the legal position according to the ICJ, we agree with her on the grounds that a post-hoc judgement of genocide does nothing to prevent it from occurring.
Francesca Albanese addresses the United Nations, October 2024.
The commission’s report is not the first time that international organisations and lawmakers have called attention to Israel’s violence against Palestinian mothers and children. In March 2024, Philippe Lazzarini, the commissioner-general of the UN agency Unrwa, wrote on X: “This is a war on children. It is a war on their childhood and their future.” The numbers are “staggering”, he said. More children had been killed in Gaza in four months than in all global conflicts in the previous four years.
This has continued throughout Israel’s assault on Gaza. Between October 7 2023 and January 15 2025, children made up at least 18,000 of the 46,707 Palestinians killed in Gaza, according to data collected by the Gaza health ministry. Both figures are likely to be underestimates, as so many bodies remain buried under the rubble.
Most children have been killed by direct military strikes. Israel has dropped an estimated 85,000 tonnes of explosives on Gaza, killing Palestinians through direct hits, building collapses, fires and inhalation of toxic substances. Doctors have also reported evidence of children being killed in drone attacks and by snipers, including by shots to the head and chest.
On March 2 Israel blocked the entry of humanitarian aid into Gaza, using starvation and dehydration as a military strategy. On March 15 a Unicef report claimed that 31% of children under two years of age in the north of the Strip were acutely malnourished. There has also been a “dramatic increase in child deaths due to acute malnutrition”.
Israel’s destruction of medical and other infrastructure in the strip has resulted in “indirect deaths” by communicable illness and noncommunicable conditions. In April 2024, a report published in science journal Frontiers found that more than 90% of children in Gaza were affected by infectious diseases. There have also been multiple infant deaths from hypothermia as displaced families attempt to survive winter conditions.
But the problem with these arguments is that they make child mortality rates in Gaza appear as a simple reflection of natural factors. They are not. They are a direct consequence of Israel’s military aggression in Gaza.
Israel has systematically used powerful explosives in densely populated areas and, through AI tracking systems such as “Where’s Daddy?”, deliberately targeted Palestinians in their family homes. Given the deep evidence base about childhood health, the logical outcome of using starvation as a method of war, actively denying aid, and destroying infrastructures that enable life is that children will die disproportionately.
Palestinian children are being killed by design. This has been explicitly articulated by the Israeli state.
But children represent their community’s dreams for their futures. Killing large numbers of children in Gaza is not simply forcible depopulation. It is an effort to destabilise communities and crush their hopes for liberation and the right of return as mandated by the UN.
Palestinian children in Gaza have been telling their stories to a global audience. The killing, injury and starvation they are testifying to has proved a powerful counternarrative to the idea that Israel is simply “defending itself”. International humanitarian law states that: “Children affected by armed conflict are entitled to special respect and protection.”
But in Gaza, children are being killed in their thousands.
Rachel Rosen receives funding from Independent Social Research Foundation. She is affiliated with BDS @ UCL.
Mai Abu Moghli is a policy member at Al- Shabaka: the Palestinian Policy Network.
“It’s one thing to say the economy is not doing well and we’ve got a fiscal challenge … but cutting the benefits of the most vulnerable in our society who can’t work, to pay for that, is not going to work. And it’s not a Labour thing to do.”
So says former Labour big beast turned centrist-dad podcaster Ed Balls about the government’s welfare reform proposals. Cue furious nods from all those who were hoping and expecting better – or at least not this – from Keir Starmer and Rachel Reeves.
Reactions like these are wholly understandable. After all, the Labour party has long viewed support for the welfare state as both a flag around which the party can rally, and a stick with which to beat the Conservatives.
But while that may have been the case in opposition, in office things have been a little more complicated.
Going all the way back to the MacDonald and Attlee governments, through the Wilson era, and into the Blair and Brown years, Labour governments have often seen fit to talk and act tough to prove to voters, the media and the markets that they have a head as well as a heart. And if that means upsetting some of their MPs, their grassroots members and their core supporters in the electorate, then so be it.
Welfare encompasses a raft of policies that are as much symbolic as they are substantive. Choosing between them has tangible implications for those directly affected. But those choices also say something – and are intended to say something – about those politicians and parties making that choice.
For Labour governments – and in particular Labour chancellors – cuts in provision, even (indeed perhaps especially) if they involve backtracking on previous commitments, have always been a means of communicating their determination to deal with the world as it supposedly is, not as some of their more radical colleagues would like it to be.
On every occasion, those decisions have provoked outrage: a full-scale split in the 1930s, the resignation of three ministers (including Harold Wilson and leftwing titan Nye Bevan) in the 50s, parliamentary rebellions and membership resignations in the 60s, more generalised despair in Labour and trade union ranks in the 70s, and yet another Commons rebellion in the 90s.
But what we need to appreciate is that the fallout is never merely accidental. Rather, it is a vital part of the drama. For the measures to have any chance of convincing sceptical markets and media outlets (as well as, perhaps, ordinary voters) their authors have to be seen to be committing symbolic violence against their party’s own cherished principles.
The proof that sacred cows really are being sacrificed is the anger (ideally impotent anger) of those who cherish them most – Labour’s left wingers. Their reaction is not merely predictable (and expect, by the way, to see Labour’s right wingers employ that term pejoratively in the coming days), it is also functional.
The cruelty is the point
Away from the Labour party itself, both those directly affected by the changes to sickness and disability benefits and those who campaign on their behalf are – rightly or wrongly – already labelling those changes as cruel. But, likewise (and to put it at its most extreme), the cruelty, to coin a phrase, is the point.
The government will naturally be hoping that, in reality, as few people as possible will be significantly hurt by what it is doing. But the impression that it is prepared to run that risk in pursuit of its wider aim is, in many ways, vital to its success.
As to what that wider aim is? Labour’s essential problem is that, for all its social democratic values, it understandably aspires to become the natural party of government in what is an overwhelmingly liberal capitalist political economy.
It has all too often sought to achieve that, not so much by creating expectations among certain key groups and then rewarding them, as it has by aiming to demonstrate a world-as-it-is governing competence. That, in the view of its leaders (if not necessarily its followers), is the master key to the prolonged success experienced by the Conservative party – a party which has traditionally enjoyed the additional advantage of being culturally attuned to the market and media environment in which governing in the UK has to be done.
So, no, Ed Balls, you’re wrong: for good or ill, this week’s announcement is very much “a Labour thing to do”.
Tim Bale received funding from the ESRC for the PhD upon which the book, “Sacred Cows and Common Sense: The Symbolic Statecraft and Political Culture of the British Labour Party” is based.
After weeks of speculation, Liz Kendall, work and pensions secretary, has unveiled her plans to reform welfare and cut the country’s ballooning benefits bill. The proposals include:
stricter eligibility requirements for Personal Independence Payments (Pip), the main disability benefit
scrapping the work capability assessment for universal credit
freezing or cutting the incapacity benefit “top-up” to universal credit for new claimants
reducing incapacity benefits for under-22s
increasing the standard rate of universal credit for claimants seeking work
introducing a “right to try”, so that people can try work without automatically losing benefits or being reassessed.
Kendall, along with her fellow Labour ministers, has tried to sell the proposals as a “moral mission”. Prime Minister Keir Starmer has repeatedly framed the cuts as a “moral duty”.
Cabinet office minister Ellie Reeves argues it is the party’s “moral obligation” to prevent “a lost generation” of young people being consigned to long-term worklessness.
I research the impact of how the media and politicians talk about welfare (and people who claim it) on public attitudes and benefit recipients themselves. In recent weeks, I’ve asked myself: what exactly is “moral” about welfare reform? Do ministers see it as morally wrong to leave working-aged people “on the scrap heap”? Or are they more concerned with demonstrating their moral duty to taxpayers – by cutting benefits for people they claim could be working?
The proposals do contain measures that back up ministers’ claims to genuinely want to help people, rather than simply cut costs. The “right to try” guarantee should allow those outside the labour market to give work a go without losing benefits if this doesn’t work out.
But if ministers are being driven by morality, I would argue they have approached the problem the wrong way round. The first priority should be not to cut the benefit bill, but to introduce proper support. This, of course, will likely push costs up in the short term. Savings will follow, but only if help translates into meaningful, dignified work.
Starmer has pledged to stop a “wasted generation” of school leavers not in education, employment or training (Neets) missing out on “the dignity of work”.
But by hammering home this message with the uncompromising pro-worker slogan “this is the Labour party”, he aligns himself with a specific moral orthodoxy. This affirms the moral superiority of his government’s defining shibboleth, “working people”, by defending hardworking taxpayers who feel it is “unsustainable, indefensible and unfair” to keep footing a “spiralling bill” for welfare.
The moral crusade to promote the virtues of honest toil is doubtless fuelled by surveys suggesting tough talk on benefits remains popular with socially conservative voters the party fears losing to Reform UK.
However, many polls are nuanced. A new Ipsos survey identifies a “benefits paradox”: 37% of Britons agree that ensuring everyone who needs health-related benefits receives them should be “prioritised, even if it means some who could work do not”. The same survey had just 23% favouring tougher eligibility requirements.
Moral mission or moral panic?
As my own research shows, when “welfare reform” agendas are couched in the language of “moral missions”, what is really happening is moral panic. We are witnessing escalating alarm at a perceived threat to the moral order that is disproportionate to the true scale of the problem.
True, the number of people inactive due to sickness or disability is higher than before the pandemic, but suggestions that overall inactivity has reached record levels are wrong. Although a higher percentage of 16- to 64-year-olds was inactive during 2024 than in Germany or Ireland, this was lower than the previous year’s rate (down from 22% to 21.5%), and fell further in early 2025, according to the Office for National Statistics.
Britain’s 2024 inactivity rate was also beneath those of 15 other European countries (including France and Spain), the US and the EU average. The true high point of UK inactivity came in 1983, when more than a quarter of working-aged adults were inactive.
Kendall has distanced herself from the language of “scroungers” I analysed in my book on welfare discourse under the 2010-15 coalition government. But connotations can be just as stigmatising as overt labels.
In endlessly employing the mantra “those who can work should work,” ministers channel timeworn tropes distinguishing between the deserving and undeserving poor.
The new proposals include a ‘right to try’ work without fear of losing benefits. SeventyFour/Shutterstock
There is a moral case for offering tailored, sensitive support to disabled people who want to work but face significant barriers – including inflexible employers and the pressure of caring for others.
But this should not come at the cost of impoverishing people unable to work – as some unlikely critics of the government’s proposals point out.
Tony Blair’s onetime Cabinet Secretary Gus O’Donnell told Radio 4 it would be “immoral” to damage people with severe disabilities “who don’t have any option but to be on benefits”. And Blairite former work and pensions secretary Lord Hutton warned that sweeping benefit cuts would “drive millions and millions of people into penury”.
The government says its reforms are a moral mission, but they are already having immoral effects. Just how moral is it to terrify people already struggling to afford basic essentials with the prospect of being driven into deeper poverty? Or to encourage young people into work that is likely to be low-paid and insecure?
If there’s one message we can take from the unseemly spectacle of leaks and briefings leading to this week’s announcement, it may be this: we’ve been watching a government on the brink of losing its moral compass.
James Morrison receives funding from the Arts and Humanities Research Council for a project entitled Voices from the Periphery: (De)Constructing and Contesting Public Narratives about Post-Industrial Marginalisation (VOICES).
When the New Scientist revealed that it had obtained a UK government minister’s ChatGPT prompts through a freedom of information (FOI) request, many in journalism and politics did a double take. Science and technology minister Peter Kyle had apparently asked the AI chatbot to draft a speech, explain complex policy and – more memorably – tell him what podcasts to appear on.
What once seemed like private musings or experimental use of AI is now firmly in the public domain – because it was done on a government device.
It’s a striking example of how FOI laws are being stretched in the age of artificial intelligence. But it also raises a bigger, more uncomfortable question: what else in our digital lives counts as a public record? If AI prompts can be released, should Google searches be next?
Britain’s Freedom of Information Act was passed in 2000 and came into force in 2005. Two distinct uses of FOI have since emerged. The first – and arguably the most successful – is FOI applied to personal records. This has given people the right to access information held about them, from housing files to social welfare records. It’s a quiet success story that has empowered citizens in their dealings with the state.
The second is what journalists use to interrogate the workings of government. Here, the results have been patchy at best. While FOI has produced scoops and scandals, it’s also been undermined by sweeping exemptions, chronic delays and a Whitehall culture that sees transparency as optional rather than essential.
Tony Blair, who introduced the Act as prime minister, famously described it as the biggest mistake of his time in government. He later argued that FOI turned politics into “a conversation conducted with the media”.
Successive governments have chafed against FOI. Few cases illustrate this better than the battle over the black spider memos – letters written by the then Prince (now King) Charles to ministers, lobbying on issues from farming to architecture. The government fought for a decade to keep them secret, citing the prince’s right to confidential advice.
When they were finally released in 2015 after a Supreme Court ruling, the result was mildly embarrassing but politically explosive. It proved that what ministers deem “private” correspondence can, and often should, be subject to public scrutiny.
The ChatGPT case feels like a modern version of that debate. If a politician drafts ideas via AI, is that a private thought or a public record? If those prompts shape policy, surely the public has a right to know.
Are Google searches next?
FOI law is clear on paper: any information held by a public body is subject to release unless exempt. Over the years, courts have ruled that the platform is irrelevant. Email, WhatsApp or handwritten notes – if the content relates to official business and is held by a public body, it’s potentially disclosable.
The precedent was set in Dublin in 2017 when the Irish prime minister’s office released WhatsApp messages to the public service broadcaster RTÉ. The UK’s Information Commissioner’s Office has also published detailed guidance confirming that official information held in non-corporate channels such as private email, WhatsApp or Signal is subject to FOI requests if it relates to public authority business.
The ongoing COVID-19 inquiry has shown how WhatsApp groups – once considered informal backchannels – became key decision-making arenas in government, with messages from Boris Johnson, Matt Hancock and senior advisers like Dominic Cummings now disclosed as official records.
In Australia, WhatsApp messages between ministers were scrutinised during the Robodebt scandal, an unlawful automated welfare debt recovery scheme that ran from 2016 to 2019, while Canada’s inquiry into the “Freedom Convoy” protests in 2022 revealed texts and private chats between senior officials as crucial evidence of how decisions were made.
The principle is simple: if government work is being done, the public has a right to see it.
AI chat logs now fall into this same grey area. If an official or minister uses ChatGPT to explore policy options or draft a speech on a government device, that log may be a record — as Peter Kyle’s prompts proved.
This opens a fascinating (and slightly unnerving) precedent. If AI prompts are FOI-able, what about Google searches? If a civil servant types “How to privatise the NHS” into Chrome on a government laptop, is that a private query or an official record?
The honest answer is: we don’t know (yet). FOI hasn’t fully caught up with the digital age. Google searches are usually ephemeral and not routinely stored. But if searches are logged or screen-captured as part of official work, then they could be requested.
Similarly, what about drafts written in AI writing assistant Grammarly or ideas brainstormed with Siri? If those tools are used on official devices, and the records exist, they could be disclosed.
Of course, there’s nothing to stop this or any future government from changing the law or tightening FOI rules to exclude material like this.
FOI, journalism and democracy
While these kinds of disclosures are fascinating, they risk distracting from a deeper problem: FOI is increasingly politicised. Refusals are now often based on political considerations rather than the letter of the law, with requests routinely delayed or rejected to avoid embarrassment. In many cases, ministers’ use of WhatsApp groups was a deliberate attempt to avoid scrutiny in the first place.
There is a growing culture of transparency avoidance across government and public services – one that extends beyond ministers. Private companies delivering public contracts are often shielded from FOI altogether. Meanwhile, some governments, including Ireland and Australia, have weakened the law itself.
AI tools are no longer experiments; they are becoming part of how policy is developed and decisions are made. Without proper oversight, they risk becoming the next blind spot in democratic accountability.
For journalists, this is a potential game changer. Systems like ChatGPT may soon be embedded in government workflows, drafting speeches, summarising reports and even brainstorming strategy. If decisions are increasingly shaped by algorithmic suggestions, the public deserves to know how and why.
But it also revives an old dilemma. Democracy depends on transparency – yet officials must have space to think, experiment and explore ideas without fear that every AI query or draft ends up on the front page. Not every search or chatbot prompt is a final policy position.
Blair may have called FOI a mistake, but in truth, it forced power to confront the reality of accountability. The real challenge now is updating FOI for the digital age.
Tom Felle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Did you know that in the UK, period products are regulated under the same consumer legislation as candles? For the 15 million people who menstruate each month, these items are used internally or next to one of the most sensitive parts of the body for extended periods of time.
Consumers should be entitled to know what is in their period products before choosing which ones to buy. Yet, because of the current lack of adequate regulation and transparency, manufacturers are not required to disclose all materials. And only basic information is available on brand websites. Campaigners are now calling for better regulation.
Meanwhile, reusable period products are promoted by aid charities as a way to tackle period poverty and reduce waste. But independent tests by organisations such as Which? have found harmful chemicals inside both single-use and reusable period products.
I work as a women’s health researcher at the University of Bristol’s Digital Footprints Lab alongside a team of data scientists. We harness digital data, such as shopping records, to study public health issues. My research looks at how things like education affect which menstrual products people choose.
In collaboration with the charity Women’s Environmental Network, I am exploring intersections between gender, health, equity and environmental justice – especially among marginalised women and communities. But social stigma prevents open discussions about menstruation and how best to improve period product regulation.
Menstrual stigma influences everything from the information and support people who menstruate receive to the types of products we use and how we dispose of them. In a study of menstrual education experiences in English schools, my colleague and I found evidence of teacher attitudes perpetuating menstrual stigma.
Lessons typically lacked content about the health or environmental consequences of period products. Our study showed that just 2.4% of 18- to 24-year-olds surveyed were taught about sustainable alternatives to single-use tampons and menstrual pads.
An environmenstrual workshop hosted by the charity Women’s Environmental Network. Women’s Environmental Network / Sarah Larby, CC BY-NC-ND
For decades, period product adverts portrayed menstrual blood as a blue liquid. The social taboos around periods, largely created and reinforced by period brands over decades of fear-based marketing, have left their mark.
For example, in response to customers’ anxieties about supposed menstrual odour, manufacturers are increasingly using potentially environmentally harmful antimicrobials, such as silver, and anti-odour additives in period products. This is despite there being no evidence that period products such as menstrual pants or pads transmit harmful bacteria that need sanitising. The silver also washes out after a couple of washes.
The role of regulation
In New York state, the Menstrual Products Right To Know Act means that a period product cannot be sold unless the labelling includes a list of materials. In Scotland, a government initiative provides free period products to anyone who needs them.
Catalonia in Spain has introduced a groundbreaking law that ensures access to safe and sustainable period products, while also working to reduce menstrual stigma and taboos through education.
A new European “eco label” is a step forward, but companies don’t have to use it. This voluntary label, which shows a product is good for the environment, doesn’t cover period underwear.
Now, campaigners at the Women’s Environmental Network are calling for the UK government to adopt a Menstrual Health, Dignity and Sustainability Act, backed by many charities, academics and environmentalists. This would enable equal access to sustainable period products, improved menstrual education, independent testing, transparent product labelling and stronger regulations.
The regulation of period products is currently being considered as part of the product regulation and metrology bill and the use of antimicrobials in period products is being included in the consumer products (control of biocides) bill introduced by Baroness Natalie Bennett. By tackling both health implications and environmental harms, period products can be produced in a safer way, for both people and planet.
Poppy Taylor’s PhD is funded by the University of Bristol and the Health Foundation.
Poppy Taylor is a member of the Women’s Environmental Network.
The latest deadline for countries to submit plans for slashing the greenhouse gas emissions fuelling climate change has passed. Only 15 countries met it – less than 8% of the 194 parties currently signed up to the Paris agreement, which obliges countries to submit new proposals for eliminating emissions every five years.
Known as nationally determined contributions, or NDCs, these plans outline how each country intends to help limit average global temperature rise to 1.5°C above pre-industrial levels, or at most 2°C. This might include cutting emissions by generating more energy from wind and solar, or adapting to a heating world by restoring wetlands as protection against more severe floods and wildfires.
Each new NDC should outline more stringent emissions cuts than the last. It should also show how each country seeks to mitigate climate change over the following ten years. This system is designed to progressively strengthen (or “ratchet up”) global efforts to combat climate change.
The February 2025 deadline for submitting NDCs was set nine months before the next UN climate change conference, Cop30 in Belém, Brazil.
Without a comprehensive set of NDCs for countries to compare themselves against, there will be less pressure on negotiators to raise national ambitions. Assessing how much money certain countries need to decarbonise and adapt to climate change, and how much is available, will also be more difficult.
While countries can (and some will) continue to submit NDCs, the poor compliance rate so far suggests a lack of urgency that bodes ill for avoiding the worst climate outcomes this century.
Who submitted?
The 15 countries that submitted NDCs on time include the United Arab Emirates, the UK, Switzerland, Ecuador and a number of small states, such as Andorra and the Marshall Islands.
Cop30 host Brazil submitted a pledge to reduce greenhouse gas emissions by 59-67% by 2035, compared to 2005 levels. This is up from its previous commitment, a 37% reduction by 2025 and 43% by 2030. Unfortunately, Brazil is not on track to meet its 2025 target and has set a more recent emissions baseline that will make any reductions more modest than they’d otherwise be.
Japan aims to reduce greenhouse gas emissions by 60% in 2035 and 73% in 2040, compared to 2013 levels. Japan’s previous target was for a 46% reduction by 2030. This demonstrates how the ratchet system is supposed to work.
The UK’s NDC, which pledges to reduce all greenhouse gas emissions by at least 81% by 2035, compared to 1990 levels, was described by independent scientists as “compatible” with limiting global heating to 1.5°C.
The US submitted a plan to reduce net greenhouse gas emissions by 61-66% below 2005 levels by 2035. However, this was before Donald Trump pulled the US out of the Paris agreement (for the second time), so the commitment of one of the world’s largest polluters is in doubt.
Who didn’t submit?
Some of the world’s largest emitters failed to submit new NDCs, including China, India and Russia.
India pledged to reduce its emissions by 35% below 2005 levels by 2030 at the signing of the Paris agreement. All of the country’s subsequent NDCs have been rated as “insufficient” by independent scientists. India’s recent national budget announcement offered scant additional funding for climate mitigation and adaptation measures.
China also made big promises in 2015 with its aim to lower its CO₂ emissions per unit of GDP by 65% by 2030, from a 2005 baseline. However, China has been responsible for over 90% of global CO₂ emissions growth since the Paris agreement was signed. China and the US also suspended formal discussions on climate change in 2022. Increased economic competition between these two nations has resulted in export control restrictions and tariffs that have made green technologies like electric vehicles more expensive, which is certain to slow the shift from fossil fuels.
Russia joined the Paris agreement in 2019. Its first NDC was labelled “critically insufficient” by scientists, and its follow-up in 2020 did not include increased targets. Russia is maximising the extraction of resources such as oil, gas and minerals and its 2035 strategy for the Arctic included plans to sink several oil wells on the continental shelf.
The European Union could have positioned itself as a leader of global climate action, in lieu of US involvement. But the EU, which submits NDCs as a bloc alongside individual country submissions, also failed to submit on time.
Global shifts
The failure of most nations to submit new emission plans suggests that the era of cooperation on climate change is over. The largest and most powerful of these nations are growing their military and diplomatic presence around the world, particularly in countries with large reserves of critical minerals for electric vehicles and other technology relevant to decarbonisation. The lack of NDCs from these nations may be less a matter of middling green ambitions, more an attempt to disguise their planned exploitation of other countries’ resources.
If countries keep failing to submit enhanced NDCs, or even withdraw from their commitments entirely, scientists warn that global heating could reach a catastrophic 4.4°C by 2100. This scenario assumes the continued, unabated use of fossil fuels, with little regard for the climate.
In a more optimistic scenario, countries could limit warming to around 1.8°C by 2100. This will require global cooperation and significant investment in green technology, and entail a transition to net zero emissions by mid-century. This is a process that must include everyone. Simply having the most powerful nations decarbonise by exploiting and hoarding resources will imperil this critical target.
The actual outcome will probably fall somewhere between these two scenarios, depending on forthcoming NDCs and how quickly and thoroughly they are implemented. All of the scenarios envisaged by climate scientists will involve warming continuing for decades.
The effects of this warming will vary, however, based on the path we choose today.
Doug Specht does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Fans of the video game franchise Assassin’s Creed have been pining for a game set in feudal Japan for decades. In theory, it looked like a match made in heaven.
The series (which started in 2007 and has sold over 200 million copies) uses historical settings, such as ancient Greece, the Italian Renaissance or the American Revolution, to tell its fictional epic story of a battle between the Order of Assassins and the Knights Templar. What better scenario, then, than the Japanese civil war (1477-1600), where samurai and ninjas (known as shinobi) were fighting each other?
Yet when the premiere trailer for Assassin’s Creed Shadows dropped on May 15 last year, it unleashed a torrent of criticism from fans around the world. By June, a Japanese-language petition had gathered over 100,000 signatures, claiming the game “insults Japanese culture and history” and “could be tied to anti-Asian racism”.
The franchise’s publisher, Ubisoft, issued a public apology and delayed the game’s release multiple times. With other Ubisoft titles under-performing, Shadows’ rescheduled release on March 20 has become a high-stakes endeavour.
So what exactly had fans so enraged? Online, amateur historians highlighted what they saw as copious historical inaccuracies in the promotional material.
However, none was deemed as damaging as the fact that one of the two playable characters in the game was based on the historical figure of Yasuke. Yasuke was a formerly enslaved black man from Mozambique who became a retainer of the Japanese warlord Oda Nobunaga (1534-1582).
While the historical existence of Yasuke stands without question, some gamers took offence at the notion that Yasuke was being portrayed as a “black samurai”. That’s because the historical sources are not clear on whether Yasuke was considered a “samurai” by his contemporaries.
The trailer for Assassin’s Creed Shadows.
Some gamers argue that focusing on Yasuke, rather than a more typical Japanese-born warrior, represents a misguided attempt at diversity, equity and inclusion, especially since the second playable character is a fictional female ninja named Naoe.
To these critics, highlighting the two characters overwrites the history of male Japanese samurai, injecting a “foreignness” they believe distorts the setting.
White samurai in popular media
Despite the uproar, Assassin’s Creed Shadows is not the first piece of media to depict a non-Japanese samurai.
In James Clavell’s 1975 novel Shōgun, English navigator John Blackthorne (based on the real-life William Adams) becomes a samurai with the rank of hatamoto under the warlord Toranaga (based on Tokugawa Ieyasu).
Historians also debate whether the real Adams was a true samurai, yet his “white samurai” image endures in adaptations like the 2024 FX series Shōgun, which garnered praise from critics across the ideological spectrum.
Another famous instance is Nathan Algren (played by Tom Cruise), who in the movie The Last Samurai (2003) joins the Satsuma Rebellion of 1877, led by the charismatic Katsumoto (played by Ken Watanabe and based on Saigō Takamori).
In the movie, Katsumoto represents the “true” samurai spirit of male honour, duty, loyalty and principle. In the end, he dies in a final showdown against modern weaponry, but Tom Cruise’s character survives and reminds the emperor that Japan needs to honour its past despite modernisation.
The movie follows the formula of films like Dances with Wolves (1990), and later James Cameron’s first Avatar movie (2009), in which a white character joins a minority population to “save” said people from their doom. This trope is also known as the “white saviour complex”.
Accuracy v authenticity
Why, then, is Yasuke’s portrayal as a black samurai so contentious when white foreigners in similar roles have been widely accepted?
Racism is one answer, but audience expectations about historical authenticity also play a key role. Its critics claim that Shadows teems with historical inaccuracies, yet other celebrated titles, such as Ghost of Tsushima (2020), are just as historically inaccurate.
Ghost of Tsushima is set during the 13th-century Mongol invasion. Yet the game developers decided to base their protagonists on the heavily idealised and romanticised samurai of 1950s Akira Kurosawa movies, which have little in common with their historical 13th-century counterparts.
However, since these samurai conform to audience expectations of Japanese warriors who carry two swords and follow the largely fictional honour code of bushido, the game feels authentic even though it is historically inaccurate. By contrast, Yasuke’s presence in Shadows challenges a deeply ingrained notion of a xenophobic or sealed-off Japan – an anachronistic concept that overlooks evidence of foreign influence in the 16th century.
While Ubisoft has taken creative liberties and introduced historical inaccuracies, this is consistent with what has been done in other Assassin’s Creed titles and historically inspired games in general. Yet while predominantly white (and even Japanese) cultures seem quick to forgive depictions of white samurai figures, the same leniency does not seem to extend to a black character.
Fynn Holm receives funding from the German Research Foundation (Deutsche Forschungsgemeinschaft).
Source: The Conversation – UK – By Suzy White, Post-Doctoral Research Assistant, Ecology and Evolutionary Biology, University of Reading
Piecing together the story of Europe’s earliest settlers is a challenge, largely because relevant human fossils are scarce. On March 12, researchers announced the discovery of a new fossil from the excavation site of Sima del Elefante, near Burgos in Spain.
Known as ATE7-1, the new fossil consists of a partial face belonging to an ancient hominin, a biological classification that includes living humans and our closest extinct relatives, such as Neanderthals and Homo erectus. Nicknamed “Rosa” after one of her discoverers, the fossil includes part of the upper jaw, cheekbone and eye socket of an adult, and dates to between 1.1 and 1.4 million years ago. As such, she represents the oldest known partial face of a hominin from western Europe.
Rosa is also a crucial piece of the puzzle explaining how and when humans first entered western Europe – and which species of hominin made those pioneering journeys.
Hominins evolved in Africa. The first species to occupy multiple continents was Homo erectus, and the first fossil evidence we have of them beyond Africa comes from Dmanisi in Georgia. These fossils are around 1.8 million years old. However, stone tools from Grăunceanu in Romania indicate that hominins had expanded even further north, and even earlier than the Dmanisi finds – 1.95 million years ago.
However, fossils from western Europe remain conspicuously absent until 1.4 million years ago. By contrast, we have more evidence of hominins moving into Asia during this time. They had reached Indonesia by 1.6 million years ago, and descendants of these populations seem to have survived there until relatively recently. Early fossils from Asia are also more numerous and more complete, while their European counterparts are limited to an isolated tooth, a fragment of jaw and a partial skull cap.
Despite being just a small part of the face, Rosa provides key insights into these elusive early European populations. The researchers compared Rosa’s facial features to Homo erectus fossils from Africa, Indonesia and Dmanisi. They also examined Rosa’s similarities to Homo antecessor, a later European species from Gran Dolina, a site close to Sima del Elefante.
The evidence of settlement at Gran Dolina has been dated to about 860,000 years ago. While Rosa shares her delicate build with Homo antecessor, overall she has more affinities with the Homo erectus fossils – although not enough to confidently place her within this group.
Rosa may therefore provide support for a hypothesis that the occupation of Europe by hominins was discontinuous, at least for the first million or so years. This means that hominins settled there, then went locally extinct and were replaced by other groups of hominins later on.
Our closest relatives were not able to survive in Europe over long periods of time until much later. But why might that be? What made Europe harder to successfully inhabit than Asia? To begin to answer such questions, we have to combine the evidence from Rosa with what we already know about early human forays beyond their ancestral home continent of Africa.
Smaller brains, longer legs
The Dmanisi hominins are notable for their relatively small brains and basic tools. This challenged the idea that advanced tools and large brains were necessary for expansion beyond Africa. The tools from Grăunceanu are also relatively basic, despite the temperate and seasonal climate their makers would have experienced.
The Dmanisi hominins also have relatively long legs, which would have allowed them to move more efficiently over long distances. Perhaps, then, efficient movement, rather than brain size or technology, was the driving factor allowing the initial expansion. But did the basic stone technology used by early Europeans prevent their long-term occupation of the continent?
It is likely that we will, in time, find even earlier fossils from western Europe. Further fossils from Sima del Elefante could reveal how variable Rosa’s group was, and enable us to either place her within an existing species, or create a new one.
But, given the sparse information we have for now, the differences between Rosa, the Dmanisi hominins, and Homo antecessor fit within a model of short-term expansions into western Europe. These expansions were probably followed by a retreat of hominin populations into so-called refugia (locations where the environment and climate were more stable), as well as extinctions of local populations. This would have been driven by changing climatic conditions. For now, which and how many species ventured west into Europe is still unknown.
Much else also remains unknown. Did early western Europeans survive long enough to give rise to later species such as Homo antecessor? And how was Homo antecessor related to later European species? The European fossil record becomes more continuous from around 600,000 years ago, first with the appearance of a hominin species called Homo heidelbergensis, and then with the appearance of early Neanderthals (Homo neanderthalensis). In fact, these two species appear to have coexisted in Europe for some time.
Later Europeans were also able to venture further north, with evidence of footprints of a mystery hominin at Happisburgh in the UK by 900,000 years ago. Nevertheless, as with Rosa’s species and Homo antecessor, the Neanderthals and Homo heidelbergensis eventually went extinct – along with all other species of humans globally, except our own.
The changing climate and northern latitudes of western Europe presented a clear challenge for earlier hominins. As Europe’s climate continues to change, will Homo sapiens be the first hominin capable of long-term survival here?
Suzy White receives funding from the Leverhulme Trust, and has previously received funding from the Arts and Humanities Research Council.