In this age of artificial intelligence, data tampering and genetic manipulation, it seems that the nature of fraud and deception in competitive sport is becoming increasingly sophisticated. So, it seems almost surprising to see cheating in sport take a relatively old-fashioned form of late: tampering with equipment.
Yet that’s precisely what unfolded last month in ski jumping, a winter sport in which athletes soar down a ramp, take flight and aim to maximise both distance and technique. Over the last few months, several ski jumpers and their management have been suspended from the sport for intentionally and illegally modifying the suits they wear.
The case first came to light during the 2025 FIS Nordic World Ski Championships held in Trondheim in March. Two Norwegian athletes, Marius Lindvik and Johann Andre Forfang, were subsequently disqualified from the men’s large hill event due to allegations of illegal ski jump suit manipulation with the intention of improving their performance.
A subsequent investigation revealed that their ski suits had been illegally altered. In response, the International Ski and Snowboard Federation (FIS) provisionally suspended the two athletes, along with three Norwegian national team officials – including the head coach and their equipment manager. Both athletes ultimately admitted the illegal alterations.
The scandal then intensified as FIS expanded its investigation, leading to the suspension of three more Norwegian ski jumpers. Several members of the team were found to have been involved in the decision to modify the suits for the championships.
This wasn’t the sport’s first brush with controversy surrounding its suits. At the 2022 Winter Olympics, several jumpers there were disqualified for wearing suits that were deemed too large, again raising concerns about fairness.
What did the cheating intend to achieve?
A successful ski jump can be divided into several phases: in-run, take-off, early flight, stable flight, landing preparation, and landing. The suit influences performance in all of these phases by directly affecting the athlete’s aerodynamics and flight characteristics. As a result, the size and shape of the suit are heavily regulated.
In the case of this scandal, the Norwegian Ski Federation general manager told a news conference that a reinforced thread or an extra seam had been put in the jumpsuits of the first two athletes that were suspended.
This additional material was inserted into the crotch area of the suits, increasing their surface area and stiffness and potentially providing extra lift during a jump’s flight phases. This extra lift would translate into longer flight time and therefore a potential increase in jumping distance. The modifications were not detectable through standard visual inspection and were only discovered when the suits were torn open for detailed examination.
Of course, cheating in sport is not a new phenomenon. However, in some cases, such controversies are not cheating per se, but merely new technologies emerging that challenge our perceptions of a sport and its values.
Some examples of this were the use of full-body swimsuits at the Sydney Olympics in 2000, or the potential use of prosthetic legs in track athletics at the Beijing Olympics in 2008.
However, sometimes cheating can occur whereby sports equipment is intentionally modified physically to provide a competitive advantage. A recent example of this is the Australian cricket ball tampering scandal in 2018 where balls were intentionally scuffed by players to change their behaviour when bowled.
Improving a piece of sports equipment to increase performance falls within the field of mechanical ergogenics – or, when done illicitly, what is colloquially known as “technodoping”.
Some consider that the physical capabilities of athletes in some sports have now plateaued to the extent that any future improvements in performance will need to rely predominantly on technological innovation. So perhaps it can be understood why the suits were targeted in this particular sport.
In April 2025, the FIS decided to lift the provisional suspensions of the five Norwegian athletes under investigation for suspected involvement in suit tampering, as it was the competitive off-season.
However, the ban for the officials involved remains in place. In the wake of the scandal, FIS has implemented stricter regulations to prevent future instances of equipment manipulation. Key measures include limiting athletes to a single, pre-approved suit for the year’s competitions, with FIS storing and inspecting all suits.
These reforms aim to uphold the integrity of ski jumping and will hopefully restore confidence in the sport itself. The 2025 scandal stands as a clear reminder that in the pursuit of victory, sports must remain vigilant – because when innovation outpaces fair play, integrity is the first casualty.
Bryce Dyer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A US teenager was recently reported to have developed the oddly named medical condition “popcorn lung” after vaping in secret for three years. Officially known as bronchiolitis obliterans, popcorn lung is a rare but serious and irreversible disease that damages the tiny airways in the lungs, leading to persistent coughing, wheezing, fatigue and breathlessness.
The term “popcorn lung” dates back to the early 2000s when several workers at a microwave popcorn factory developed lung problems after inhaling a chemical called diacetyl – the same ingredient used to give popcorn its rich, buttery taste.
Diacetyl, or 2,3-butanedione, is a flavouring agent that becomes a toxic inhalant when aerosolised. It causes inflammation and scarring in the bronchioles (the smallest branches of the lungs), making it increasingly difficult for air to move through. The result: permanent, often disabling lung damage.
While diacetyl is the most infamous cause, popcorn lung can also be triggered by inhaling other toxic chemicals, including volatile carbonyls like formaldehyde and acetaldehyde – both of which have also been detected in e-cigarette vapours.
The scariest part? There’s no cure for popcorn lung. Once the lungs are damaged, treatment is limited to managing symptoms. This can include bronchodilators, steroids, and in extreme cases, lung transplantation. For this reason, prevention – not treatment – is the best and only defence.
And yet, for young vapers, prevention isn’t so straightforward.
E-liquids may contain nicotine, but they also include a chemical cocktail designed to appeal to users. Many of these flavouring agents are approved for use in food. That doesn’t mean they’re safe to inhale.
Here’s why that matters: when chemicals are eaten, they go through the digestive system and are processed by the liver before entering the bloodstream. That journey reduces their potential harm. But when chemicals are inhaled, they bypass this filtration system entirely. They go straight into the lungs – and from there, directly into the bloodstream, reaching vital organs like the heart and brain within seconds.
That’s what made the original popcorn factory cases so tragic. Eating butter-flavoured popcorn? Totally fine. Breathing in the buttery chemical? Devastating.
Vaping’s chemical complexity
With vaping, the situation is even murkier. Experts estimate there are over 180 different flavouring agents used in e-cigarette products today. When heated, many of these chemicals break down into new compounds – some of which have never been tested for inhalation safety. That’s a major concern.
Diacetyl, though removed from some vape products, is still found in others. And its substitutes – acetoin and 2,3-pentanedione – may be just as harmful. Even if diacetyl isn’t the sole culprit, cumulative exposure to multiple chemicals and their byproducts could increase the risk of popcorn lung and other respiratory conditions.
This was tragically echoed in the story of the American teen who developed the disease. Her case is reminiscent of the 2019 Evali crisis (e-cigarette or vaping product use-associated lung injury), which saw 68 deaths and over 2,800 hospitalisations in the US. That outbreak was eventually linked to vitamin E acetate – a thickening agent in some cannabis vape products. When heated, it produces a highly toxic gas called ketene.
More recent studies are raising alarm bells about vaping’s impact on young people’s respiratory health. A multi-national study found that adolescents who vape report significantly more respiratory symptoms, even when adjusting for smoking status. Certain flavour types, nicotine salts, and frequency of use were all linked to these symptoms.
So, what does this all mean?
It’s clear that history is repeating itself. Just as workplace safety rules were overhauled to protect popcorn factory workers, we now need similar regulatory urgency for the vaping industry – especially when it comes to protecting the next generation.
Learning from the past, protecting the future
Popcorn and vaping might seem worlds apart, but they’re connected by a common thread: exposure to inhaled chemicals that were never meant for the lungs. The danger lies not in what these chemicals are when eaten, but in what they become when heated and inhaled.
If we apply the lessons from industrial safety to today’s vaping habits – particularly among young people – we could avoid repeating the same mistakes. Regulations, clear labelling, stricter ingredient testing, and educational campaigns can help minimise the risks.
Until then, stories like that of the American teen serve as powerful reminders that vaping, despite its fruity flavours and sleek designs, is not without consequence. Sometimes, what seems harmless can leave damage that lasts a lifetime.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
If you’ve ever wondered how farming spread far and wide, our research on past human societies offers one explanation: contact between different groups often drives change.
In a recent paper, together with our colleagues Enrico R. Crema, Stephen Shennan and Oreto García-Puchol among others, we used a mathematical model to analyse what happens when communities with different cultures interact.
We adapted predator-prey equations of the kind that usually describe how animal populations compete. Our results, published in Proceedings of the National Academy of Sciences, showed that when a group of foragers and a group of farmers share the same space, their interaction can determine the speed at which agriculture is adopted.
In many parts of the world, people lived by hunting, fishing and gathering until groups of farmers arrived. The date varies by region: farming arrived around 1000BC in Japan, for instance, but around 5600BC in Iberia.
Archaeologists have long debated whether farming spread because local foragers took it up themselves or because farmers from elsewhere moved in and outnumbered or replaced them.
Our model builds on the view that in some cases locals might have adopted farming from newcomers either through exchange or intermarriage but in other cases they might have been displaced or killed by the incoming farmers.
We tested simulated data against real data from Eastern Iberia, Denmark and the island of Kyushu (Japan) to see which explanations fit best. Considering a period of 1,000 years, we combined equations for population growth, mortality resulting from competition between the groups, migration and an assimilation parameter, which represents how many foragers became farmers in each time step.
This allowed us to assess the role of competition and collaboration between groups during the transition to farming.
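The study’s exact equations aren’t reproduced here, but the ingredients described above – logistic population growth, competition mortality, migrant inflow and an assimilation parameter – can be sketched in a few lines of Python. All parameter values below are invented for illustration; they are not the fitted values from the paper.

```python
import numpy as np

def simulate_transition(f0=1.0, a0=0.05, r_f=0.004, r_a=0.015,
                        c=0.02, m=0.0005, gamma=0.01, k=1.0, steps=1000):
    """Illustrative discrete-time sketch of forager (f) / farmer (a) dynamics.

    Each step (one 'year') combines:
      - logistic growth for both groups toward a shared carrying capacity k
      - extra forager mortality c*f*a from competition with farmers
      - a constant farmer in-migration m
      - assimilation gamma*f*a: foragers who switch to farming
    """
    f, a = f0, a0
    f_hist, a_hist = [f], [a]
    for _ in range(steps):
        assimilated = gamma * f * a          # foragers becoming farmers
        df = r_f * f * (1 - (f + a) / k) - c * f * a - assimilated
        da = r_a * a * (1 - (f + a) / k) + m + assimilated
        f = max(f + df, 0.0)                 # populations can't go negative
        a = max(a + da, 0.0)
        f_hist.append(f)
        a_hist.append(a)
    return np.array(f_hist), np.array(a_hist)

f_hist, a_hist = simulate_transition()
# First step at which the farming population exceeds the foraging one.
crossover = int(np.argmax(a_hist > f_hist))
print(f"farmers overtake foragers at step {crossover}")
```

Varying `gamma` (assimilation) versus `c` (competition) in a sketch like this is one way to see how the same overall transition can be driven by integration in one region and displacement in another.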
To check whether this theory makes sense in real life, we looked at three regions where farming was introduced to local foragers.
1. Eastern Iberia (Spain)
Agriculture seems to have arrived around 5600-5500BC in this area and took hold relatively quickly, within about 300-400 years. Small groups of farmers probably arrived by sea, which meant weaker ties to their original communities.
As a result, they had only two options: perish or expand, since they could not rely all that much on the support of their original groups. Their attempt to expand farming may have failed if they didn’t integrate with or eliminate locals.
This opens the door to potential “failed attempts” that left no trace in the archaeological record. Failed attempts at adopting farming are documented in the archaeological record in other parts of the world.
2. Denmark
Further north, the process was slower, taking up to 600-800 years. Farmers and foragers appear to have lived close to one another, with a stable “frontier” between the two groups persisting for centuries before a rapid turnover.
3. Kyushu (Japan)
Wet rice farming was introduced by multiple waves of migrants from the Korean peninsula around 1,000BC. We found that, although the farming population grew at a modest rate, mixing with locals was limited. Foragers did, however, decline faster and grow slower than in the other two areas.
Our findings show how human interaction can drive the adoption of farming. Our approach considers that small-scale human relationships can have big consequences.
Imagine a small community of farmers setting up near a river that local hunter-gatherers frequently visit. Soon they start trading, and a few foragers learn how to cultivate plants. Over time, more people see the benefits of a stable crop supply and switch from hunting to farming.
Likewise, picture groups of farmers clearing woods to create spaces for husbandry and agriculture. In doing so, they can (even inadvertently) ruin hunting spots during the process, forcing the hunter-gatherers to move elsewhere.
These scenarios might seem obvious, but considering them pushes us to look for more nuanced explanations beyond environmental drivers. While such drivers can play a role, our findings suggest that the demographic makeup – how many farmers there are compared with foragers, and how likely foragers are to jump ship – can be crucial in the spread of farming.
The same dynamics might explain other moments in human history where two groups interacted. For instance, sometimes early humans migrating into Neanderthal territory mixed with the local populations.
On the other hand, the spread of horse-riding groups over Eurasia from 3000BC provoked a major demographic turnover. People adapt to their ever-changing contexts, which causes a snowball effect.
Perhaps the biggest takeaway is that human connectivity is key to cultural and technological change. Our approach isn’t meant to exclude other explanations, like climate fluctuations. But it does remind us to think about how simple social exchanges – marriages, friendships or alliances, as well as conflicts – can shape communities.
Today we think nothing of adopting a new app or gadget once enough people around us use it, even as we often stick to our good ol’ way of doing things despite being aware of better alternatives.
Ancient groups might have shown similar patterns on a massive scale during the spread of farming. Seeing these parallels helps us understand how humans behave in groups, whether in a prehistoric village, or a modern metropolis.
Alfredo Cortell receives funding from the European Commission: MSCA-IF ArchBiMod project H-2020-MSCA-IF-2020 actions (Grant No. 101020631) and The Humboldt Foundation (Grant ID: 1235670). This work has received funding from the following projects: ERC-StG project ENCOUNTER (Grant No. 801953); Synergy Grant project COREX: From Correlations to Explanations: towards a new European Prehistory (Grant Agreement No. 95138). The projects PID2021-127731NB-C21 EVOLMED “Evolutionary cultural patterns in the contexts of the neolithization process in the Western Mediterranean,” MCIN/AI/10.13039/ 501100011033 ERDF A way of making Europe are funded by the Spanish Government, and Prometeo/2021/007 NeoNetS “A Social Network Approach to Understanding the Evolutionary Dynamics of Neolithic Societies (C. 7600–4000 cal. BP)” is funded by the Generalitat Valenciana. Open access funding has been provided by the Max Planck Society.
Javier Rivas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
In our study, the majority of participants had tried changing their diet to improve endometriosis pain. Perfect Wave/Shutterstock
Endometriosis affects nearly 200 million people worldwide. This chronic condition is characterised by tissue resembling the lining of the womb growing outside of the uterus.
This common condition has devastating impacts on patients’ wellbeing. It causes chronic pain (particularly during their periods), infertility and symptoms similar to irritable bowel syndrome, including bloating, constipation, diarrhoea and pain during bowel movements.
While there are ways of managing endometriosis, these treatments can be invasive and often don’t work for everyone. This is why many patients seek out their own ways of managing their symptoms.
A frequent question we get from patients is, “Can you recommend a diet that will help me manage my pain and gut symptoms?” While ample advice exists online, there’s little information from clinical studies to adequately answer whether or not diet can have an effect on endometriosis symptoms.
So, we conducted an international online survey, inviting people with endometriosis to share their experiences of how diet has affected their endometriosis pain symptoms.
Diet and pain
Before publishing the survey online, we collaborated with a local Scottish endometriosis patient support group to come up with appropriate questions.
The final survey included multiple-choice and free-text questions about the participant’s demographics, their pain, their use of diet in managing symptoms and their sources of dietary advice. It was promoted online through social media and patient support groups. The survey received 2,599 responses from 51 countries. The age of participants ranged from 16 to 71.
Most respondents reported experiencing pelvic pain (97%) and frequent abdominal bloating (91%). This highlighted how common these symptoms are in people with endometriosis.
Participants were also asked to rate the average level of their abdominal and pelvic pain over the past month, on a scale from zero to ten. The responses highlighted a wide range of pain experiences, though the majority of respondents either rated their average pain a four (can mostly be ignored but with difficulty) or a seven (makes it difficult to concentrate, interferes with sleep and takes effort to function as normal).
The majority (83%) of respondents also reported making dietary changes to control symptoms. Around 67% noted this had a positive effect on pain.
The survey listed 20 different diets (plus “other”), allowing participants to select all the diets they’d tried and explain which had affected their pain symptoms. Some of the most popular diets patients had tried included: reducing alcohol intake, going gluten-free, going dairy-free, drinking less caffeine and reducing intake of processed foods and sugar.
Giving up processed and sugary foods was a common diet change many women with endometriosis made. Tatjana Baibakova/Shutterstock
Around half of participants reported improvements in their pain after adopting at least one of these diets. For the most popular diets, a reduction in pain was reported by 53% who reduced alcohol consumption, 45% who went gluten-free and dairy-free and 43% who reduced caffeine intake.
Reducing inflammation
This survey, which was the largest of its kind to date, was only conducted in English. This might have limited participation. Additionally, the observed changes were all self-reported, meaning we cannot confirm that the dietary modifications directly caused the changes in pain.
Still, our findings show diet may be an important tool in managing the pain caused by endometriosis. Importantly, no specific diet benefits everyone, so it may take some trial and error to figure out what works best. It’s also worth noting that diet changes appeared to be less beneficial for those with the most severe symptoms.
Research into why people with endometriosis experience pain has identified excess inflammation as a key factor. Inflammation is the body’s mechanism for fighting off an infection or recovering from an injury. In people with endometriosis, it’s thought that the inflammatory response is overstimulated – triggering sensitisation of nerves and amplifying the perception of pain.
Certain foods may also promote inflammation in the body. For instance, it’s thought that gluten and dairy could promote inflammation due to the way they interact with the cells lining the gut and the by-products they produce when broken down by the gut microbes. These by-products have the potential to move around the body and cause more widespread inflammation. Alcohol is also known to be pro-inflammatory.
Reducing intake of certain foods may therefore help reduce overall inflammation levels in people with endometriosis. This may explain why the participants in our study, and others, reported seeing improvements in their symptoms as a result of cutting out inflammatory foods.
Moving forward, we need properly controlled clinical studies that monitor food intake, real-time recording of pain and IBS-like symptoms, and precise measurement of inflammation in the body, in order to understand the reasons why diet may help people with endometriosis.
This is something our research team is already working on. We’re launching a large-scale study with more than 1,000 people who have endometriosis. Each participant will donate stool and blood samples, record food intake details and report on the use of pain medications, supplements, prebiotics, probiotics and dietary modifications. The long-term goal of this project is to support a more holistic and personalised approach to caring for people with endometriosis.
Philippa Saunders has received funding from The Medical Research Council. She is a Fellow of the Academy of Medical Sciences and sits on the Scientific Advisory Group of the Royal College of Obstetrics and Gynaecology.
Andrew Horne reports receiving grants from the National Institute for Health and Care Research, Chief Scientist Office, Wellbeing of Women, Roche Diagnostics, and European Union, receiving consultancy and lecture fees from Theramex, Roche Diagnostics and Gedeon Richter, and having patents issued for a UK patent application No. 2217921.2 and international patent application No. PCT/GB2023/053076 outside the submitted work. He is President-elect of the World Endometriosis Society and Trustee to Endometriosis UK. He is Specialty Advisor to the Scottish Government’s Chief Medical Officer for Obstetrics and Gynaecology.
Francesca Hearn-Yeates does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – France – By Maxime Massey, Docteur en Sciences de Gestion & Innovation – Chercheur affilié à la Chaire Improbable, ESCP Business School
The 2024 arrest and subsequent release of activist Paul Watson, the founder of the NGO Sea Shepherd that fights to protect ocean biodiversity, highlighted a division between two opposing camps. There are those who want to stay true to the NGO’s DNA by continuing to practise strong activism against poaching states, and those who believe there is too much at stake in remaining confrontational and advocate instead for more measured actions to institutionalize the NGO. This opposition reflects the dilemma faced by many “pirate organizations”, a concept introduced by scholars Rodolphe Durand and Jean-Philippe Vergne.
What are pirate organizations?
Pirate organizations are defined by three key characteristics.
they develop innovative activities by exploiting legal loopholes
they defend a “public cause” to support neglected communities, who in turn support them
by introducing innovations that address specific social needs, they disrupt monopolies and contribute to transforming economic and social systems
However, to do these things effectively, pirate organizations must become legitimate. An organization is considered legitimate when its various audiences (customers, media, the state, etc.) perceive its actions as desirable according to prevailing values, norms and laws. Legitimacy is built through a process known as legitimation. For pirate organizations, this is particularly challenging, as they are often viewed as both illegal and illegitimate by the state and established industry players. These actors apply pressure to hinder legitimation. So how do pirate organizations build their legitimacy? We examined this question through the emblematic case of Heetch.
Heetch’s business model, based on the principles of the “sharing economy”, encroached on the monopoly of taxis and the regulated sector of professional chauffeur-driven vehicles (VTCs). Despite challenges, Heetch gradually built its legitimacy through three distinct phases, responding to pressures in different ways.
Stage 1: ‘clandestine pragmatism’ (2013-2015)
When Heetch launched in 2013, a conflict was brewing in the urban transport sector. On one side, there were new applications for VTC services (such as Uber) and for private driver platforms (such as UberPop and Heetch); on the other, there were traditional taxis and their booking departments (such as G7). The latter, along with government authorities, began exerting pressure to shut down the apps, with Uber receiving most of the media attention.
During this phase, Heetch adopted a strategy of “clandestine pragmatism”. The start-up avoided direct confrontations and stayed “under the radar” of the media. This approach is similar to “bootlegging” – concealing an innovative activity during its early stages. Heetch built a pragmatic legitimacy among its immediate audience using informal techniques such as word-of-mouth. However, its legitimacy remained limited, because it operated outside media scrutiny and without state approval.
Stage 2: ‘subversive activism’ (2015-2017)
Heetch reacted by engaging in “subversive activism”. The founders spoke out in the media to defend their service, emphasizing its public utility, particularly for young suburban residents needing nighttime mobility. The start-up generated buzz by releasing a satirical video featuring altered images of political figures in their youth. Heetch leveraged its pragmatic legitimacy, already established within its community, to gain media legitimacy among a broader audience of people, including journalists and policymakers. The organization gained public recognition, but also faced increasing legal battles.
Stage 3: ‘tempered radicalism’ (2017-present)
In March 2017, a court ruled against Heetch, deeming its operations illegal. Heetch temporarily suspended its service but relaunched two weeks later with a new business model employing professional drivers. Two months later, Heetch attempted to reintroduce private drivers, but, after facing additional legal action, it abandoned this approach after six months to focus exclusively on legal transportation services.
During this phase, Heetch practised “tempered radicalism”. The company integrated into the system while continuing its “fight” in a more moderate manner, avoiding direct confrontation with the state and industry players. It adopted three key strategies:
compliance – respecting the law
compromise – balancing its transportation service with its public mission
manipulation – lobbying to influence regulations
Through this approach, Heetch secured regulatory legitimacy while strengthening its existing pragmatic and media legitimacy. The company was recognized by the French government and included in the French Tech 120 and Next 40 programmes for the country’s most promising start-ups. It also became the first ride-hailing platform to attain “mission-driven company” status.
Is ‘piracy’ a growth accelerator?
Ultimately, our study highlights the value of piracy as a strategy for kickstarting the growth of an organization that serves a public cause. By embracing this approach, a pirate organization can drive systemic change to address social or environmental challenges.
That said, piracy carries an inherent risk: at some point, it will likely face a legitimacy crisis triggered by resistance from monopolies or public authorities. The recent struggles of Paul Watson serve as testament. As he aptly puts it: “You can’t change the world without making waves”.
The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have declared no affiliations other than their research organisation.
Source: The Conversation – UK – By Ed Turner, Reader in Politics, Co-Director, Aston Centre for Europe, Aston University
The far-right Alternative for Germany (AfD) has topped a national poll for the first time, prompting the popular Bild newspaper to carry the headline: “AfD breaks the magic barrier”. The poll put the AfD on 26% and the Christian democratic CDU/CSU on 25%.
This is just one opinion poll, but since February’s early federal election, the direction of travel has been clear. Governments sometimes become unpopular mid-term, but Germany isn’t mid-term. The federal election was just two months ago, and the new government hasn’t yet been formed (this routinely takes months in Germany). Nor has CDU leader Friedrich Merz become chancellor; the date pencilled in for that is May 6.
So these clear polling shifts (with the CDU/CSU down about 3% on the federal election, the AfD up about 5%) are striking. They owe little to any finesse by the party that has taken the lead, the AfD, and much more to the unusual circumstances in which Germany’s mainstream parties have found themselves. They also pose a salutary warning about possible future developments.
Following the recent election, the AfD has a record 152 parliamentarians and is currently embroiled in an argument about whether, given its expanded size, it can take over a meeting room currently occupied by the SPD – a sensitive topic as it is named after Otto Wels, a social democrat who opposed Hitler’s seizure of power.
So far, its approach has been to attack the political mainstream it brands “cartel parties”. In the new Bundestag’s first meeting, the AfD’s Stephan Brandner took to insulting other parties (the SPD and Greens were “political dwarf Germans”, mainstream parties were “lying” and “cheating”). None of this seems likely to have driven the party’s poll surge – although the AfD does find some traction when accusing Merz of betraying conservative voters.
What has, however, affected the polls is Merz himself. The CDU leader presented himself as a fiscal hawk during the federal election campaign, but within days of his win, he performed a volte-face. He agreed to relax Germany’s constitutional restrictions on debt so that defence spending above 1% of GDP would no longer be counted against the debt brake, and the same would apply to a new €500 billion fund for infrastructure.
The change also meant Germany’s states could run a modest deficit. These moves owed much to pressure from the social democrat SPD – the infrastructure demand in particular was a key condition from Merz’s only possible coalition partner. But there was also a clear need to spend more on defence (given global developments) and infrastructure, with no other funds being available.
Early April’s Politbarometer poll showed just 36% thinking it “good” if Merz became chancellor (59% “not good”). On a scale of 5 to -5, respondents rated Merz at -0.8. Even though the public backs the changes to debt rules he has made, there is a sense that Merz was not honest with them in the election campaign.
These poor ratings are in spite of coalition talks between CDU/CSU and SPD having gone reasonably well. Not only did they agree on the debt rule reform, but a coalition treaty is now being voted on by SPD members. The CDU will agree it at the end of the month while the Bavarian CSU has already given the green light.
It includes significant tightening of migration policy (at the outer reaches of what the SPD would agree to), some cuts to VAT and corporation tax, and nods in the direction of income tax cuts for lower and middle earners and a higher minimum wage. That said, there has already been public argument between CDU/CSU and SPD about how binding these commitments are – not a good omen for future co-operation.
Pressure on both sides
So while this poll doesn’t change the fact that Merz will almost certainly be voted in as chancellor leading a CDU/CSU coalition with the SPD, it does show that the coalition is already facing an age-old problem for “grand coalitions” between centre-left and centre-right parties.
The risk is always that they will end up strengthening support for parties to their left and right. The SPD faces a serious threat from the Greens and the resurgent Left Party amongst those who would favour a more open attitude to immigration and higher taxes for top earners, for example.
No matter how far Merz goes on immigration and tax cuts, the AfD will accuse him of betraying core conservative values and may continue to gain ground as a result. Some leading CDU politicians have suggested treating the AfD as a more “normal” opponent (for instance in allowing it to chair parliamentary committees). But that would hardly be a game-changer.
Merz’s difficulties are heightened by the global economic situation: Germans are already deeply pessimistic about economic developments, and the impacts and instability generated by US tariffs, whether implemented or potential, put the country in the eye of the storm, making the job of governing more difficult still.
A clear majority of German voters still rejects any prospect of the AfD joining the government, but they may have to get used to it being ahead in opinion polls.
Ed Turner receives funding from the German Academic Exchange Service.
In the highly anticipated judgment announced April 17, the court ruled that the definition of “sex”, “man” and “woman” in the Equality Act refers to “biological sex”. It found that this does not include those who hold a gender recognition certificate (trans people who have had their chosen gender legally recognised). In simple terms, “women” does not include transgender women.
It is important to note that the court’s remit was focused on interpretation of existing laws, not creating policy. The court affirmed that trans people should not be discriminated against, nor did it intend to provide a definition of sex or gender outside of the application of the Equality Act.
The prime minister has said he welcomes the “real clarity” brought by the ruling. But while it may bring some legal clarity, questions remain about the practical implementation. The judgment also raises new questions about the operation of the Gender Recognition Act, and what it now means to hold a gender recognition certificate.
What was the court case?
The gender-critical feminist group For Women Scotland challenged the Scottish government’s guidance on the operation of the Equality Act in relation to a Scottish law that sets targets for increasing the proportion of women on public boards.
The definition of a “woman” for the purposes of that law included trans women who had undergone, or were proposing to undergo, gender reassignment.
The issue that the court had to address was whether a person with a full gender recognition certificate (GRC) which recognises that their gender is female, is a “woman” for the purposes of the Equality Act 2010. The act gives protection to people who are at risk of unlawful discrimination.
The court’s decision was that the meaning of “sex” was biological and so references in the act to “women” and “men” did not, therefore, apply to trans women or trans men who hold GRCs.
What has changed with this ruling?
Prior to the ruling, there were contested views as to whether trans people could access certain single-sex spaces – some of the most contentious being prisons, bathrooms and domestic abuse shelters.
The ruling does not require services to exclude trans people from all single-sex spaces. It does, however, clarify that if a service operates a single-sex space, for example a gym changing room, then exclusion is based on biological sex and not legal sex. Neither the court nor the government has said how “biological sex” would be defined or proven.
A service provider may operate a single-sex space on the basis of privacy or safety of users. To base this on biological sex must be a proportionate means of achieving a legitimate aim – for example, the safety of women in a group for abuse survivors. This means that service providers may still operate trans-inclusive policies, but they may open themselves to legal challenge.
What does this mean for the Gender Recognition Act?
The Gender Recognition Act 2004 introduced gender recognition certificates (GRCs), which certify that a person’s legal gender is different from their assigned gender at birth. A trans person can apply for a GRC in order to change their gender on their birth certificate. For legal purposes, they are then recognised as their acquired gender.
The ruling does not strike down or affect the operation of the Gender Recognition Act. But it does give the impression that the GRA – and holding a GRC – is now less effective.
The ruling clarifies that a trans woman who has a GRC and is recognised legally in her acquired gender can be excluded from single-sex spaces on the ground of biological sex, as would a trans woman without a GRC. Before the ruling, a trans person with a GRC would have been able to access many single-sex spaces and services that match the gender on their GRC.
In order to be granted a GRC, a person must show that they have lived in their acquired gender for at least two years and that they intend to live in that gender until death. Their application must be approved by two doctors, but – in what was a world-first at the time it was introduced – does not require any medical transition.
The Supreme Court states that trans people (with or without a GRC) will still be protected from discrimination. Sex and gender reassignment are both protected characteristics under the Equality Act. This means that trans people may still rely on the law to protect them from direct or indirect discrimination levelled at them on the basis of being trans, or because of their perceived sex.
The court uses the example that a trans woman applying for a job being denied that job on the basis of being trans would still be entitled to sue for discrimination.
How will single-sex services operate?
The key question now, both for service providers and trans people, is what spaces trans people will be able to use. It is not the Supreme Court’s job to issue guidance on this – and the judgment is notably silent on the practical implementation of the ruling.
Service providers may choose to offer unisex spaces, for example gender neutral bathrooms. British Transport Police have already confirmed that strip searches of those arrested on the network would be conducted based on biological sex, and other services will likely follow.
It is up to service providers, employers and healthcare providers to interpret the ruling and decide how to apply it. The government has said that further guidance will be issued by the Equality and Human Rights Commission. But how the ruling is implemented in practice, and what it means for other laws like the Gender Recognition Act, will likely be debated for some time.
Alexander Maine does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The 1972 concert film Pink Floyd Live at Pompeii, back in cinemas this week, remains one of the most distinctive concert documentaries ever recorded by a rock band.
The movie captured the band on the brink of international stardom, released seven months before their breakout album Dark Side of the Moon, which would go on to sell 50 million copies and spend 778 weeks on the Billboard charts.
The film was the first time a rock concert took place in the ruins of an archaeological site. This intermingling of art and archaeology would change the way many thought of Pompeii.
The venue was Pompeii’s amphitheatre. Constructed around 70 BCE, it was one of the first permanent amphitheatres built in Italy, designed to hold up to 20,000 spectators.
From graffiti and advertisements, we know it was used in antiquity for gladiatorial fights, displays and hunts of wild beasts, and athletic contests.
Famously, the Roman historian Tacitus tells us that in 59 CE a deadly brawl occurred between Pompeiians and residents of the nearby town of Nuceria during games, resulting in a ten-year ban on gladiatorial contests at the venue. The amphitheatre was destroyed by the eruption of Vesuvius in 79 CE.
There is a long tradition of authors, artists, filmmakers and designers taking inspiration from the site and its destruction. A 13-year-old Mozart’s visit to the Temple of Isis at the site inspired The Magic Flute in 1791.
In the rock music era, Pompeii has inspired numerous artists, especially around themes of death and longing. Cities in Dust (1985) by Siouxsie and the Banshees was perhaps the most famous until Bastille’s 2013 hit Pompeii. In The Decemberists’ Cocoon (2002), the destruction of Pompeii acts as a metaphor for the guilt and loss in the aftermath of the September 11 attacks.
Since 2016, the amphitheatre has hosted concerts – with audiences this time. Appropriately, one of the first was a performance by Pink Floyd’s guitarist David Gilmour. His show over two nights in July 2016 took place 45 years after first playing at the site.
But how did Pink Floyd come to play at Pompeii in 1972?
Rethinking rock concert movies
It was the peak era of rock concert documentaries. Woodstock (1970), The Rolling Stones’ Gimme Shelter (1970) and other documentaries of the era placed the cameras in the audience, giving the cinema-goer the same perspective as the concert audience.
As a concept, it was getting stale.
Filmmaker Adrian Maben had been interested in combining art with Pink Floyd’s music. He initially pitched a film of the band’s music over montages of paintings by artists such as Rene Magritte. The band rejected the idea.
Maben returned to them after a holiday in Naples, realising the ambience of Pompeii suited the band’s music. A performance without an audience provided the antithesis of the era’s concert films.
The performance would become iconic, particularly the scenes of Roger Waters banging a large gong on the upper wall of the amphitheatre, and the cameras panning past the band’s black road case to reveal the band in the ancient arena.
It was as far away from Woodstock as possible.
The performance was filmed over six days in October 1971 in the ancient amphitheatre, with the band playing three songs in the ancient venue: Echoes, A Saucerful of Secrets, and One of These Days.
Ancient history professor Ugo Carputi of the University of Naples, a Pink Floyd fan, had persuaded authorities to allow the band to film and to close the site for the duration of filming. Besides the film crew, the band’s road crew – and a few children who snuck in to watch – the venue was closed to the public.
In addition to the performance, the four band members were filmed walking over the volcanic mud around Boscoreale, and their performances in the film were interspersed with images of antiquities from Pompeii.
The movie itself was fleshed out with studio performances in a Paris TV studio and rehearsals at Abbey Road Studios.
Marrying art and music
Famously the Pink Floyd film blends images of antiquities from the Naples Archaeological Museum with the band’s performances.
Roman frescoes and mosaics are highlighted during particular songs. Profiles of bronze statues meld with the faces of band members, linking past and present.
Later scenes have the band backdropped by images of frescoes from the famed Villa of the Mysteries and of the plaster casts of eruption victims.
The band’s musical themes of death and mystery link with ancient imagery, and it would have been the first time many audience members had seen these masterpieces of Roman art.
Pink Floyd Live at Pompeii marked a brave experiment in rock concert movies.
Watching it more than 50 years later, it is a timepiece of early 70s rock and a remarkable document of a band on the brink of fame.
Because of their progressive rock sound, sonic experimentation and philosophical lyrics, Pink Floyd’s fans often said the group was “the first band in space”. They even eventually had a cassette of their music played in space.
But many are not aware of their earlier roots in the dust of ancient Pompeii. The re-release of the film gives an opportunity to enjoy the site’s unlikely role in music history.
Pink Floyd at Pompeii – MCMLXXII is in cinemas from Thursday.
Craig Barker does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Carole Cusack does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The right to social security is enshrined in several international agreements on human rights. But the UK’s system – even before the disability benefits cuts announced earlier this year – falls way below these standards.
For a new report published today, Amnesty International asked my colleague Lyle Barker and me to review the evidence about the state of the UK’s social security in relation to international human rights law.
The UK has signed and ratified a number of international agreements on human rights. One of these is the 1966 International Covenant on Economic, Social and Cultural Rights (ICESCR), which lays out the right to social security. An accompanying document defines the three key principles of this right as:
Availability: a social security system established in law, administered publicly, and materially reachable by those who need it.
Adequacy: benefits must be suitable, both in amount and in duration, to realise essential socioeconomic rights.
Accessibility: everyone should be covered by the social security system, paying particular attention to disadvantaged and marginalised individuals and groups.
The conclusion of our study for Amnesty International is crystal clear: even disregarding the cuts announced in March, the UK’s social security system does not meet these standards.
Availability
Our review of the literature shows a widespread underclaiming of benefits. It has been estimated that in 2024, £22.7 billion in income-related benefits went unclaimed, a £4 billion increase from the previous year.
Gaps in official data hinder a clear understanding of why many people are missing out on the support they are entitled to. But qualitative evidence suggests this is largely due to fear, stigma, bureaucratic and digital hurdles, and eligibility cliff edges for means-tested benefits.
In recent years, the UK government has adopted a contentious and punitive stance toward benefit recipients. Media and political rhetoric have portrayed those who claim benefits as idle or undeserving scroungers.
This stigma harms the mental health and self-esteem of people experiencing poverty. It can result in shame and secrecy, and create barriers to people accessing support they are entitled to.
Our research for Amnesty International concludes that UK claimants do not get enough information and support about their rights to benefits. Combined with the stigma of claiming, the UK is falling far short of making benefits “available” in line with international standards.
Adequacy
Since the austerity policies of the 2010s, the UK’s social security system has become significantly less adequate in supporting vulnerable people and families. The basic rate of universal credit (the main benefit for working-age people on a low income) is at a 40-year low in real terms amid a cost of living crisis.
Restrictive policies, such as the benefit cap (introduced in 2013 to set a maximum limit to the total benefits received by a household) and the two-child limit have curtailed access to essential benefits. Although inflation adjustments in the last two years provided some relief, many benefits still fail to keep up with rising living costs.
The two-child limit is the cruellest expression of the inadequacy of the UK’s social security system. Introduced by the Conservative government in 2017, the two-child limit restricts financial support through universal credit to two children. It is likely to be the most significant single cause of child poverty in the UK, including in families where adults work but do not earn enough to make ends meet.
When Labour returned to power, there was much speculation about whether they would reverse the two-child limit. But despite pleas from experts and people with direct experience, the government has persisted in retaining it.
Accessibility
Our study lays out the many barriers to accessibility in the UK’s system: for example, the bureaucratic hurdles in the assessment process, and the disproportionate impact of punitive sanctions on lone mothers and on minority ethnic claimants.
The UK operates a benefits sanction regime, which imposes penalties on claimants who fail to meet certain conditions. These include attending jobcentre appointments or accepting job offers. In general, sanctions and the fear of sanctions erode the trust between benefit claimants and the social security system.
Benefits sanctions are just one of the barriers to accessing social security. 1000words/Shutterstock
In February, as in its previous review in 2016, the UN Committee on Economic, Social and Cultural Rights recommended that the UK review the use of benefit sanctions to ensure they are used proportionately and are subject to prompt and independent dispute resolution mechanisms.
Another accessibility concern is the shift to a digital-by-default system in the 2010s. While intended to make accessing benefits more efficient, it has become an administrative barrier.
Many people, particularly the elderly and others who are less digitally literate, struggle to navigate the benefits system. It excludes people without reliable internet access, underscoring a digital divide that prevents meaningful access to social security.
Meeting standards
Given the evidence, it is no surprise that earlier this year, the UN Committee on Economic, Social and Cultural Rights urged the UK government to assess the cumulative effects of the austerity measures introduced in the 2010s.
In particular, the committee recommended reversing the two-child limit, the benefit cap and the five-week delay for the first universal credit payment, and increasing the budget allocated to social security. These recommendations were made before the changes announced in the spring statement.
To live up to the internationally recognised right to social security, the UK should recognise in law, policy and practice that social security is a human right. And, that it is essential to the fulfilment of other human rights.
Amnesty International recommends the government set up a commission with statutory powers, to produce a strategy for “wholesale reform” of the social security system. The UK must establish a minimum support level and an essentials guarantee, to ensure beneficiaries can consistently meet their basic needs. A good way to start would be abolishing the two-child limit once and for all.
Koldo Casla and Lyle Barker wrote the study underpinning Amnesty International’s report on the state of the right to social security in the UK.
Researchers and malaria programmes, however, must strengthen collaborations. This will ensure the limited resources are used in ways that make the most impact.
The numbers
Some progress has been made, but in some cases there have been reverses.
Between 2000 and 2015, new cases fell by 18%, from 262 million to 214 million. Since then, progress has stalled.
The World Health Organization estimates that approximately 2.2 billion cases have been prevented between 2000 and 2023. Additionally, 12.7 million deaths have been avoided. In 2025, 45 countries are certified as malaria free. Only nine of those countries are in Africa. These include Egypt, Seychelles and Lesotho.
The global target set by the WHO was to reduce new cases by 75% compared to cases in 2015. Africa should have reported approximately 47,000 cases in 2023. Instead there were 246 million.
Almost every African country with ongoing malaria transmission experienced an increase in malaria cases in 2023. Exceptions to this were Rwanda and Liberia.
So why is progress stagnating and in many cases reversing?
The setbacks
Effective malaria control is extremely challenging. Malaria parasite and mosquito populations evolve rapidly. This makes them difficult to control.
Africa is home to malaria mosquitoes that prefer biting humans to other animals. These mosquitoes have also adapted to avoid insecticide-treated surfaces.
It has been shown in South Africa that mosquitoes may feed on people inside their homes, but will avoid resting on the sprayed walls.
Mosquitoes have also developed mechanisms to resist the effects of insecticides. Malaria vector resistance to certain insecticides used in malaria control is widespread in endemic areas. Resistance levels vary around Africa.
Resistance to the pyrethroid class is most common. Organophosphate resistance is rare, but present in west Africa. As mosquitoes become resistant to the chemicals used for mosquito control, both the spraying of houses and insecticide treated nets become less effective. However, in regions with high malaria cases, nets still provide physical protection despite resistance.
An additional challenge is that malaria parasites continue to develop resistance to anti-malarial drugs. In 2007 the first evidence began to emerge in south-east Asia that parasites were developing resistance to artemisinins. These are key drugs in the fight against malaria.
Recently this has been shown to be happening in some African countries too. Artemisinin resistance has been confirmed in Eritrea, Rwanda, Tanzania and Uganda. Molecular markers of artemisinin resistance were recently detected in parasites from Namibia and Zambia.
Malaria parasites have also developed mutations that prevent them from being detected by the most widely used rapid diagnostic test in Africa.
Countries in the Horn of Africa, where parasites with these mutations are common, have changed the malaria rapid diagnostic tests used to ensure early diagnosis.
The progress
Nevertheless, the fight against malaria has been strengthened by novel control strategies.
Firstly, after more than 30 years of research, two malaria vaccines – RTS,S and R21 – have finally been approved by the WHO. These are being deployed in 19 African countries.
These vaccines have reduced disease cases and deaths in the high-risk under-five-years-old age group. They have reduced cases of severe malaria by approximately 30% and deaths by 17%.
Secondly, effectiveness of long-lasting insecticide-treated nets has been improved.
New insecticides have been approved for use. Chemical components that help to manage resistance have also been included in the nets.
Thirdly, novel tools are showing promise. One option is attractive toxic sugar baits, which exploit the fact that sugar is a natural food source for mosquitoes. Biocontrol by altering the native gut bacteria of mosquitoes may also prove effective.
Fourthly, mosquito populations can be reduced by releasing sterilised male or genetically modified mosquitoes into wild populations. In trials currently under way in Burkina Faso, genetically sterilised males have been released on a small scale, with promising reductions in the local population.
Fifthly, two new antimalarials are expected to be available in the next year or two. Artemisinin-based combination therapies are the standard treatment for malaria. An improvement on this is triple artemisinin-based combination therapy, which pairs the standard combination with an additional antimalarial. Studies in Africa and Asia have shown these triple combinations to be very effective in controlling malaria.
The second new antimalarial is the first non-artemisinin-based drug to be developed in over 20 years. Ganaplacide-lumefantrine has been shown to be effective in young children. Once available, it can be used to treat parasites that are resistant to artemisinin, because it has a completely different mechanism of action.
The end game
For the first time in several years, the malaria control toolbox has been strengthened with novel tools and strategies that target both the vector and the parasite. This makes it an ideal time to double down in the fight against this deadly disease.
In 2020, the WHO identified 25 countries with the potential to stop malaria transmission within their borders by 2025. While none of these countries eliminated malaria, some have made significant progress. Costa Rica and Nepal reported fewer than 100 cases. Timor-Leste reported only one case in recent years.
Three southern African countries are included in this group: Botswana, Eswatini and South Africa. Unfortunately, all these countries showed increases in cases in 2023.
With the new tools, these and other countries can eliminate malaria, getting us closer to the dream of a malaria-free world.
Shüné Oliver receives funding from the National Research Foundation of South Africa and the South African Medical Research Council. She is associated with both the National Institute for Communicable Diseases and the Wits Research Institute for Malaria.
Jaishree Raman receives funding from the Gates Foundation, Global Fund, Wellcome Trust, National Research Foundation, National Institute for Communicable Diseases, South African Medical Research Council, and the Research Trust. She is affiliated with the National Institute for Communicable Diseases, the Wits Institute for Malaria Research, University of Witwatersrand, and the Institute for Sustainable Malaria Control, University of Pretoria.
Artificial intelligence (AI) is increasing productivity and pushing the boundaries of what’s possible. It powers self-driving cars, social media feeds, fraud detection and medical diagnoses. Touted as a game changer, it is projected to add nearly US$15.7 trillion to the global economy by the end of the decade.
Africa is positioned to use this technology in several sectors. In Ghana, Kenya and South Africa, AI-led digital tools in use include drones for farm management, X-ray screening for tuberculosis diagnosis, and real-time tracking systems for packages and shipments. All these are helping to fill gaps in accessibility, efficiency and decision-making.
However, it also introduces risks. These include biased algorithms, resource and labour exploitation, and e-waste disposal. The lack of a robust regulatory framework in many parts of the continent increases these challenges, leaving vulnerable populations exposed to exploitation. Limited public awareness and infrastructure further complicate the continent’s ability to harness AI responsibly.
What are African countries doing about it?
To answer this, my research mapped out what Ghana and Rwanda had in place as AI policies and investigated how these policies were developed. I looked for shared principles and differences in approach to governance and implementation.
The research shows that AI policy development is not a neutral or technical process but a profoundly political one. Power dynamics, institutional interests and competing visions of technological futures shape AI regulation.
I conclude from my findings that AI’s potential to bring great change in Africa is undeniable. But its benefits are not automatic. Rwanda and Ghana show that effective policy-making requires balancing innovation with equity, global standards with local needs, and state oversight with public trust.
The question is not whether Africa can harness AI, but how and on whose terms.
How they did it
Rwanda’s National AI Policy emerged from consultations with local and global actors. These included the Ministry of ICT and Innovation, the Rwandan Space Agency, and NGOs like the Future Society and the GIZ FAIR Forward. The resulting policy framework is in line with Rwanda’s goals for digital transformation, economic diversification and social development. It includes international best practices such as ethical AI, data protection, and inclusive AI adoption.
Ghana’s Ministry of Communication, Digital Technology and Innovations conducted multi-stakeholder workshops to develop a national strategy for digital transformation and innovation. Start-ups, academics, telecom companies and public-sector institutions came together and the result is Ghana’s National Artificial Intelligence Strategy 2023–2033.
Both countries have set up or plan to set up Responsible AI offices. This aligns with global best practices for ethical AI. Rwanda focuses on local capacity building and data sovereignty. This reflects the country’s post-genocide emphasis on national control and social cohesion. Similarly, Ghana’s proposed office focuses on accountability, though its structure is still under legislative review.
Ghana and Rwanda have adopted globally recognised ethical principles like privacy protection, bias mitigation and human rights safeguards. Rwanda’s policy reflects Unesco’s AI ethics recommendations and Ghana emphasises “trustworthy AI”.
Both policies frame AI as a way to reach the UN’s Sustainable Development Goals. Rwanda’s policy targets applications in healthcare, agriculture, poverty reduction and rural service delivery. Similarly, Ghana’s strategy highlights the potential to advance economic growth, environmental sustainability and inclusive digital transformation.
Key policy differences
Rwanda’s policy ties data control to national security. This is rooted in its traumatic history of identity-based violence. Ghana, by contrast, frames AI as a tool for attracting foreign investment rather than a safeguard against state fragility.
The policies also differ in how they manage foreign influence. Rwanda has a “defensive” stance towards global tech powers; Ghana’s is “accommodative”. Rwanda works with partners that allow it to follow its own policy. Ghana, on the other hand, embraces partnerships, viewing them as the start of innovation.
While Rwanda’s approach is targeted and problem-solving, Ghana’s strategy is expansive, aiming for large-scale modernisation and private-sector growth. Through state-led efforts, Rwanda focuses on using AI to solve immediate challenges such as rural healthcare access and food security. In contrast, Ghana looks at using AI more widely – in finance, transport, education and governance – to become a regional tech hub.
Constraints and solutions
The effectiveness of these AI policies is held back by broader systemic challenges. The US and China dominate in setting global standards, so local priorities get sidelined. For example, while Rwanda and Ghana advocate for ethical AI, it’s hard for them to hold multinational corporations accountable for breaches.
Energy shortages further complicate large-scale AI adoption. Training models require reliable electricity – a scarce resource in many parts of the continent.
To address these gaps, I propose the following:
Investments in digital infrastructure, education and local start-ups to reduce dependency on foreign tech giants.
African countries must shape international AI governance forums. They must ensure policies reflect continental realities, not just western or Chinese ones. This will include using collective bargaining power through the African Union to bring Africa’s development needs to the fore. It could also help with digital sovereignty issues and equitable access to AI technologies.
Finally, AI policies must embed African ethical principles. These should include communal rights and post-colonial sensitivities.
Thompson Gyedu Kwarkye does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
“The Earth, our home, is beginning to look more and more like an immense pile of filth.” These aren’t the words of a radical sociologist or rogue climate scientist. They aren’t the words of a Conversation editor either. Nor are these:
“A selfish and boundless thirst for power and material prosperity leads both to the misuse of available natural resources and to the exclusion of the weak and disadvantaged.”
These are in fact quotes from Pope Francis, who died last weekend.
I never thought this job would have me writing newsletters in praise of a papal climate influencer, but here we are. You can read various obits and interesting takes on Pope Francis and what’s next for the Catholic church elsewhere on The Conversation. But here I want to focus on his thoughts on climate change and the impact he had.
Our common home
In 2015, two years after becoming pope, Francis published Laudato Si (Praise Be to You), a 183-page papal letter sent to all Catholic bishops on “care for our common home”. It was a significant intervention made just a few months before the climate summit that led to the Paris agreement.
Writing at the time, sustainability professor Steffen Böhm said that what made it so radical “isn’t just [Pope Francis’s] call to urgently tackle climate change. It’s the fact he openly and unashamedly goes against the grain of dominant social, economic and environment policies.”
For Böhm, who was then at the University of Essex but now works at Exeter, this radical message “puts him on a confrontation course with global powerbrokers and leaders of national governments, international institutions and multinational corporations”.
He quotes a section where the Pope says “those who possess more resources [and] power seem mostly to be concerned with masking the problems or concealing their symptoms, simply making efforts to reduce some of the negative impacts of climate change”. The Pope warns that “such effects will continue to worsen if we continue with current models of production and consumption”.
Böhm points out the Pope “might be the only person with both the clout and the desire to meaningfully deliver a message like this”.
Bernard Laurent of EM Business School in Lyon, says that in France the Pope’s message “managed to bring together both conservative currents – such as the Courant pour une Écologie Humaine (Movement for a Human Ecology), created in 2013 – and more open-minded Catholic intellectuals such as Gaël Giraud, a Jesuit and author of Produire Plus, Polluer Moins : l’Impossible Découplage? (Produce More, Pollute Less: the Impossible Decoupling?)”.
Clearly, this was a unique figure able to reach people who might not listen to a Greta Thunberg or an Al Gore.
But, while it’s great the Paris agreement was signed, it was still filled with the exact sort of market logic and buck-passing – carbon credits, “emit now, clean up later”, and so on – the Pope had criticised a few months previously. And climate change itself only got worse. In the years following, Pope Francis spoke at the UN and published a series of other “exhortations” related to climate change.
Did any of this make any difference?
Celia Deane-Drummond is a theology professor at the University of Oxford and director of a research institute named after the 2015 papal letter. In a piece published the same day Pope Francis’s death was announced, she looked at his influence on the global climate movement.
Deane-Drummond notes Pope Francis’s emphasis on listening to Indigenous people for instance in his lesser-known exhortation Querida Amazonia, which means “beloved Amazonia”, from February 2020.
“This exhortation resulted from his conversations with Amazonian communities and helped put Indigenous perspectives on the map. Those perspectives helped shape Catholic social teaching in the [papal letter] Fratelli Tutti, which means ‘all brothers and sisters’, published on October 3 2020.”
A key influencer
Perhaps the Pope’s biggest influence was on activists rather than policymakers. Deane-Drummond says he was often mentioned by participants in a research project on religion, theology and climate change she was part of.
“When we asked more than 300 [religious] activists representing six different activist groups who most influenced them to get involved in climate action, 61% named Pope Francis as a key influencer.”
The 2015 papal letter also gave rise to the Laudato Si movement which Deane-Drummond points out “coordinates climate activism across the globe. It has 900 Catholic organisations as well as 10,000 of what are known as Laudato Si ‘animators’, who are all ambassadors and leaders in their respective communities.”
There are specific religious arguments he was able to make to appeal to these groups, note Joel Hodge and Antonia Pizzy of Australian Catholic University.
They write that: “Francis argued combating climate change relied on the ‘ecological conversion’ of the human heart, so that people may recognise the God-given nature of our planet and the fundamental call to care for it. Without this conversion, pragmatic and political measures wouldn’t be able to counter the forces of consumerism, exploitation and selfishness.”
It’s not an argument that will particularly work on me. But then addressing the climate crisis will require all sorts of people to be persuaded of the need for serious action, including policy wonks, tech bros, radical activists, worried parents and, yes, people motivated by their religion.
The last pope didn’t have to say anything about the climate crisis. It’s not necessarily in the job description. But it’s a good thing that Pope Francis did speak about it and, as Deane-Drummond says: “We can only hope [the next pope] will build on his legacy and influence political change for the good, from the grassroots frontline right up to the highest global ambitions.”
Source: The Conversation – Canada – By Kelley Lee, Professor and Tier 1 Canada Research Chair in Global Health Governance; Scientific Co-Director, Bridge Research Consortium, Simon Fraser University
The World Health Organization (WHO) raises awareness each year of the importance of equitable access to lifesaving and health-protecting vaccines. Vaccines have saved more than 154 million lives worldwide over the past 50 years, a figure that does not even include vaccines for COVID-19, malaria, influenza, human papillomavirus and other deadly diseases.
Continued benefits from vaccines under threat in Canada
Supported by a universal health-care system, strong public health infrastructure, and publicly funded programs, Canada has enjoyed a century of decline in diseases such as measles, diphtheria and pertussis thanks to vaccines.
Recent trends, however, are cause for concern. A decline in vaccine confidence, worsening since the COVID-19 pandemic, challenges of access and the inclusion of vaccines in partisan political rhetoric have led to reduced vaccine uptake.
Canada must take stock of this changing landscape. Chief Public Health Officer Theresa Tam’s 2024 report, Realizing the Future of Vaccination for Public Health, sets out a clear framework for realizing the full potential of vaccination in Canada.
In addition to major investments in new vaccine development and biomanufacturing in Canada, this public health framework is designed to support a better co-ordinated national immunization system, concerted efforts to address public trust, and efforts to improve equitable access.
Need for a national immunization registry
The lack of integration of Canada’s fragmented immunization data across provinces and territories makes it more challenging to plan vaccine rollouts, identify coverage gaps or rapidly track adverse events after immunization. The Canadian Public Health Association and others have long called for a comprehensive and harmonized immunization registry as essential for a modern and responsive system.
A national framework for vaccine data collection would allow policymakers and practitioners to make evidence-informed decisions in real time.
Supporting public trust
Sustaining high vaccination coverage begins with public trust in science, government and public health. While most people still trust science and scientists, identifying trustworthy sources of information has become a serious challenge.
Insufficient transparency around vaccine development, regulation and monitoring of adverse reactions needs addressing. Concerns about the rapid pace of scientific advances, including the 100 Days Mission to produce an effective vaccine for a future pandemic, must be recognized.
Initiatives during the pandemic to support equitable access — such as mobile clinics, culturally appropriate information and community-led initiatives — increased uptake. These approaches need to be extended to routine vaccination.
Moreover, building supportive environments means incorporating an “equity by design” approach, which applies regulatory tools and systems design to support vaccine equity from discovery to rollout. This ensures that practical requirements, such as refrigerated cold chains or needle-based delivery, do not contribute to disparities in access.
Bridge Research Consortium
The Bridge Research Consortium (BRC) is a multidisciplinary team of social scientists and humanities scholars established in 2024 to understand the social and behavioural factors that influence new vaccine uptake in Canada.
Bridging understandings across the “pipeline” for developing new vaccines and therapeutics, and the public health system, the BRC supports tailored and equity-informed strategies that enhance public trust and equitable access. We will hear directly from communities across the country, identify concerns in real-time, and co-develop approaches that reflect diverse perspectives. We plan to achieve this through demystifying how vaccines are developed and produced, holding deliberative dialogues that bring together diverse perspectives on challenging topics, and creating a travelling science exhibit. World Immunization Week is a timely reminder of the importance of this work to enable Canada to realize the potential benefits of vaccines.
Immunity and Society is a new series from The Conversation Canada that presents new vaccine discoveries and immune-based innovations that are changing how we understand and protect human health. Through a partnership with the Bridge Research Consortium, these articles — written by academics in Canada at the forefront of immunology and biomanufacturing — explore the latest developments and their social impacts.
Kelley Lee receives funding from Canada’s Biomedical Research Fund, the Canada Foundation for Innovation, and the British Columbia Knowledge Development Fund to support the work of the Bridge Research Consortium. The BRC is one of 19 projects funded to support Canada’s Biomanufacturing and Life Sciences Strategy. She also receives funding from the Canadian Institutes of Health Research and New Frontiers in Research Fund to conduct research on pandemic preparedness and response. She currently serves as a Commissioner on the National University of Singapore-The Lancet Pandemic Readiness, Implementation, Monitoring and Evaluation (PRIME) Commission.
Ève Dubé receives funding from Canada’s Biomedical Research Fund and the Canada Foundation for Innovation to support the work of the Bridge Research Consortium. The BRC is one of 19 projects funded to support Canada’s Biomanufacturing and Life Sciences Strategy. She also receives funding from the Canadian Institutes of Health Research and the Fonds de recherche du Québec to conduct research on vaccine acceptance.
Janice E. Graham receives funding from CIHR and PHAC.
Noni MacDonald receives funding from CIHR, CIRN grants related to immunization as well as PHAC and CPHA consultation fees related to immunization. She is a member of the Canadian Paediatric Society and the International Pediatric Society, a donor to Canadian Public Health Association and WHO, and on board of the journal Vaccine.
Harvard University took the extraordinary step of suing the Trump administration on April 21, 2025, claiming that the pressure campaign mounted on the school by the president and his Cabinet to force viewpoint diversity on campus violated the Constitution’s guarantees of free speech.
“Defendants’ actions are unlawful,” Harvard’s lawsuit states. “The First Amendment does not permit the Government to ‘interfere with private actors’ speech to advance its own vision of ideological balance.’”
Trump issued the “Executive Order on Improving Free Inquiry, Transparency, and Accountability at Colleges and Universities” on March 21, 2019. In it, he expressed the importance of free inquiry and open debate to education and directed federal officials to use the federal government’s funding of higher education to ensure that universities promote free inquiry.
Channeling free-speech champions Benjamin Franklin and James Madison, Trump wrote that “free inquiry is an essential feature of our Nation’s democracy.”
Free speech is fundamental to human progress. Scientific, medical, technological and social advancements all rely on the free flow of information. Robust discussion and disagreement are equally important to maintaining a healthy constitutional republic.
In the words of the late U.S. Supreme Court Justice Robert Jackson, “If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.”
On the first day of his second term in office, Trump issued another free speech executive order. It affirms the administration’s commitment to free speech, directs that tax money is not used to abridge free speech and instructs federal employees to “identify and take appropriate action to correct past misconduct by the Federal Government related to censorship of protected speech.”
In a vacuum, Trump’s orders appear to bode well for free speech.
But what is important is free speech reality, not rhetoric. Three months into his second term, where does Trump stand?
The many interconnected orders, letters, statements and actions of Trump’s White House make an assessment of any positive effects difficult. On the other hand, the Trump administration has clearly violated and chilled free speech on many occasions.
At his second inauguration, Donald Trump promised to ‘stop all government censorship’ and ‘bring back free speech.’
Repression and retaliation
Attempts to silence the president’s adversaries are developing as a pattern.
Law firms and attorneys who have sued or prosecuted Trump, or represented his adversaries, have been targeted for retribution and concessions. It began with an executive order on March 6, 2025, directed at the U.S.-based global law firm Perkins Coie, which had once represented Trump’s opponent in the 2016 presidential race, Hillary Clinton. A second order was issued on March 14, 2025, against Paul, Weiss, Rifkind, Wharton & Garrison because it once employed an attorney who investigated Trump. Subsequently, at least six other prominent law firms were also targeted.
Several law firms acceded to the president’s demands, agreeing to accept clients without regard to political beliefs, to eliminate DEI practices, and to perform pro bono work valued in the hundreds of millions of dollars for causes Trump supports.
The firms that didn’t accede to the president’s demands had their security clearances revoked and their access to federal buildings restricted, and were banned from working for federal agencies. A few of the firms that didn’t relent have won temporary injunctions barring the administration’s actions against them.
The nonpartisan free speech advocacy organization Foundation for Individual Rights and Expression decried the orders as threatening the foundations of justice and free speech. In one of several challenges to these orders, U.S. District Judge Beryl Howell wrote on March 12, 2025, that Trump’s order appeared motivated by “retaliatory animus” and concluded that it “runs head on into the wall of First Amendment protections.” Two other federal courts reached similar conclusions.
In the first three months of his second term, Trump withdrew Secret Service protection of several prominent critics who are former federal government officials, including John Bolton, a former Trump national security adviser. Former Secretary of State Mike Pompeo, his top aide, Brian Hook, and former high-level health official Anthony Fauci also lost their security protection.
It is hard to imagine that these decisions won’t have a profoundly chilling effect on potential critics of the president, especially since the revocations were publicly announced and each individual has been the subject of credible threats resulting from their governmental service.
Targeting the press
A similar pattern exists for journalists, where Trump is using his power to punish organizations whose reporting he doesn’t like.
AP journalists were banned from the White House and Air Force One on Feb. 11, 2025, for refusing to refer to the Gulf of Mexico as the Gulf of America, the new name Trump had ordered for the body of water. On April 9, 2025, this ban was found to violate the First Amendment by a judge nominated by Trump during his first term.
Trump effectively closed Voice of America, after 83 years of continuous broadcasting, for being “anti-Trump” and radical in its views. By charter, the broadcaster represents “America, not any single segment of American society,” with “accurate, objective, and comprehensive” news and “a balanced and comprehensive projection of significant American thought and institutions” through television, radio, internet, social media and satellite broadcasts to peoples around the world.
The Federal Communications Commission has initiated regulatory actions against the licenses of several television stations over broadcasts the president has accused of being anti-Trump or biased in favor of Kamala Harris. These actions are at an early stage, and their outcomes remain to be determined.
Protesters in Somerville, Mass., on March 26, 2025, demand the release of Rumeysa Ozturk, a Turkish student at Tufts University, whose recent arrest by federal agents is seen as an assault on free speech. AP Photo/Michael Casey
Pressuring universities and students
Other administration actions, I believe, raise serious free speech issues.
Harvard isn’t the only university feeling pressure.
The administration is threatening to withhold federal money from universities as a way to coerce many of them to comply with administration policies in ways that implicate free speech and in some instances violate legal processes for the withholding of federal support.
Some of the Trump administration’s recent immigration enforcement efforts have targeted international students who are in the U.S. lawfully but who participated in Palestinian rights protests and disagreed with Israel’s actions during the war in Gaza.
In the past decade, the U.S. has fallen in measures of press freedom, rule of law and democratic governance, leading the Economist Intelligence Unit, a democracy watchdog, to classify it as a “flawed democracy.” Unsurprisingly, there has been a simultaneous rise in public support for authoritarianism. These changes make support for free speech increasingly important.
On March 4, 2025, Trump declared in a speech before a joint session of Congress that he “stopped all government censorship and brought free speech back to America.”
The record doesn’t support this claim.
Daniel Hall does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – Canada – By Ruolz Ariste, Adjunct Professor, Industrial Relations, Université du Québec en Outaouais, and Adjunct Professor, School of Public Policy and Administration, Carleton University
In 2023, Canada ranked last in access to primary health care among 10 high-income countries. (Shutterstock)
Access to physician services remains a challenge in Canada, particularly in primary care. Though this reality has been often eclipsed by the tariffs issue during the 2025 federal election, it continues to be a fundamental concern for Canadians.
Moreover, public spending on physicians has risen steadily during the first quarter of this century. The two most common proposals to improve access are increasing the number of physicians and/or increasing the payment per service to physicians.
As a health economics researcher, my focus is on health workforce planning and efficiency. Given limited resources and budget constraints, what is the best way for policymakers to improve access to health care: paying our physicians more, or increasing their numbers?
Minding physician spending
Total spending on physicians increased to $47.5 billion in 2023, from $13.2 billion in 2000, growing an average of 5.7 per cent per year (the average annual growth rate, or AAGR). This includes physicians on a fee-for-service (FFS) plan — those who bill for each individual service or procedure they provide to a patient — and those on non-FFS plans, such as salary or capitation (payment per enrolled patient), under which physicians don’t have to bill for each individual service or procedure to get paid.
The key policy question is whether this additional spending was used to buy more services (medical consultations, visits and procedures). It is important to understand if Canada paid more for the same number of medical services or if Canadians are getting more bang for their buck.
Using an accounting approach, this increase in spending can be broken down into increase in number of services, and increase in unit cost of service.
In the 2022-23 fiscal year, physicians provided a total of 359.1 million services versus 263.8 million in 2000 (assuming that physicians on non-FFS plans have similar productivity to those on FFS plans). This translates to an average growth rate of 1.4 per cent per year.
Meanwhile, cost per service increased to $90.42 in 2023 compared to $36.66 in 2000 — an average increase of four per cent per year. This suggests that most of the increase in spending (70 per cent) was used to cover increasing costs per service.
It should be noted that average annual growth in unit cost represents sector-specific inflation. As such, it includes two components: general inflation and a “health premium” defined as inflation above and beyond general inflation. Considering that general inflation for the period (as measured by the CPI-all items) was on average 2.2 per cent per year, growth in inflation-adjusted unit cost for physicians was 1.8 per cent per year. That would be the “health premium” for physicians.
Still, some of the increase in spending was used to buy more services throughout this period. How, then, can the access problem be explained? That’s where one needs to factor in population growth and aging: two demographic factors responsible for increases in the number of services.
During this period spanning over two decades, Canada’s population grew at 1.1 per cent per year; this results in a mere 0.3 per cent growth in number of services per person per year (9.16 in 2023 from 8.65 in 2000).
Because the impact of aging is estimated to be at least 0.8 per cent annually, factoring it into a full demographic adjustment results in a decline of 0.5 per cent per year in the number of services per capita over this period, which helps explain poorer access to medical services in Canada.
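The decomposition above can be reproduced in a few lines of arithmetic. This is a minimal sketch using the figures cited in the article; the `aagr` helper and variable names are my own, and the split is approximate because total spending also includes non-fee-for-service payments.

```python
# Decomposing growth in physician spending, using the figures cited above.
def aagr(start: float, end: float, years: int) -> float:
    """Average annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

YEARS = 23  # 2000 to 2023

spending  = aagr(13.2, 47.5, YEARS)    # total spending ($ billions): ~5.7%/yr
services  = aagr(263.8, 359.1, YEARS)  # services (millions): ~1.4%/yr
unit_cost = aagr(36.66, 90.42, YEARS)  # cost per service ($): ~4.0%/yr

# Roughly 70 per cent of spending growth went to higher unit costs.
price_share = unit_cost / spending

# "Health premium": unit-cost growth above general inflation (CPI ~2.2%/yr).
health_premium = unit_cost - 0.022     # ~1.8%/yr

# Demographic adjustment: per-capita services grew only ~0.25-0.3%/yr;
# subtracting the estimated aging effect (~0.8%/yr) leaves roughly -0.5%/yr.
per_capita = aagr(8.65, 9.16, YEARS)
adjusted   = per_capita - 0.008
```

Running these numbers shows why the headline spending increase translated into so little extra access: most of the growth was absorbed by price rather than volume.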
Does the number of doctors affect the equation?
We are consistently told that the number of physicians has been increasing. In fact, there were 82,184 physicians providing clinical services in 2023, as opposed to 49,281 in 2000, which represents average growth of 2.2 per cent per year.
However, possibly due to shifts in the demographic composition of the workforce and better work-life balance, each of these physicians provides fewer services. For example, the number of services per physician per year in 2023 was 4,370 compared to 5,353 in 2000, a decline of 0.9 per cent per year.
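The workforce figures follow the same growth-rate arithmetic, and they reconcile with the service totals (49,281 × 5,353 ≈ 263.8 million; 82,184 × 4,370 ≈ 359.1 million). A sketch under the same assumptions, with the helper name again my own:

```python
def aagr(start: float, end: float, years: int) -> float:
    """Average annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

YEARS = 23  # 2000 to 2023

physicians    = aagr(49_281, 82_184, YEARS)  # head count: ~+2.2%/yr
per_physician = aagr(5_353, 4_370, YEARS)    # services each: ~-0.9%/yr

# More physicians, each providing fewer services: the two rates combine
# multiplicatively to the ~1.4%/yr growth in total services seen earlier.
total_services = (1 + physicians) * (1 + per_physician) - 1
```

The combination makes the planning point concrete: head-count growth overstates capacity growth whenever output per physician is falling.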
Other sources have reported that the average weekly hours worked by Canadian physicians have declined from about 53 before 2000 to 46 in recent years.
Why access seems more challenging for primary care services
Family physicians are the gatekeepers and first point of contact of the Canadian health-care system. Over the 2000-2023 period, their numbers have increased less than specialists (AAGR of 2.1 per cent and 2.4 per cent respectively). In other words, while in 2000, slightly more than half of physicians were family physicians, in 2023 the situation reversed, and slightly more than half of physicians were specialists.
Nurse practitioners emerged in the primary care setting in the last decade. This workforce grew from 3,768 in 2014 to 8,302 in 2023, increasing by an average of 9.2 per cent per year. Still, they are not enough to fully make up for the deficit.
An important consideration is that family physicians tend to benefit less from medical technological improvement than specialists. A few specific specialties, for example ophthalmology, profit the most from the huge productivity gains in the medical field. They could work fewer hours and still increase the number of services they provide and their income, which family physicians can do to a lesser extent.
In fact, for physicians who received at least $100,000 in fee-for-service payments per year, average gross FFS payments per ophthalmologist have grown almost three times more than that for a family physician between 2013 and 2023.
Implications for decision makers
Simply throwing more money into the system will not be enough to address the primary care access issue. It is important to ensure that additional money buys mostly additional services, in contrast to the pattern of the past two decades shown above.
On the supply side, projections for the number of required physicians will need to account for the reduced number of hours worked. That means that more family physicians are needed just to provide the same number of services, let alone increase it.
On the demand side, the aging population translates into more services used per capita, but also increased severity of cases. The medical workforce itself is also aging, affecting both the supply and the demand sides. Policymakers need to work with institutions involved in physician planning and training, such as the Association of Faculties of Medicine of Canada and the Medical Council of Canada, to ramp up training of family physicians. Extending the training and scope of practice of nurse practitioners would also help.
Finally, the family physician category could be made more attractive by offering a more balanced payment scheme between family physicians and specialists.
Ruolz Ariste does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
It’s getting hard to figure out who all the US-sponsored talks over ending the conflict in Ukraine are supposed to benefit. Listening to Donald Trump over recent weeks, you could be forgiven for thinking it’s all about him.
In the past 48 hours, the US president has berated both the Ukrainian president, Volodymyr Zelensky, and Russia’s Vladimir Putin for apparently dragging their heels over an agreement.
At present it’s Putin who is on the naughty step (although as we know this can change quite rapidly). After Russia launched strikes against Kyiv overnight on Wednesday, killing eight people and injuring dozens more, Trump used his TruthSocial platform to give the Russian president a piece of his mind.
But hours previously, the US president had been giving Zelensky both barrels after he rejected a peace proposal that included the US recognising Crimea as part of Russia. Trump wrote: “It’s inflammatory statements like Zelenskyy’s that makes it so difficult to settle this War. He has nothing to boast about! The situation for Ukraine is dire — He can have Peace or, he can fight for another three years before losing the whole Country.”
For the past week or so, US officials, including the president and his secretary of state, Marco Rubio, have been warning that if a deal isn’t done “in a matter of days” they could just decide to walk away.
It’s hard to see how there is a credible pathway to peace at the moment, write Stefan Wolff and Tetyana Malyarenko, international security experts at the University of Birmingham and the National University Odesa Law Academy, respectively. They point out that even if all sides can agree to a formula for a ceasefire (remembering that Russia couldn’t even hold to the agreed truce over the Easter holiday) then a lasting peace deal that is supposed to follow is even more difficult to imagine.
And, as the abortive attempts to end the war drag on and Russia’s attritional tactics continue, at a massive cost – both economically and in human lives – there are signs that western resolve and unity are coming under pressure. This is partly because many of Ukraine’s allies, particularly in Europe, are already scrambling to work out how they might adjust their own security arrangements in the eventuality of a new world order developing, dominated by the US, China and Russia, in which Washington’s friends find themselves on the outside.
Then there’s the inescapable question of whether Putin can be trusted to hold to any deal he strikes, given the likelihood of the US president’s attention wandering once he has been able to boast of brokering an “end” to the war. As Wolff and Malyarenko put it: “Given Russia’s track record of reneging on the Minsk ceasefire agreements of September 2014 and February 2015, investing everything in a ceasefire deal might turn out not just a self-fulfilling but a self-defeating prophecy for Ukraine and its supporters.”
As Trump 2.0 nears the 100-day mark (more of which next week), it’s worth pausing to ask what the American public thinks about the war in Ukraine. Paul Whiteley of the University of Essex has been looking at polling on the issue over the past six months or so and concludes that the US president looks out of step with the people when it comes to what Whiteley construes as Trump’s apparently Russia-friendly approach. Whiteley quotes a recent Economist/YouGov poll which finds that far more people see Ukraine as an ally than view Russia in the same light.
Meanwhile, a much larger poll taken at the time of the US election last year found that significant numbers of people supported sending humanitarian aid to Ukraine, and only a slightly smaller proportion of respondents backed providing military aid.
American attitudes to policy alternatives for dealing with the Ukraine War:
“A key point is that only 23% said the US should not get involved,” Whiteley concludes. “There is not much support among Americans for abandoning Ukraine.”
Tensions are high between India and Pakistan after at least 26 people were killed in the bitterly contested Kashmir region. The atrocity, in the picturesque resort of Pahalgam, targeted tourists – specifically Hindu men. Victims were told to recite verses from the Qur’an and were killed if they couldn’t.
A hitherto relatively unknown group, the Resistance Front (TRF), has claimed responsibility for the attack. But Sudhir Selvaraj, a specialist in religious nationalism at the University of Bradford, says that TRF is actually associated with, or a front for, the notorious Lashkar-e-Taiba (LeT), which carried out the 2008 Mumbai massacre in which at least 176 people were murdered.
Selvaraj says TRF has deliberately chosen a non-Islamist-sounding name. “By doing so,” he writes, “it supposedly aims to project a ‘neutral’ (read as non-religious) front, rather emphasising the fight for Kashmiri nationalism.”
Coming just as the tourist season is getting under way in Kashmir, the attack has undermined the strategy of the Modi government to portray the region as a major attraction for visitors. Nitasha Kaul, an expert in Hindu nationalism at the University of Westminster, says this is mainly aimed at the Indian public as a propaganda coup to show the success of the 2019 decision to split Kashmir in two and reduce it to the status of a “union territory” run from New Delhi.
In reality, she writes, Kashmiris – especially Kashmiri Muslims – have little say in their own affairs and are vulnerable to reprisals in response to any attacks by Pakistani or Pakistani-backed militants. Kashmir’s chief minister, Omar Abdullah, was actually excluded from security briefings when India’s home minister, Amit Shah, visited Kashmir after the attack.
Meanwhile some of the noisier Hindutva (Hindu nationalist) voices in politics and the media are demanding reprisals against Pakistan. It’s a very dangerous moment, Kaul concludes.
We’ve had some standout stories about the life and times of Jorge Mario Bergoglio, better known to the world’s 1.4 billion Catholics as Pope Francis. We’ve covered his burning ambition to modernise the Catholic church, as well as his achievements in promoting women to more senior church positions than any pontiff before him.
And we’ve considered his influence on the global environmental movement which, as Oxford theologian Celia Deane-Drummond writes, made her feel as if “something momentous was happening at the heart of the church”.
But the anecdote about the late pope which moved me the most was related by Sara Silvestri of City, who recalls meeting Pope Francis back in 2019. It was part of a symposium at the Vatican at which migration, an issue she’d been deeply engaged with in her work, was the central issue for discussion. Silvestri recalls delivering a research paper and then being invited to meet Francis in a room next to the Sistine Chapel.
“Francis made a speech and we greeted him one by one,” she recalled this week. “I had my 21-month-old daughter with me that day, thinking of the rare opportunity we would both enjoy. But I’d underestimated the length of the formalities involved. My daughter screamed ‘Open the doors, let me out!’ through the whole of the pope’s speech. I was distraught, but Francis responded very gently to the disruption.”
Francis, she says, stopped what he was saying and “commented how sweet and lovely it was to hear the voice of a child. I could feel it was not just a platitude – he meant it.”
Source: The Conversation – UK – By Martin Lang, Senior Lecturer and Programme Leader in Fine Art , University of Lincoln
The Turner prize is the world’s most prestigious award for contemporary art. Named after the renowned British painter J.M.W. Turner, the prize used to be a huge media affair. After it relaunched in 1991, it had a full live feature on Channel 4 (back in the day when most people only had four television channels) presented by British art critic Matthew Collings, and the prize was announced over the years by major celebrities, such as Madonna.
Famous for courting controversy, the Turner prize was often featured on the front pages of tabloid newspapers – Tracey Emin’s “unmade bed” being a case in point. In more recent years, the prize has become less controversial and shifted towards more political themes, following trends such as new media and identity politics.
Originally, the prize was limited to a British artist under the age of 50, but the age limit was removed in 2017 to accommodate Lubaina Himid (then 63) who was seen as emblematic of overlooked artists (in particular women of colour).
The prize is organised by the Tate, which appoints a jury to select the shortlist. This year’s panel includes Andrew Bonacina (independent curator), Sam Lackey (director of the Liverpool Biennial), Priyesh Mistry (associate curator of modern and contemporary projects at the National Gallery, London), and Habda Rashid (senior curator of modern and contemporary art at the Fitzwilliam Museum, Cambridge).
The criteria for selection are straightforward: the artist must be based in Britain and have had an outstanding exhibition in the last 12 months. Since this exhibition could take place anywhere in the world, it’s not uncommon for the British public not to have seen it – and that is the case this year. On the 250th anniversary of J.M.W. Turner’s birth, the shortlist for the 2025 Turner prize was announced at Tate Britain, with four artists shortlisted: Nnena Kalu, Rene Matić, Mohammed Sami and Zadie Xa.
Nnena Kalu was selected for her show at Manifesta 15 in Barcelona, supplemented by work at the Walker Art Gallery, Liverpool. Kalu creates colourful cocoon-like hanging sculptures that are wrapped and woven, and respond to the architectural space in which they hang.
Much will be made of Kalu’s identity as a black, learning-disabled, female artist, but this doesn’t really need to come into the assessment of her work, which is really an exploration of colour, gesture and repetition.
Rene Matić was nominated for their show at CCA Berlin. Matić’s work addresses race, gender and class from personal experience, reflecting concerns that are so commonplace in contemporary art that – ironically for one of the youngest-ever Turner Prize nominees – they now seem behind the curve, like a pastiche.
Unlike Kalu, Matić’s installations and photography place identity front and centre, predictably from a personal point of view. This is supposed to make a powerful statement about the intersectionality of modern life, but is hardly an original position today.
Mohammed Sami was nominated for his exhibition at Blenheim Palace, which, while in England, was easily missed by art lovers.
Sami’s paintings depict interiors that evoke memory and loss. His use of shadows and the absence of human presence create a sinister atmosphere, adding depth to his exploration of personal and collective histories and to the genre of the interior.
Zadie Xa was nominated for her show at the Sharjah Biennial 16. Xa’s interdisciplinary approach combines sound, textiles and mural painting to delve into her Korean heritage, including themes like shamanism.
Her work pushes the boundaries of painting, integrating it with other media – such as sound, textiles and murals – to create immersive experiences.
This year’s Turner prize is notable for including painting for the first time since before the pandemic – perhaps a nod to Turner himself in this anniversary year. Sami’s oils on canvas contrast with Xa’s interdisciplinary methods, highlighting the diversity of contemporary art practices. Kalu and Matić provide installations, photography and text art, diversifying the shortlist in terms of medium.
The four shortlisted artists will be exhibited together at Cartwright Hall Art Gallery, Bradford in September, and the winner will be announced on December 9. While the line-up is stronger than those of recent years, it is still somewhat predictable and lacks the excitement and controversy of years gone by.
Mohammed Sami is by far the best artist on the shortlist and is already emerging as a clear favourite to win. Although the 2017 winner Lubaina Himid’s work included elements of painting, if Sami does win, he would be the first painter to win the prize since Tomma Abts in 2006.
Martin Lang does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
In the sweltering summer of AD18, a desperate chant echoed across China’s sun-scorched plains: “Heaven has gone blind!” Thousands of starving farmers, their faces smeared with ox blood, marched toward the opulent vaults held by the Han dynasty’s elite rulers.
As recorded in the ancient text Han Shu (the Book of Han), these farmers’ calloused hands held bamboo scrolls – ancient “tweets” accusing the bureaucrats of hoarding grain while the farmers’ children gnawed tree bark. The rebellion’s firebrand warlord leader, Fan Chong, roared: “Drain the paddies!”
Within weeks, the Red Eyebrows, as the protesters became known, had toppled local regimes, raided granaries and – for a fleeting moment – shattered the empire’s rigid hierarchy.
The Han dynasty of China (202BC-AD220) was one of the most developed civilisations of its time, alongside the Roman empire. Its development of cheaper and sharper iron ploughs enabled the gathering of unprecedented harvests of grain.
But instead of uplifting the farmers, this technological revolution gave rise to agrarian oligarchs who hired ever-more officials to govern their expanding empire. Soon, bureaucrats earned 30 times more than those tilling the soil.
And when droughts struck, the farmers and their families starved while the empire’s elites maintained their opulence. As a famous poem from the subsequent Tang dynasty put it: “While meat and wine go to waste behind vermilion gates, the bones of the frozen dead lie by the roadside.”
Two millennia later, the role of technology in increasing inequality around the world remains a major political and societal issue. AI-driven “technology panic” – exacerbated by the disruptive efforts of Donald Trump’s new administration in the US – gives the feeling that everything has been upended. New tech is destroying old certainties; populist revolt is shredding the political consensus.
And yet, as we stand at the edge of this technological cliff, seemingly peering into a future of AI-induced job apocalypses, history whispers: “Calm down. You’ve been here before.”
The link between technology and inequality
Technology is humanity’s cheat code to break free from scarcity. The Han dynasty’s iron plough didn’t just till soil; it doubled crop yields, enriching landlords and swelling tax coffers for emperors while – initially, at least – leaving peasants further behind. Similarly, Britain’s steam engine didn’t just spin cotton; it built coal barons and factory slums. Today, AI isn’t just automating tasks; it’s creating trillion-dollar tech fiefdoms while destroying myriads of routine jobs.
Technology amplifies productivity by doing more with less. Over centuries, these gains compound, raising economic output and increasing incomes and lifespans. But each innovation reshapes who holds power, who gets rich – and who gets left behind.
As the Austrian economist Joseph Schumpeter warned during the second world war, technological progress is never a benign rising tide that lifts all boats. It’s more like a tsunami that drowns some and deposits others on golden shores, amid a process he called “creative destruction”.
A decade later, Russian-born US economist Simon Kuznets proposed his “inverted-U of inequality”, the Kuznets curve. For decades, this offered a reassuring narrative for citizens of democratic nations seeking greater fairness: inequality was an inevitable – but temporary – price of technological progress and the economic growth that comes with it.
In recent years, however, this analysis has been sharply questioned. Most notably, French economist Thomas Piketty, in a reappraisal of more than three centuries of data, argued in 2013 that Kuznets had been misled by historical fluke. The postwar fall in inequality he had observed was not a general law of capitalism, but a product of exceptional events: two world wars, economic depression, and massive political reforms.
In normal times, Piketty warned, the forces of capitalism will always tend to make the rich richer, pushing inequality ever higher unless checked by aggressive redistribution.
So, who’s correct? And where does this leave us as we ponder the future in this latest, AI-driven industrial revolution? In fact, both Kuznets and Piketty were working off quite narrow timeframes in modern human history. Another country, China, offers the chance to chart patterns of growth and inequality over a much longer period – due to its historical continuity, cultural stability, and ethnic uniformity.
The Insights section is committed to high-quality longform journalism. Our editors work with academics from many different backgrounds who are tackling a wide range of societal and scientific challenges.
Unlike other ancient civilisations such as the Egyptians and Mayans, China has maintained a unified identity and unique language for more than 5,000 years, allowing modern scholars to trace thousand-year-old economic records. So, with colleagues Qiang Wu and Guangyu Tong, I set out to reconcile the ideas of Kuznets and Piketty by studying technological growth and wage inequality in imperial China over 2,000 years – back beyond the birth of Jesus.
To do this, we scoured China’s extraordinarily detailed dynastic archives, including the Book of Han (AD111) and Tang Huiyao (AD961), in which meticulous scribes recorded the salaries of different ranking officials. And here is what we learned about the forces – good and bad, corrupt and selfless – that most influenced the rise and fall of inequality in China over the past two millennia.
Chinese dynasties and their most influential technologies:
Black text denotes historical events in the west; grey text denotes important interactions between China and the west. Peng Zhou, CC BY-NC-SA
China’s cycles of growth and inequality
One of the challenges of assessing wage inequality over thousands of years is that people were paid different things at different times – such as grain, silk, silver and even labourers.
The Book of Han records that “a governor’s annual grain salary could fill 20 oxcarts”. Another entry describes how a mid-ranking Han official’s salary included ten servants tasked solely with polishing his ceremonial armour. Ming dynasty officials had their meagre wages supplemented with gifts of silver, while Qing elites hid their wealth in land deals.
To enable comparison over two millennia, we invented a “rice standard” – akin to the gold standard that was the basis of the international monetary system for a century from the 1870s. Rice is not just a staple of Chinese diets, it has been a stable measure of economic life for thousands of years.
While rice’s dominion began around 7,000BC in the Yangtze river’s fertile marshes, it was not until the Han dynasty that it became the soul of Chinese life. Farmers prayed to the “Divine Farmer” for bountiful harvests, and emperors performed elaborate ploughing rituals to ensure cosmic harmony. A Tang dynasty proverb warned: “No rice in the bowl, bones in the soil.”
Using price records, we converted every recorded salary – whether paid in silk, silver, rent or servants – into its rice equivalent. We could then compare the “real rice wages” of two categories of people we called either “officials” or “peasants” (including farmers), as a way of tracking levels of inequality over the two millennia since the start of the Han dynasty in 202BC. This chart shows how real-wage inequality in China rose and fell over the past 2,000 years, according to our rice-based analysis.
Official-peasant wage ratio in imperial China over 2,000 years:
The ratio describes the multiple by which the ‘real rice wage’ of the average ‘official’ exceeds that of the average ‘peasant’, giving an indication of changing inequality levels over two millennia. Peng Zhou, CC BY-SA
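The conversion behind this ratio can be sketched in a few lines of Python. Everything below – the goods, the rice prices and the salary figures – is invented purely for illustration; the study’s actual price records and wage data differ.

```python
# Illustrative sketch of a "rice standard" conversion: express salaries
# paid in mixed goods as a rice equivalent, then compute the
# official-to-peasant real-wage ratio used as an inequality measure.
# All numbers below are hypothetical, not taken from the study.

# Hypothetical price records for one period: dan of rice per unit of good.
rice_price_of = {
    "rice": 1.0,       # rice is the numeraire
    "silk": 12.0,      # 1 bolt of silk ~ 12 dan of rice (assumed)
    "silver": 20.0,    # 1 tael of silver ~ 20 dan of rice (assumed)
    "servant": 300.0,  # 1 servant-year ~ 300 dan of rice (assumed)
}

def rice_wage(salary: dict) -> float:
    """Convert a salary paid in mixed goods into its rice equivalent."""
    return sum(qty * rice_price_of[good] for good, qty in salary.items())

# Hypothetical annual salaries for one dynasty-period.
official = {"rice": 600, "silk": 30, "servant": 2}
peasant = {"rice": 40}

ratio = rice_wage(official) / rice_wage(peasant)
print(f"official-peasant wage ratio: {ratio:.1f}")  # → 39.0
```

Applying the same conversion to each period’s recorded salaries and prices yields a single comparable series, which is what allows inequality in, say, the Han and the Qing to be plotted on one chart.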
The chart’s black line describes a tug-of-war between growth and inequality over the past two millennia. We found that, across each major dynasty, there were four key factors driving levels of inequality in China: technology (T), institutions (I), politics (P) and social norms (S). These factors played out in the following cycle with remarkable regularity.
1. Technology triggers an explosion of growth and inequality
During the Han dynasty, new iron-working techniques led to better ploughs and irrigation tools. Harvests boomed, enabling the Chinese empire to balloon in both territory and population. But this bounty mostly went to those at the top of society. Landlords grabbed fields, bureaucrats gained privileges, while ordinary farmers saw precious little reward. The empire grew richer – but so did the gap between high officials and the peasant majority.
Even when the Han fell around AD220, the rise of wage inequality was barely interrupted. By the time of the Tang dynasty (AD618–907), China was enjoying a golden age. Silk Road trade flourished as two more technological leaps had a profound impact on the country’s fortunes: block printing and refined steelmaking.
Block printing enabled the mass production of books – Buddhist texts, imperial exam guides, poetry anthologies – at unprecedented speed and scale. This helped spread literacy and standardise administration, as well as sparking a bustling market in bookselling.
Meanwhile, refined steelmaking boosted everything from agricultural tools to weaponry and architectural hardware, lowering costs and raising productivity. With a more literate populace and an abundance of stronger metal goods, China’s economy hit new heights. Chang’an, then China’s cosmopolitan capital, boasted exotic markets, lavish temples, and a swirl of foreign merchants enjoying the Tang dynasty’s prosperity.
While the Tang dynasty marked the high-water mark for levels of inequality in Chinese history, subsequent dynasties would continue to wrestle with the same core dilemma: how do you reap the benefits of growth without allowing an overly privileged – and increasingly corrupt – bureaucratic class to push everyone else into peril?
2. Institutions slow the rise of inequality
Throughout the two millennia, some institutions played an important role in stabilising the empire after each burst of growth. For example, to alleviate tensions between emperors, officials and peasants, imperial exams known as “Ke Ju” were introduced during the Sui dynasty (AD581-618). And by the time of the Song dynasty (AD960-1279) that followed the demise of the Tang, these exams played a dominant role in society.
They addressed high levels of inequality by promoting social mobility: ordinary civilians were granted greater opportunities to ascend the income ladder by achieving top marks. This induced greater competition among officials – and strengthened emperors’ authority over them in the later dynasties. As a result, both the wages of officials and wage inequality went down as their bargaining power gradually diminished.
However, the rise of each new dynasty was also marked by a growth of bureaucracy that led to inefficiencies, favouritism and bribery. Over time, corrupt practices took root, eroding trust in officialdom and heightening wage inequality as many officials commanded informal fees or outright bribes to sustain their lifestyles.
As a result, while the emergence of certain institutions was able to put a brake on rising inequality, it typically took another powerful – and sometimes highly destructive – factor to start reducing it.
3. Political infighting and external wars reduce inequality
Eventually, the rampant rise in inequality seen in almost every major Chinese dynasty bred deep tensions – not only between the upper and lower classes, but even between the emperor and their officials.
These tensions were heightened by the pressures of external conflict, as each dynasty waged wars in pursuit of further growth. The Tang’s three-century rule featured conflicts such as the Eastern Turkic-Tang war (AD626), the Baekje-Goguryeo-Silla war (666) and the Arab-Tang battle of Talas (751).
The resulting demand for more military spending drained imperial coffers, forcing salary cuts for soldiers and tax hikes on the peasants – breeding resentment among both that sometimes led to popular uprisings. In a desperate bid for survival, the imperial court then slashed officials’ pay and stripped away their bureaucratic perks.
The result? Inequality plummeted during these times of war and rebellion – but so did stability. Famine was rife, frontier garrisons mutinied, and for decades, warlords carved out territories while the imperial centre floundered.
So, this shrinking wage gap cannot be said to have resulted in a happier, more stable society. Rather, it reflected the fact that everyone – rich and poor – was worse off in the chaos. During the final imperial dynasty, the Qing (from the mid-17th century), real-terms GDP per person was dropping to levels that had last been seen at the start of the Han dynasty, 2,000 years earlier.
4. Social norms emphasise harmony, preserve privilege
One other common factor influencing the rise and fall of inequality across China’s dynasties was the shared rules and expectations that developed within each society.
A striking example is the social norms rooted in the philosophy of Neo-Confucianism, which emerged in the Song dynasty at the end of the first millennium – a period sometimes described as China’s version of the Renaissance. It blended the moral philosophy of classical Confucianism – created by the philosopher and political theorist Confucius during the Zhou dynasty (1046-256BC) – with metaphysical elements drawn from both Buddhism and Daoism.
Neo-Confucianism emphasised social harmony, hierarchical order and personal virtue – values that reinforced imperial authority and bureaucratic discipline. Unsurprisingly, it quickly gained the support of emperors keen to ensure control of their people, and became the mainstream school of thought in the Ming and Qing dynasties.
However, Neo-Confucianist thinking proved a double-edged sword. Local gentry hijacked this moral authority to fortify their own power. Clan leaders set up Confucian schools and performed elaborate ancestral rites, projecting themselves as guardians of tradition.
Over time, these social norms became rigid. What had once fostered order and legitimacy became brittle dogma, more useful for preserving privilege than guiding reform. Neo-Confucian ideals evolved into a protective veil for entrenched elites. When the weight of crisis eventually came, they offered little resilience.
The last dynasty
China’s final imperial dynasty, the Qing, collapsed under the weight of multiple uprisings both from within and without. Despite achieving impressive economic growth during the 18th century – fuelled by agricultural innovation, a population boom, and the roaring global trade in tea and porcelain – levels of inequality exploded, in part due to widespread corruption.
The infamous government official Heshen, widely regarded as the most corrupt figure in the Qing dynasty, amassed a personal fortune reckoned to exceed the empire’s entire annual revenue (one estimate suggests he amassed 1.1 billion taels of silver, equivalent to around US$270 billion (£200bn), during his lucrative career).
Imperial institutions failed to restrain the inequality and moral decay that the Qing’s growth had initially masked. The mechanisms that once spurred prosperity – technological advances, centralised bureaucracy and Confucian moral authority – eventually ossified, serving entrenched power rather than adaptive reform.
When shocks like natural disasters and foreign invasions struck, the system could no longer respond. The collapse of the empire became inevitable – and this time there was no groundbreaking technology to enable a new dynasty to take the Qing’s place. Nor were there fresh social ideals or revitalised institutions capable of rebooting the imperial model. As foreign powers surged ahead with their own technological breakthroughs, China’s imperial system collapsed under its own weight. The age of emperors was over.
The world had turned. As China embarked on two centuries of technological and economic stagnation – and political humiliation at the hands of Great Britain and Japan – other nations, led first by Britain and then the US, would step up to build global empires on the back of new technological leaps.
In these modern empires, we see the same four key influences on their cycles of growth and inequality – technology, institutions, politics and social norms – but playing out at an ever-faster rate. As the saying goes: history does not repeat itself, but it often rhymes.
Rule Britannia
If imperial China’s inequality saga was written in rice and rebellions, Britain’s industrial revolution featured steam and strikes. In Lancashire’s “satanic mills”, steam engines and mechanised looms created industrialists so rich that their fortunes dwarfed small nations.
In 1835, social observer Andrew Ure enthused: “Machinery is the grand agent of civilisation.” Yet for many decades, the steam engines, spinning jennies and railways disproportionately enriched the new industrial class, just as in the Han dynasty of China 2,000 years earlier. The workers? They inhaled soot, lived in slums – and staged Europe’s first symbolic protest when the Luddites began smashing their looms in 1811.
During the 19th century, Britain’s richest 1% hoarded as much as 70% of the nation’s wealth, while labourers toiled 16-hour days in mills. In cities like Manchester, child workers earned pennies while industrialists built palaces.
But as inequality peaked in Britain, the backlash brewed. Trade unions formed (and became legal in 1824) to demand fair wages. Reforms such as the Factory Acts (1833–1878) banned child labour and capped working hours.
Although government forces intervened to suppress the uprisings, unrest such as the 1830 Swing Riots and 1842 General Strike exposed deep social and economic inequalities. By 1900, child labour was banned and pensions had been introduced. The 1900 Labour Representation Committee (later the Labour Party) vowed to “promote legislation in the direct interests of labour” – a striking echo of how China’s imperial exams had attempted to open paths to power.
Slowly, the working class saw some improvement: real wages for Britain’s poorest workers gradually increased over the latter half of the 19th century, as mass production lowered the cost of goods and expanding factory employment provided a more stable livelihood than subsistence farming.
And then, two world wars flattened Britain’s elite – the Blitz didn’t discriminate between rich and poor neighbourhoods. When peace finally returned, the Beveridge Report gave rise to the welfare state: the NHS, social housing, and pensions.
Income inequality plummeted as a result. The top 1%’s share fell from 70% to 15% by 1979. While China’s inequality fell via dynastic collapse, Britain’s decline resulted from war-driven destruction, progressive taxation, and expansive social reforms.
Wealth share of top 1% in the UK
Evidence for UK inequality before 1895 is not well documented; dotted curve is conjectured based on Kuznets curve. Sources: Alvaredo et al (2018), World Inequality Database. Peng Zhou, CC BY-SA
However, from the 1980s onwards, inequality in Britain began to rise again. This new cycle has coincided with another technological revolution: the emergence of personal computers and information technology – innovations that fundamentally transformed how wealth was created and distributed.
The era was accelerated by deregulation, deindustrialisation and privatisation – policies associated with former prime minister Margaret Thatcher that favoured capital over labour. Trade unions were weakened, income taxes on the highest earners were slashed, and financial markets were unleashed. Today, the richest 1% of UK adults own more than 20% of the country’s total wealth.
The UK now appears to be in the worst of both worlds – wrestling with low growth and rising inequality. Yet renewal is still within reach. The current UK government’s pledge to streamline regulation and harness AI could spark fresh growth – provided it is coupled with serious investment in skills, modern infrastructure, and inclusive institutions geared to benefit all workers.
At the same time, history reminds us that technology is a lever, not a panacea. Sustained prosperity comes only when institutional reform and social attitudes evolve in step with innovation.
The American century
While China’s growth-and-inequality cycles unfolded over millennia and Britain’s over centuries, America’s story is a fast-forward drama of cycles lasting mere decades. In the early 20th century, several waves of new technology widened the gap between rich and poor dramatically.
By 1929, as the world teetered on the edge of the Great Depression, John D. Rockefeller had amassed such a vast fortune – valued at roughly 1.5% of America’s entire GDP – that newspapers hailed him the world’s first billionaire. His wealth stemmed largely from pioneering petroleum and petrochemical ventures including Standard Oil, which dominated oil refining in an age when cars and mechanised transport were exploding in popularity.
Yet this period of unprecedented riches for a handful of magnates coincided with severe imbalances in the broader US economy. The “roaring Twenties” had boosted consumerism and stock speculation, but wage growth for many workers lagged behind skyrocketing corporate profits. By 1929, the top 1% of Americans received more than a third of the nation’s income, creating a precariously narrow base of prosperity.
When the US stock market crashed in October 1929, it laid bare how vulnerable the system was to the fortunes of a tiny elite. Millions of everyday Americans – living without adequate savings or safeguards – faced immediate hardship, ushering in the Great Depression. Breadlines snaked through city streets, and banks collapsed under waves of withdrawals they could not meet.
In response, President Franklin D. Roosevelt’s New Deal reshaped American institutions. It introduced unemployment insurance, minimum wages and public works programmes to support struggling workers, while progressive taxation – with top rates exceeding 90% during the second world war – curbed the fortunes of the very richest. Roosevelt declared: “The test of our progress is not whether we add more to the abundance of those who have much – it is whether we provide enough for those who have too little.”
In a different way to the UK, the second world war proved a great leveller for the US – generating millions of jobs and drawing women and minorities into industries they’d long been excluded from. After 1945, the GI Bill expanded education and home ownership for veterans, helping to build a robust middle class. Although access remained unequal, especially along racial lines, the era marked a shift toward the norm that prosperity should be shared.
Meanwhile, grassroots movements led by figures like Martin Luther King Jr. reshaped social norms about justice. In his lesser-quoted speeches, King warned that “a dream deferred is a dream denied” and launched the Poor People’s Campaign, which demanded jobs, healthcare and housing for all Americans. This narrowing of income distribution during the post-war era was dubbed the “Great Compression” – but it did not last.
As the oil crises of the 1970s marked the end of the preceding cycle of inequality, another cycle began with the full-scale emergence of the third industrial revolution, powered by computers, digital networks and information technology.
As digitalisation transformed business models and labour markets, wealth flowed to those who owned the algorithms, patents and platforms – not those operating the machines. Hi-tech entrepreneurs and Wall Street financiers became the new oligarchs. Stock options replaced salaries as the true measure of success, and companies increasingly rewarded capital over labour.
By the 2000s, the wealth share of the richest 1% climbed to 30% in the US. The gap between the elite minority and working majority widened with every company stock market launch, hedge fund bonus and quarterly report tailored to shareholder returns.
But this wasn’t just a market phenomenon – it was institutionally engineered. The 1980s ushered in the age of (Ronald) Reaganomics, driven by the conviction that “government is not the solution to our problem; government is the problem”. Following this neoliberal philosophy, taxes on high incomes were slashed, capital gains were shielded, and labour unions were weakened.
Deregulation gave Wall Street free rein to innovate and speculate, while public investment in housing, healthcare and education was curtailed. The consequences came to a head in 2008 when the US housing market collapsed and the financial system imploded.
The Global Financial Crisis that followed exposed the fragility of a deregulated economy built on credit bubbles and concentrated risk. Millions of people lost their homes and jobs, while banks were rescued with public money. It marked an economic rupture and a moral reckoning – proof that decades of pro-market policies had produced a system that privatised gain and socialised loss.
Inequality, long growing in the background, now became a glaring, undeniable fault line in American life – and it has remained that way ever since.
Fig 5. Wealth share and income share of top 1% in the US
Sources: wealth inequality: World Inequality Database; income share: Piketty & Saez (2003). Dotted curves are conjectured based on Kuznets curve. Peng Zhou, CC BY-SA
So is the US proof that the Kuznets model of inequality is indeed wrong? While the chart above shows inequality has flattened in the US since the 2008 financial crisis, there is little evidence of it actually declining. And in the short term, while Donald Trump’s tariffs are unlikely to do much for growth in the US, his low-tax policies won’t do anything to raise working-class incomes either.
The story of “the American century” is a dizzying sequence of technological revolutions – from transport and manufacturing to the internet and now AI – crashing one atop the other before institutions, politics or social norms could catch up. In my view, the result is not a broken cycle but an interrupted one. Like a wheel that never completes its turn, inequality rises, reform stutters – and a new wave of disruption begins.
Our unequal AI future?
Like any technological explosion, AI’s potential is dual-edged. Like the Tang dynasty’s bureaucrats hoarding grain, today’s tech giants monopolise data, algorithms and computing power. The management consultancy McKinsey has predicted that algorithms could automate 30% of jobs by 2030, from lorry drivers to radiologists.
The rise of AI isn’t just a technological revolution – it’s a political battleground. History’s empires collapsed when elites hoarded power; today’s fight over AI mirrors the same stakes. Will it become a tool for collective uplift like Britain’s post-war welfare state? Or a weapon of control akin to Han China’s grain-hoarding bureaucrats?
The answer hinges on who wins these political battles. In 19th-century Britain, factory owners bribed MPs to block child labour laws. Today, Big Tech spends billions lobbying to neuter AI regulation.
Meanwhile, grassroots movements like the Algorithmic Justice League demand bans on facial recognition in policing, echoing the Luddites who smashed looms not out of technophobia but to protest exploitation. The question is not if AI will be regulated but who will write the rules: corporate lobbyists or citizen coalitions.
The real threat has never been the technology itself, but the concentration of its spoils. When elites hoard tech-driven wealth, social fault-lines crack wide open – as happened more than 2,000 years ago when the Red Eyebrows marched against Han China’s agricultural monopolies.
To be human is to grow – and to innovate. Technological progress raises inequality faster than incomes, but the response depends on how people band together. Initiatives like “Responsible AI” and “Data for All” reframe digital ethics as a civil right, much like Occupy Wall Street exposed wealth gaps. Even memes – like TikTok skits mocking ChatGPT’s biases – shape public sentiment.
There is no simple path between growth and inequality. But history shows our AI future isn’t preordained in code: it’s written, as always, by us.
Peng Zhou does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By M. Sudhir Selvaraj, Assistant Professor, Peace Studies and International Development, University of Bradford
India is in mourning after 26 tourists were killed on April 22 in a resort in picturesque Pahalgam. The massacre is considered to be the deadliest attack on tourists in Indian-administered Kashmir since 2000.
The attack happened during peak tourist season as thousands flocked to the popular tourist destination. Most of those killed were Indians, with the exception of one Nepalese national. All the victims were men.
Pakistan has denied any involvement, but there are serious fears of escalation between the two nuclear powers. India’s defence minister, Rajnath Singh, openly accused Pakistan and threatened: “We will not only target those who carried out the attack. We will also target those who planned this act in the shadows, on our soil.”
India has shut a key border between the countries, expelled Pakistan’s diplomats and suspended the landmark Indus Waters Treaty, which governs the sharing of water between the two countries.
The timing of these attacks is noteworthy as it coincides with major international and domestic events. The US vice-president, J.D. Vance, had arrived the day before with his Indian-American wife Usha and their three children, seeking closer India-US relations against the backdrop of a burgeoning trade war between the US and China. Notably, Pakistan considers China historically as an all-weather friend and ally.
The attack also comes a few weeks after the Indian government passed the Waqf (Amendment) Act which seeks to change how properties worth billions donated by Muslims, including mosques, madrassas, graveyards and orphanages, are governed. This act is also accused of diluting the rights of India’s Muslim communities by permitting the appointment of non-Muslims to their boards and tribunals.
Resistance Front
The Resistance Front (TRF) has claimed responsibility for the attack. A hitherto lesser-known armed group in the Kashmir region, TRF emerged in 2019 with the aim of fighting for Kashmir’s secession from India. In 2023, it was designated as a terrorist organisation by the Indian government under the Unlawful Activities (Prevention) Act (UAPA), and the group’s founder, Sheikh Sajjad Gul, was declared a terrorist.
TRF was formed largely in response to the Indian government’s move to strip Kashmir (India’s erstwhile only Muslim-majority state) of its semi-autonomous status in 2019. At this point, the Modi government split Kashmir into two union territories – Jammu & Kashmir, and Ladakh – and brought them under more direct federal control.
The move also paved the way for the extension of land-owning rights and access to government-sponsored job quotas to non-locals. These changes could deprive locals of much-needed opportunities, and radically alter the demographics of the region.
In a message on messaging app Telegram, the group said: “Consequently, violence will be directed toward those attempting to settle illegally.” This tends to support the idea that the influx of “outsiders” was the justification for the attack.
In its short life, TRF has been responsible for numerous attacks targeting civilians, security forces and politicians in the region. The group took shape using social media and continues to rely on it to organise and recruit members.
Notably, the name TRF breaks from those of traditional rebel groups operating in the region, most of which bear Islamic names. By doing so, it supposedly aims to project a “neutral” (read as non-religious) front, instead emphasising the fight for Kashmiri nationalism.
Was Pakistan involved?
The group is also reported to be linked to the Pakistani spy agency, Inter-Services Intelligence (ISI). Pakistan has denied these links. But analysts fear that any retaliation could escalate and threaten the tenuous peace along the border between the two countries.
Importantly, the TRF is believed to be an offshoot of – or perhaps simply a front for – the Lashkar-e-Taiba (LeT), a Pakistan-based armed group. The LeT has been involved in many terrorist attacks on Indian soil, most significantly the 2008 Mumbai terrorist attacks, in which an estimated 176 people were killed. Many – including the US government – believe the perpetrators of that atrocity had help from the ISI.
While not explicitly stated as a link to the Pahalgam attack, it is noteworthy that the suspected mastermind of the Mumbai attacks, Tahawwur Rana, a Pakistan-born Canadian citizen, was extradited to India from the US on April 10. The US Embassy in New Delhi has confirmed that Rana will stand trial in India on ten criminal charges.
In contrast to the ostensibly “neutral”, non-Islamist TRF, the LeT (which translates as Army of the Righteous/Pure) is a Sunni terrorist group. Its aim is to establish an Islamic state in south Asia and parts of central Asia – with Kashmir being integral to its plans.
To achieve this, since its formation in the early 1990s, the group’s focus has been on attacking military and civilian targets in Kashmir, supporting Pakistan’s claim to the region.
In the late 1990s, the then US president, Bill Clinton, described south Asia as the most dangerous place on Earth. Given the chance of a rapidly escalating India-Pakistan standoff, this could well be the case once again.
M. Sudhir Selvaraj does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Lauren Bridgstock, Research Associate, Healthcare Communication, Faculty of Health and Education, School of Nursing and Public Health, Manchester Metropolitan University
In the emotionally complex world of dementia care, communication is more than just what we say – it’s how we say it. Terms of endearment like “darling”, “my lovely” and “sweetheart” are often used by healthcare staff with the best intentions: to comfort, connect and show warmth. But to some people, these terms may sound patronising.
In my research, I focused on the use of elderspeak – a style of speech often directed at older adults. It typically involves a higher-pitched tone, simplified grammar and sentence structure and the use of terms of endearment.
Some people compare elderspeak to the way someone might speak to a young child, which is why it’s often viewed as patronising. Terms of endearment – like love, sweetheart, or darling – are particularly controversial and frequently debated in healthcare settings.
Yet, despite these concerns – and despite healthcare professionals being discouraged from using terms of endearment during their training – my data showed that experienced healthcare professionals were using these terms regularly, suggesting that they might actually serve a valuable purpose in communication. When I closely analysed a range of real-life hospital interactions where terms of endearment were used, that’s exactly what I found. Three key themes emerged from the data.
1. Mirroring
First, healthcare professionals weren’t the only people using these terms. Terms of endearment were used responsively – so both patients with dementia and staff used them, reflecting or mirroring each other’s language.
This resulted in positive interactions – for example, a patient saying “OK duck” when a doctor asked if they could sit the hospital bed up higher, and the doctor responding with “all right mate”. Examples like this show that terms of endearment can help build rapport and trust between staff and patients.
2. Signposting
Second, terms of endearment were used at the beginning and end of conversations between staff and patients. In this case, terms of endearment were helpful for signposting and giving information about context to patients. Previous work has shown that people living with dementia can struggle with recognising cues in conversation. So, a term of endearment could help to signal that a conversation is coming to an end, such as a nurse saying: “Alright darling, it’s lovely to speak to you.”
This is not surprising since people use terms of endearment to signal the end of conversations in many social settings. For example, in a shop, a cashier might say “Thanks very much, love!” to signal the end of the transaction.
Terms of endearment were also used regularly when conversations began, signalling that the healthcare professional who has come to speak to the person with dementia is someone familiar or friendly. In this case, though, the healthcare professional needs to exercise caution, depending on the context and on whether they are actually familiar to the patient.
For example, one doctor opened a conversation with: “Hello my dear, you haven’t seen me for a while, have you?” The conversation continued with no issue. Another doctor used a very similar opening of: “Hi darling, I’m Ethan I’m the doctor for today.” In this case some conversational trouble followed. The difference here is that in the first example the doctor’s words demonstrate he has met the patient before. In the second, the words show they are unfamiliar.
3. Mitigation
A third way terms of endearment are used is to mitigate or minimise an imposition on a patient. Examples of this are:
• When a healthcare professional asks a patient to repeat something if their words were hard to interpret or unclear. For example: “What my lovely? Say that again.”
• When a healthcare professional is giving an instruction during a healthcare task. For instance: “Just bend this knee my love.”
• When a healthcare professional is responding to a patient expressing unease or discomfort – often when an unpleasant but medically necessary task is occurring, such as a blood test. For instance: “I won’t be a second darling.”
In these cases, the terms of endearment work to soften whatever the healthcare professional is doing. This can help to save face – avoid or reduce embarrassment on the part of the patient – particularly in cases where the healthcare professional has to ask them to repeat a comment or question. It can also aid in minimising whatever the professional is doing – similar to saying “We’re just going to do xyz” rather than “We are going to do xyz.” Terms of endearment also acknowledge the sensitivity of the healthcare situation.
While there were many examples of terms of endearment being used successfully in healthcare settings, they are not a magic bullet that can improve every situation. There were a couple of examples in the data of patients rejecting terms of endearment. In both cases, patients were particularly distressed about the healthcare activity at hand – a painful injection, for example.
In these cases, the terms of endearment were not enough to excuse the action that the healthcare professional was trying to do. This is therefore an example of where context and sensitivity to the individual situation are important.
Lauren Bridgstock received funding from an ESRC Midlands Graduate School DTP collaborative PhD studentship between the University of Nottingham and Nottingham University Hospitals NHS Trust (ES/P000711/1). The data discussed in this article were collected as part of the NIHR funded VOICE (13/114/93) and VOICE2 (NIHR134221) research projects. The views expressed in this article are those of the author and not necessarily those of the ESRC, NIHR or the Department of Health and Social Care.
My ongoing PhD research is analysing masculinity, class and nationalism and exploring how these narratives appear in the everyday lives of men. I argue that responding to the harm that stems from these online communities requires an understanding of the manosphere as a product of a global, neoliberal, capitalist system built on inequality and division.
Neoliberalism can be described as “capitalism on steroids”. It’s a hyper-individualistic and market-driven ideology that fosters a culture of competition.
Neoliberalism encourages us to see ourselves as isolated individuals, responsible for our own success or failure. Among its many outcomes, research has shown, is a profound loneliness. This is something that the manosphere exploits.
Role models are important, but the disconnect felt by so many today won’t be fixed by better role models within the same system. For example, black feminist thought, which recognises the way racism and sexism intersect and can offer extensive structural critiques, shows us that efforts to end violence against women must take place alongside work to change wider systems. So to start preventing violence we must first deal with root causes, such as poverty and inequality.
Measuring people by ‘value’
The manosphere picks up on messages around failing. Alongside hate-filled and misogynistic content, shame-based narratives from the manosphere suggest that boys and men are losers, weak and lazy if they aren’t “succeeding”. This is deeply damaging to all who find themselves drawn to such messages.
The concept of self-worth regularly appears in the manosphere, but it’s largely in relation to wealth or productivity: hustle harder, rise and grind, make money. These ideas don’t just exist in these online spaces. Similar language – self-investment, output, productivity, personal growth, efficiency – has become part of our everyday way of talking about ourselves and others.
The wellness industry promises us we can “glow up”. Self-help books and hustle culture encourage us to be better and produce more. Lifestyle influencers demonstrate how to turn our everyday existence into a marketable product.
This way of thinking turns people into products. It’s not about who you are – it’s about what you produce. Today’s far-right (of which the manosphere is part) capitalises on these ideas and the obsession with economic value.
There are versions of this aimed at women and girls, such as “cleanfluencers”, who reframe housework not only as a consumable personal brand but also as glamorous and fun.
But the hustle culture messaging central to the manosphere is particularly distinct in its hypermasculine messaging centred on “self-improvement” which advocates working harder and longer while being ruthless and dominant.
A focus on domination and individual success encourages young boys and men to see their self-worth tied up in that and that alone. This message extends beyond the manosphere and is part of the very system within which we all exist.
Resisting the system
Those captivated by manosphere narratives are victims as well as perpetrators. This doesn’t excuse their actions, or mean they shouldn’t be held accountable. How we care for each other within a capitalist society isn’t easy or straightforward.
Too often, though, discussion focuses solely on punitive responses, such as advocating for longer prison sentences. If we only focus on punishment, we miss the bigger picture. We need to find more inclusive ways of talking about, and responding to, harm – while rethinking what it means to truly care for each other.
Abolitionist movements strive to create systems which improve people’s health and safety and build a future without prisons. They seek to build responses to harm that are founded on education and community accountability – where communities take responsibility for identifying issues they need to address.
Abolitionist approaches advocate for expanding support networks and investing in resources deemed appropriate by survivors. Proposals like this work towards preventing violence. Their community focus means they address the isolating effects of neoliberalism at the same time.
We also can’t convince ourselves that once the likes of Andrew Tate and others involved in the manosphere disappear, women and girls will no longer suffer such extreme levels of misogyny and violence at the hands of boys and men.
This is because we exist within a system built on inequality and violence. It’s a system which rewards competition over cooperation, greed over care and one which is harmful to us all.
Sophie Lively receives funding from the Economic and Social Research Council as part of Northern Ireland and North East Doctoral Training Partnership.
Source: The Conversation – Canada – By Jason Hawreliak, Associate Professor, Game Studies. Department of Digital Humanities., Brock University
While some film adaptations of video games achieve commercial success, others struggle to replicate the ‘feel’ of a video game for cinema audiences. (Warner Bros.)
Video game adaptations are having a moment. On television, shows like HBO’s The Last of Us and Amazon Prime’s Fallout — each based on popular game franchises — have been gigantic hits. On the big screen, 2023’s The Super Mario Bros. Movie broke box office records, and at the time of writing, A Minecraft Movie looks to be well on its way to generating one billion dollars in ticket sales.
With these recent successes, it can be hard to remember that movie adaptations of video games have historically been notoriously bad, typically failing to win over audiences and critics alike.
My first experience with adaptation disappointment came from the 1993 adaptation of Nintendo’s Super Mario Bros., starring Hollywood legend Bob Hoskins as Mario and John Leguizamo as his brother, Luigi.
Movie studio executives can perhaps be forgiven for trying to capitalize on the popularity of video games. With billions of players worldwide and a market valuation surpassing Hollywood and the music industry combined, video games are seemingly low-hanging fruit for commercial success. So why, a few notable exceptions notwithstanding, are video game adaptations so difficult to pull off?
The problem with adaptations
One key issue is that video games and movies are two very different media with different functions and different representational strengths and weaknesses. At their most basic, video games are meant to be interactive. They provide players with goals to achieve and challenges to overcome through some combination of strategy, skill and luck.
Sometimes, these goals and challenges are clear and direct. When a player sees a Goomba approach in Super Mario Bros., for example, they must press a button to jump on its head and defeat it; otherwise, the player takes damage and may have to start the level again.
Other times, the goals and challenges are less direct. In open-world or “sandbox” games like Minecraft, players are given a high degree of freedom in how they interact with the game world. There are ways to “win” in Minecraft, but the true pleasure of the game lies in giving players freedom to explore a vast world and create unique structures, villages, or even functional computers.
Interacting with a game world — its goals, rules and aesthetics — is a fundamentally distinct process from watching a film or reading a novel. Minecraft’s motto of “Create. Explore. Survive.” is not readily applicable in media like film and books, though these media have experimented with interactivity too.
Game worlds on the big screen
So why have adaptations like The Super Mario Bros. Movie and A Minecraft Movie been successful, at least commercially? Part of the reason is that these are massive franchises with instant brand recognition. Even people who do not play video games know who Mario is, and Minecraft is among the most popular games of all time.
However, as we have seen with recent unsuccessful adaptations like Warcraft and Borderlands, brand recognition alone is not sufficient.
One reason why The Super Mario Bros. Movie and A Minecraft Movie have done well is that they get the “feel” of their respective worlds right. When Mario transports into the Mushroom Kingdom in the 2023 film, it looks and sounds like the Mushroom Kingdom players encounter in the games.
The colours, shapes and sounds in the film closely match the colours, shapes, and sounds in the games. The Goombas look like Goombas, the power-ups look like power-ups and the film retains the whimsical nature of the games.
Although the radical freedom afforded to players of Minecraft is difficult to replicate in a film, A Minecraft Movie nevertheless retains the look, sound and feel of the game. The Creepers look and behave like Creepers and the Piglins look and behave like Piglins.
When Steve (played by Jack Black) learns to build his first structures, the audience watches as he joyfully creates whatever he can imagine, gradually learning to build larger and more complex structures, just as players do in the game.
Finally, it should be noted that while these films were commercial successes, they have failed to win over critics. On Metacritic, The Super Mario Bros. Movie sits at 46 (though the user score is a healthy 8.2) while A Minecraft Movie has a similarly paltry 45. As the Los Angeles Times puts it in its review, “A Minecraft Movie is a block of big dumb fun.”
So no, it is unlikely the film will win an Oscar for best picture. But its ability to capture the essence of Minecraft is clearly enough for audiences, many of whom have spent countless hours exploring virtual mines, fending off zombies and creating their own fantastical worlds.
Jason Hawreliak receives funding from The Social Sciences and Humanities Research Council.
Euthanasia has been legal in Belgium since mid-2002, and in the past two decades, the number of reported cases has risen sharply. In 2003, only 236 cases were recorded, but by 2023, this had increased to 3,423. This means that euthanasia now accounts for around 3% of all deaths. But what explains this increase? And does it suggest a worrying trend, as some critics fear?
In a new study published in JAMA Network Open, my colleagues and I analysed trends in all reported euthanasia cases between 2002 and 2023. Our findings show that the rise in euthanasia cases can be attributed to two factors: “regulatory onset” (the time required for both the medical community to adapt its practices and protocols to the new law, and for the public to become informed about its availability and implications) and demographic change, including population ageing.
We saw a sharp rise in cases during the 15 years following the law being introduced, followed by a period of stabilisation. About one-third of the increase can be explained by demographic changes – mainly population ageing. Euthanasia is indeed most common among people in their 70s and 80s, who often suffer from terminal cancer or multiple conditions. The number of people in those age categories has steadily increased.
A common point of contention in the euthanasia debate is the inclusion of psychiatric disorders as a valid reason. In Belgium, euthanasia for psychiatric conditions has been permitted since the law was first introduced. However, despite concerns that this might lead to a rapid expansion of cases, our study finds that psychiatric euthanasia remains extremely rare.
Between 2002 and 2023, psychiatric conditions accounted for just 1.3% of all euthanasia cases, and this figure has remained stable over time. The strict criteria mean that these cases typically involve long-standing conditions where all treatment options failed. In all cases, the person seeking to end their life underwent an extensive assessment before euthanasia was approved.
Euthanasia for dementia, however, has increased slightly in recent years. While cases remain low – under 1% of total euthanasia cases – there has been a gradual rise, partially reflecting the ageing of Belgium’s population.
There are also regional differences. Historically, euthanasia rates have been higher in the Flemish region than in French-speaking Wallonia and Brussels. However, our study shows that this gap has narrowed in recent years. This may reflect shifting cultural attitudes or changes in access to end-of-life care, but, overall, the trend points to a growing alignment in practices across the country.
One of the biggest concerns around euthanasia laws is the so-called slippery slope argument – the idea that legalisation could lead to a broadening of criteria, eventually allowing euthanasia for non-terminal conditions, mental health issues or even socioeconomic reasons. However, our study finds no evidence to support this claim.
The increase in euthanasia cases has largely followed demographic trends and legislation implementation, rather than any broadening of legal criteria or changes in medical practice. Over time, both the regional and gender gaps have decreased, showing a more consistent pattern across the population rather than diverging trends.
Belgium’s approach differs significantly from the assisted dying bill currently being debated in the UK. With assisted dying, the patient ends their own life but a doctor prescribes the life-ending medication. With euthanasia, a doctor administers the life-ending medication. The proposed UK legislation would allow assisted dying only for terminally ill patients with a short life expectancy, whereas Belgium’s law permits euthanasia even when death is not expected in the near future.
This is particularly relevant for patients with psychiatric disorders or dementia, who may suffer unbearably for years before meeting the UK’s proposed eligibility criteria. Another key distinction is decision-making: in Belgium, the final decision is made by doctors, whereas the UK is mooting judicial oversight.
Data gaps
One thing that countries allowing assisted dying need to think about is how to track and collect euthanasia data. Belgium has a national system for reporting, but there are still gaps – especially in connecting euthanasia data with people’s social and economic backgrounds. It’s important to understand who asks for euthanasia and why, to assess the long-term effects of the law.
As more countries consider assisted dying laws, Belgium’s experience offers valuable lessons – not only on regulation but also on the importance of robust data monitoring from the outset.
Jacques Wels receives funding from the Belgian National Scientific Fund (FNRS) and the European Research Council (ERC).
Natasia Hamarat reports participating in the Federal Commission for the Control and Evaluation of Euthanasia (FCCEE).
Donald Trump has threatened to walk away from the Ukraine peace talks if there is no progress soon. The implicit threat here is that the US will no longer get involved, perhaps withdrawing arms shipments and even humanitarian aid to Ukraine.
It is understood that the plan the Trump team has been working on would involve Ukraine ceding territory, including Crimea, and abandoning any possibility of joining Nato. The plan favours Russia’s recent demands, and Trump has recently said he finds Russia much easier to deal with than Ukraine.
But which country do US voters feel closer to and which do they feel is more of an ally to their nation?
An Economist/YouGov poll conducted on March 17 asked Americans whether they thought Russia and Ukraine were allies or enemies. Some 2% thought Russia was an ally, compared with 46% who saw it as an enemy. In the case of Ukraine, the figures were 26% ally and 4% enemy. Given these figures, Trump’s Russia-friendly policy looks unpopular.
Meanwhile, the Cooperative Election Study data in the US has just been released. This project involves a large group of researchers who conducted a survey of 60,000 Americans at the time of the presidential election last year. This very large sample provides an accurate picture of US public opinion.
American attitudes to policy alternatives for dealing with the Ukraine war
The survey included the following question: “As you may know Russia invaded Ukraine in February 2022. What should the U.S. do about the situation in Ukraine?”
Respondents were asked to choose as many of the options shown in the chart above as they favoured – some chose one or two, others several.
This means that failing to choose an option does not necessarily indicate disagreement: respondents may not have thought about it, may have been indifferent to it, or may not have believed it would work.
It is clear from the chart that Americans do not want their troops to get involved in combat in Ukraine, since only 5% chose this option. However, 22% agreed with the idea of sending military support staff, 33% agreed with sending military aid and 51% favoured sending humanitarian aid.
A key point is that only 23% said the US should not get involved. There is not much support among Americans for abandoning Ukraine.
Can President Trump abandon Ukraine?
This raises the question as to whether the US can simply walk away from the war as the president suggested. However, this could cause political problems for the Trump administration.
The US has already provided US$66.5 billion (£49.9 billion) of aid to Ukraine. Abandoning the country would call into question Trump’s much-vaunted negotiation skills and mean that achieving a peace deal – supported by 41% in the survey – had clearly failed.
When former president Joe Biden withdrew US forces from Afghanistan in 2021, he was heavily criticised by Republicans in the US Congress, despite the fact that the previous Trump administration had negotiated the withdrawal agreement. A rapid withdrawal from Ukraine now could attract even stronger criticism, given Trump’s earlier claims that he would settle the conflict in 24 hours.
The chart below, based on questions in the survey, shows that American voters are not especially reluctant to send troops abroad if they agree with the reasons for doing so. Respondents were asked to choose as many as they liked of five policy alternatives relating to military interventions abroad.
Once again, different respondents chose different numbers of alternatives. The chart makes clear they are not enthusiastic about using military force to assist in the spread of democracy, or to ensure that the US has a regular supply of oil.
American support for using US military forces abroad
At the same time, it shows that 38% support using troops to prevent a genocide happening and 46% support using them to protect allies being attacked, or as part of a United Nations peacekeeping force. Finally, a majority support the idea of destroying a terrorist camp, a response probably influenced by the elimination of Osama Bin Laden by US special forces when Barack Obama was president in 2011.
There is no contradiction between a generalised willingness to use force in various circumstances and a reluctance to do this in Ukraine. Americans fighting in Ukraine would mean involvement in a war with Russia with all the risks that would entail.
But there was strong willingness to support Ukraine before Trump’s second term, and these attitudes suggest that if he tried to withdraw from Nato, or continued to put forward a pro-Putin deal, large numbers of American voters would be unhappy – and that could affect his support.
There has been global criticism of the Trump administration’s introduction of high tariffs, and warnings of their consequences for the world economy. And what many Americans might see as an abandonment of Ukraine would also alienate many international allies of the US – but so far Trump has not shown many signs of worrying about that.
Paul Whiteley has received funding from the British Academy and the ESRC.
Source: The Conversation – UK – By Anastasia Vayona, Postdoctoral Research Fellow in Social Science and Policy, Faculty of Science and Technology, Bournemouth University
Have you ever thrown something in the recycling bin, hoping it’s recyclable? Maybe a toothpaste tube, bubble wrap or plastic toy labelled “eco-friendly”?
This common practice, known as “wishcycling”, might seem harmless. But my colleagues and I have published research that shows misleading environmental claims by companies are making recycling more confusing – and less effective.
This kind of marketing leads to greenwashed consumer behaviour — when people believe they are making environmentally friendly choices, but are being misled by exaggerated or false claims about how sustainable a product is.
We surveyed 537 consumers from 102 towns across the UK to explore a simple question: is there a link between greenwashed consumer behaviour and wishcycling? We wanted to find out whether they feed into each other, what drives them both, and how consumers perceive the connection.
What makes this issue particularly interesting is its psychological foundation. We argue that modern consumers have been burdened with a responsibility that may be beyond their capacity: deciding what to do with product packaging after use.
Many people are unprepared, undereducated or simply unaware of the full effect of their choices — and why should they be? This is a burden that should not rest on their shoulders. Into this gap has stepped recycling, presented as the solution. Consumers are led to believe that by recycling, they are doing their part to help the environment.
However, when products carry environmental claims or symbols — even vague ones like a green leaf, green banner or “earth-friendly” label — consumers often fall prey to what we call the “environmental halo effect”. This cognitive bias causes people to attribute positive environmental qualities to the entire product, including how it’s disposed of, even when those claims may not be accurate.
Surprisingly, our study reveals that environmentally conscious consumers can be most susceptible to this effect. Their strong environmental values may make them more inclined to trust green marketing claims, even when those claims are vague or misleading.
Driven by their desire to make sustainable choices, these consumers often accept green marketing claims at face value, assuming that environmental claims reflect genuine efforts toward sustainability.
Even more intriguingly, we found that people with higher levels of education tend to trust companies’ environmental claims more readily, especially when these companies present themselves as environmentally responsible.
This all leads to more wishcycling, not less. When companies talk about their environmental ethos and social responsibility, we’re more likely to believe their packaging is recyclable – even when it isn’t.
Our research also suggests that younger consumers, despite being generally more environmentally aware, are more likely to wishcycle. While millennials and generation Z often express strong environmental values, they’re also often more likely to contaminate recycling streams by throwing in non-recyclable items.
The future is circular
The solution is not to stop caring for the environment, but to channel that care more effectively. At the heart of this approach is the concept of a circular economy, where products and materials are reused, refurbished and recycled, rather than discarded.
The answer isn’t just better recycling – it’s better packaging design and corporate responsibility from the start. While we as consumers should continue doing our part, the primary burden should rest with manufacturers to create packaging that’s genuinely recyclable or reusable, not just marketed as “eco-friendly”.
This means implementing clear, standardised labelling that leaves no room for confusion, using packaging made from single, easily recyclable materials, and designing for reuse and refill systems.
On February 11 2025, the EU enacted a new packaging and packaging waste directive. This is designed to reduce packaging waste and support a circular economy by setting rules for how packaging should be made, used and disposed of throughout its lifecycle.
Until these systemic changes are fully implemented, we need to be both environmentally conscious and critically aware consumers. But it’s important to remember: while our daily choices and actions matter, the key to real change lies in pushing for corporate and policy-level transformation of our packaging systems.
By designing out waste, the circular economy offers a sustainable model that can guide these changes and reduce our dependence on single-use packaging. Hopefully, this can inspire us to improve current practices and keep finding better ways to do things, leading to a more sustainable and resilient future.
Anastasia Vayona does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The early morning light spills over the raised beds of a thriving community garden in Harlem, New York. It’s a Saturday, and people of all ages move among the plants – harvesting collard greens, making compost and packing bags of fresh vegetables.
A community initiative called Harlem Grown began in 2011 as a single urban farm on an abandoned neighbourhood lot. It has since become a lifeline for the people who live there.
The project combats food insecurity, provides fresh produce to local families – 150,000 servings of food in 2023 alone – and teaches the next generation how to nourish themselves and their communities. As one long-term female volunteer told me: “Healthy habits start young.” That’s why their programmes involve schoolchildren as young as five.
Across the boroughs of New York City, a lively ecosystem of urban farmers, non-profit leaders, dietitians and chefs work together to localise food systems. This helps communities to become more self-sufficient and less reliant on ultra-processed foods, all while ensuring support reaches the most vulnerable.
While healthy food options are readily available in affluent areas such as Manhattan’s Upper East Side, lower-income neighbourhoods – dominated by fast-food establishments – face a far greater need. In the Bronx, residents are establishing community gardens to provide access to fresh, organic produce that people would otherwise have to travel outside the borough to find.
Some young, female urban farmers from minority communities in New York believe that “like fashion, farming is political too”. Some have built their capacity through courses at the Farm School NYC, which provides them with the tools needed to become effective leaders in the food justice movement.
Localising food systems involves growing and foraging for food in urban settings to reduce food miles and reclaim diverse, locally rooted food traditions long displaced by industrial systems. This is one of the key lines of work explored by women in my book, What if Women Designed the City?
I’ve been investigating how women, as experts of their neighbourhoods, engage with local food movements – organising community gardens, coordinating cooperatives and managing farmers’ markets – through a transatlantic lens that connects efforts in North America with those under way in the UK.
My research adopts a regenerative perspective on urban development, viewed through the eyes of women from diverse backgrounds who uncover untapped potential rooted in the uniqueness of their neighbourhoods. For instance, I conducted walking interviews with 274 women from both affluent and hard-to-reach areas in three Scottish cities: Glasgow, Edinburgh and Perth.
A participant from the modernist housing estate of Wester Hailes in Edinburgh observed that locals often favour convenience foods: “People in this area like hamburgers, pizzas, mashed potatoes and stuff like that.” In her view, encouraging more community gardens could provide healthier alternatives while also reconnecting residents with fresh, seasonal produce.
Another resident recognised the social benefits such spaces could bring, helping to counter isolation. Regular meals at the Murrayburn and Hailes Neighbourhood Garden, for instance, attract people who live alone, providing a welcoming space – even for those who don’t feel like talking. As one participant put it, these meals are especially “good for people who are slightly depressed”.
Research suggests that getting our hands into the soil stimulates the release of serotonin, a natural antidepressant, triggered by the soil bacterium Mycobacterium vaccae, which can help people to feel more relaxed and happier. This aligns with compelling evidence on the benefits of “green care” – including social and therapeutic horticulture, care farming and environmental conservation – which has been shown to reduce anxiety, stress, and depression.
Growing native
At the heart of this community-led food justice movement is the belief that both herbalists and everyday gardeners should prioritise cultivating native plants that naturally thrive in their surroundings, rather than relying on plants from distant regions, which require harvesting, processing and transportation over long distances using fossil fuel energy.
This ethos underpins the work of a growing network of women from the Grass Roots Remedies workers cooperative, who meet regularly at the community-led Calders Garden in Edinburgh to exchange experiences while growing, foraging and making their own herbal medicines.
The vital role of communities as growers and foragers in urban resilience has largely been overlooked by city officials, urban planners and developers. Yet, these community-led efforts are bringing more life and vitality to urban spaces, fostering biodiversity, regenerating soil health and reducing the carbon footprint embedded in industrial food systems.
Several of the women I interviewed believe that being thoughtful consumers involves also taking part in producing what they eat, while reducing food waste at all stages of production. Women are also leading the way by repurposing vacant lots and development sites for community gardening and herbal medicine kitchens while integrating local food production into urban planning and building codes.
Regulatory measures that tie planning approval of new developments to the provision of open space for garden cultivation – either on-site or within the neighbouring area – can ensure that urban agriculture becomes an integral part of city planning. In cities, growing and foraging together deepens social links, encourages more diversified diets, reduces food miles and fosters a regenerative approach to community healthcare.
May East does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Nitasha Kaul, Professor of Politics, International Relations and Critical Interdisciplinary Studies, University of Westminster
The horrific targeted attack by militants in Kashmir on April 22, which killed at least 25 Indian tourists and one Nepalese national and injured many more, bears all the hallmarks of terrorism. The timing of the attack, during the high-profile visit to India of the US vice-president, J.D. Vance, suggests it was calculated to achieve maximum impact.
The attack came at the beginning of the peak tourist season, right before the Amarnath Yatra, a major Hindu pilgrimage that attracts thousands each year. It also happened soon after provocative statements from Pakistan’s military chief, Asim Munir, who said in a recent speech: “No power in the world can separate Kashmir from Pakistan. Kashmir is Pakistan’s jugular vein.”
The attack was carried out by gunmen who identified Hindu men by demanding they recite verses from the Qur’an before killing them, while sparing women and children.
Kashmir is a site of multiple competing claims, entrenched conflict and intense militarisation. The political dispute has further been used to divide Kashmiris along religious lines, resulting in a discourse of competing victimhoods between Kashmiri Muslims and Kashmiri Hindus.
Against the backdrop of already normalised Islamophobia in India, such an attack creates greater prospects for repression and violence against Muslims.
The reaction in the Indian media has followed a predictable script. Amid the Hindutva (Hindu nationalist) ratcheting up of anti-Muslim sentiment in the country, some people took to social media to demand the annexation of Pakistan Administered Kashmir (known as “PoK” – or Pakistan Occupied Kashmir by many in India). Kashmiri Muslims in India are reportedly now facing Hindutva groups threatening to target them.
Hindu majoritarianism in India has long relied on constructing a narrative of the beleaguered majority under attack from a Muslim minority. So this attack becomes part of a selectively retold and lengthy history where Muslims have always been aggressors and Hindus always victims.
Indian Muslims then often have to prove their patriotism. A Muslim member of India’s Congress Party even called for the Pakistani city of Rawalpindi to be “flattened”.
India’s prime minister, Narendra Modi, held an emergency meeting of the (all-male) security cabinet, and the immediate measures announced afterwards included a condemnation of Pakistan for encouraging “cross-border terrorism”. Barely a day later, he was back on the campaign trail in the Indian state of Bihar for the upcoming elections there.
There is a continuing clamour on social media for cross-border military strikes and a desire to go after Pakistan (#AvengePahalgam). These two countries have a long history of conflict. With an ongoing spiral of tit-for-tat responses, a de-escalation cannot be guaranteed and a more general irrational miscalculation between the nuclear-armed neighbours cannot be ruled out.
A question of accountability
In the cacophony of jingoist calls for revenge, what is being ignored completely by the mainstream nationalistic media – often satirically referred to in rhyme as Modi’s “godi” (lapdog) media – is the question of accountability.
In 2019 Jammu and Kashmir was downgraded from a state to a “union territory”, since which all matters of security have been the responsibility of the Delhi-appointed lieutenant governor and central home ministry. So when the home minister Amit Shah – Modi’s right-hand man – went to the region after the attack, the local chief minister, Omar Abdullah, a veteran political leader, was excluded from security briefings and meetings.
Voices calling for accountability and even Shah’s resignation (he was the architect of downgrading Jammu and Kashmir in the name of greater security and integration) are being ignored and termed “anti-national” or traitorous. This contrasts with the reaction after the Mumbai attacks of 2008 under the Congress Party-led United Progressive Alliance. Following that terror attack, the Indian home minister resigned.
By contrast, Shah and India’s current national security advisor, Ajit Doval, have remained in post through many such attacks – the last major one in Pulwama in 2019, also in the Kashmir region, when 40 Central Reserve Police Force (CRPF) personnel were killed.
Before the most recent attack there, despite the heavy tourist presence, there was no security deployment on the main road from Pahalgam to Baisaran, another major tourist resort.
Important questions need to be answered. What were the lapses in security and who is responsible? What are the policy failures in Jammu and Kashmir that allowed this to happen? Who in government should be accountable and what lessons can we take from the attack?
In a democracy, elected leaders are held accountable and those who speak truth to power can do so without being punished. Yet, in an environment of censorship on dissent, any questioning of Indian ruling party leaders, especially Modi and Shah, is branded as hostile to India’s national interest.
The problem with tourism as a political solution
Modi’s policy towards Kashmir has been to encourage tourism in response to terrorism. This makes the people there dependent on the centre, as well as presenting the idea of post-conflict normality as a propaganda coup.
But anyone who knows Kashmir will tell you that official platitudes about “normality” mean very little. The conflict in Kashmir has a complex history in which the idea of Kashmiri self-determination has long been the most important factor. Now the region is without autonomy and only held an election last year – for the first time in a decade – after the Indian Supreme Court ordered it.
In today’s India, where authoritarianism is ascendant and Hindu nationalism poses a threat to Muslim rights and security, questions of Kashmiri people’s rights are almost impossible to address.
Meanwhile, Kashmiris are vulnerable to attacks in the name of revenge for whatever Pakistani or Pakistani-backed militants do. And any acts of solidarity by Kashmiri Muslims, such as vigils and shutdowns, tend simply to be ignored by a narrative that points the finger at Muslims.
Rather than focus on the shared grief, the risk is that Modi’s Hindu nationalist government will adopt a narrow and aggressive stance, making tensions in the region worse. Calls for a vendetta may fail to distinguish between Indian Muslims or Kashmiri civilians and terrorists. This will only make the entire south Asian region less secure and more violent.
Nitasha Kaul does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Chantal Gautier, Senior Lecturer in Psychology and Sex and Relationship Therapist, University of Westminster
Warning: contains minor spoilers for Dying for Sex.
When Molly (Michelle Williams) learns that her breast cancer has returned and time is now slipping through her fingers, she decides she isn’t ready to write off her ending. Not before living the chapter she’d never dared to start: the one about self and sexual discovery.
The Disney+ series Dying for Sex opens with a couples’ therapy moment that, as a sex and relationships therapist, I know well. Molly is craving more sex but her husband Steve (Jay Duplass) just isn’t feeling it. After one final attempt to elicit sex, Molly gives Steve a blow job, but when she moves his hand to her chest, he breaks down. “When I touch your breasts,” he explains, “it makes me think about the mastectomy and then I think about losing you”.
It’s not uncommon for partners like Steve to share these feelings. Studies have shown that the physical and emotional toll of caregiving, and the desire to protect the patient, can sometimes lead partners to withdraw from intimacy.
Still, it doesn’t land well. Watching Molly’s reaction is painful but it marks a turning point. She decides to divorce, firmly declaring that she “doesn’t want to die with him” and longs to be seen beyond the lens of her illness.
In pursuit of unlocking her true sexual self, Molly navigates her way through the wilds of dating apps, embarking on a string of sexual escapades from hook-ups to experimentation with sex toys. But it hits her that she doesn’t know what she really likes or dislikes.
This isn’t unusual. Many people don’t have a clear sense of their turn-ons or preferred pleasures at first. In my private practice, it’s actually a frequent theme. Clients often come feeling unsure or disconnected from their desires, and together we explore what’s sometimes called their “erotic template”.
In pursuit of Molly’s “yums”, her palliative-care specialist Sonja (Esco Jouley) invites her and best friend Nikki (Jenny Slate) into the sex-positive world of the “play party”: a space where like-minded people into kink, BDSM and other forms of consensual play can hang out, connect and explore.
It’s here that something in Molly awakens. She allows herself to fully embrace new aspects of her sexuality as she discovers a preference for dominance and a strong desire to have others submit to her sexually. We get an early glimpse of this power dynamic between Molly and her neighbour Guy (Rob Delaney), setting the stage for their unique relationship.
Despite taking naturally to dom/sub dynamics, Molly is still held back in seeking her own pleasure – specifically, in her quest for an orgasm with another person. It’s only when we delve into her history that we see how profoundly haunted Molly is in moments of sex, and why she struggles to stay connected.
This kind of disconnect or dissociation is a common response to trauma, a way the mind tries to protect itself when things feel unsafe or too overwhelming. When the body senses a threat – even if there is no real threat, but a reminder of past trauma – it can shoot us outside our window of tolerance, meaning we disconnect.
Realising that she has spent most of her life locked out of her own body pushes Molly to revisit her childhood and subsequent sexuality. Perhaps sex and dominance is a language her nervous system can understand – a way to heal. In dom/sub spaces, everything is based on clear consent, safety and mutual respect. Here Molly can decide who touches her and how.
And so we find Molly at a crossroads, where something deeper quietly begins to take root: agency. Molly starts to feel in charge of her life, her body and her choices – including how she navigates her cancer. She makes her own decisions about which treatments feel right for her: when to stop chemo, when to be sedated for pain management and even who she wants by her side when she dies. Not out of fear, but from a place of clarity and ownership, because she has found her power.
Dying for Sex takes viewers on a roller coaster of emotions – laughter, surprise, tenderness, sadness, even hope. Boldly provocative and deeply moving, it weaves together themes of sexuality, love, a complex maternal relationship and enduring friendships.
What emerges is not just a story about dying for sex – but a powerful celebration of what it means to truly live.
Chantal Gautier does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Immigrants and diaspora communities make up a growing segment of Canada’s population. In 2021, a record 23 per cent of the Canadian population, more than 8.3 million people, were current or former immigrants, the highest share since 1921. People from Asia constituted 51.4 per cent of this immigrant population.
I am a postdoctoral fellow at the University of British Columbia’s Faculty of Education. My doctoral research focused on the integration practices of South Asian immigrants from Pakistan, India and Bangladesh living or working in northeast Calgary.
Using the Canadian Index for Measuring Integration, I explored how they engaged with Canadian society across economic, social, health and political dimensions. Much of this engagement is driven by multilingualism and ethnic networks, increasingly mediated by platforms like WhatsApp, X and Facebook.
Researching political integration in a multilingual digital world
Since the federal election was called in late March, I’ve been conducting a digital ethnography of social media pages run by South Asian community influencers. Digital ethnography involves observing how people use internet technologies to communicate, engage and make meaning in online spaces.
The influencers in my study are individuals who manage digital platforms, such as Facebook groups, WhatsApp chats and other community networks, and play a key role in shaping how community members access, discuss and act on political information. The pages I examined — mostly on WhatsApp, Facebook and X — continue to show how multilingualism and ethnic networks shape political awareness and influence voter behaviour.
Too often, political engagement is narrowly defined by voter turnout. But my research with the South Asian diaspora in Calgary shows that political integration extends far beyond the ballot box. It happens on social media, at mosques, temples and gurdwaras, through multilingual volunteering and in community spaces where language, culture and civic life intersect.
Crucially, it also extends to transnational issues. Many community members discuss global events — such as the Israel-Hamas conflict, the Russia-Ukraine war or United States trade policies — as well as Canadian issues like immigration.
For my research, I interviewed 19 first-generation South Asians from Bangladesh, India and Pakistan, living in Calgary. Participants in my study described the wide range of civic and democratic activities they take part in: volunteering, joining online discussions and attending cultural or religious events where political issues were discussed — mostly in both English and their heritage languages.
Participation spans both formal volunteering, often in English-dominant spaces, and informal volunteering at religious institutions, festivals or on social media. Many preferred to volunteer where they could speak Hindi, Punjabi, Bangla or Urdu or sometimes a mixture of multiple languages, referred to as translanguaging.
One participant, a banker and social media influencer who runs a Pakistani Facebook group, said:
“I often volunteer on Facebook. I also join politicians in their campaigns. My entire social media work is based on Urdu. It allows me to connect with people.”
During digital ethnography, this participant was observed combining artificial intelligence (AI) generated images with multilingual postings to campaign for a political party.
Beyond voter turnout
South Asians are Canada’s largest visible minority group and their civic participation offers a vital lens into how democracy functions in a multicultural, multilingual society. There’s a widespread belief that if people aren’t engaging with politics in the dominant language, then they must not be engaging at all.
However, my research shows otherwise. Societal multilingualism — the ability to use both English and heritage languages — is protected under Canada’s Multiculturalism Act and supports more inclusive participation. A participant who works for a settlement agency explained that multilingual political activities help “in communication, explaining policies, responding to people’s questions, understanding their concerns and addressing them.”
There’s also a common misconception that nominating a candidate from a specific ethnic background guarantees community support. While that may influence local elections, federal voting decisions are often more complex. Participants in my research emphasized party platforms, past performance and national and international issues alongside identity. Ethnic concentration alone does not determine electoral success.
Ethnic networks — made up of extended family, faith groups, digital communities and neighbourhood ties — act as civic incubators. They are not isolated enclaves but dynamic platforms where newcomers develop political literacy and trust.
Rethinking political participation
Canada’s official languages are English and French, but multilingualism plays a central role in immigrant communities. In my research, language is dynamic — a social and cultural resource that fosters identity and engagement.
Participants translated political materials, explained policies to others and used multilingual platforms to discuss topics like housing, health care and immigration. These practices are visible in this election cycle too, as South Asian community members use language, digital tools, artificial intelligence and hot-button issues to engage voters. Language in these settings is cultural capital. It enables participation through familiarity, emotional connection and social belonging.
Faith-based spaces like gurdwaras, mosques and mandirs are civic forums. Candidates visit during campaigns and community leaders help shape political dialogue and participation. These institutions offer cultural fluency and language access that mainstream systems often lack.
As immigration reshapes Canada's demographics, political integration is more than a trend — it's essential to a functioning democracy. While some parties provide translations or host cultural events, they often overlook how deeply civic engagement already runs within these communities.
Immigrants are no longer passive recipients. They are active participants, shaping conversations in their own languages and networks. Ahead of the 2025 election, it's time to move beyond the ethnic voting myth and recognize the full civic ecosystem — from WhatsApp groups to mosque courtyards.
Political parties must go beyond hiring translators or leaning on community leaders. Multilingual civic participation is not an afterthought — it’s foundational. It’s time to engage people in the languages they speak, in the spaces they trust.
If we want a truly inclusive democracy, we must meet people where they are linguistically, culturally and locally. Ethnic networks are not detours from political life. They are on-ramps. And multilingualism is not a barrier to participation. It’s the language of democracy.
Kashif Raza receives funding from the Social Sciences and Humanities Research Council (SSHRC) of Canada.