New research led by Viktoria Cologna at ETH Zurich in Switzerland may help to explain what’s going on. Using data from around the world, the study suggests simple exposure to extreme weather events does not affect people’s view of climate action – but linking those events to climate change can make a big difference.
Global opinion, global weather
The new study, published in Nature Climate Change, looked at the question of extreme weather and climate opinion using two global datasets.
The first is the Trust in Science and Science-related Populism (TISP) survey, which includes responses from more than 70,000 people in 68 countries. It measures public support for climate policies and the extent to which people think climate change is behind increases in extreme weather.
The second dataset estimates how much of each country’s population has been affected each year by events such as droughts, floods, heatwaves and storms. These estimates are based on detailed models and historical climate records.
Public support for climate policies
The survey measured public support for climate policy by asking people how much they supported five specific actions to cut carbon emissions. These included raising carbon taxes, improving public transport, using more renewable energy, protecting forests and land, and taxing carbon-heavy foods.
Responses ranged from 1 (not at all) to 3 (very much). Support was fairly strong, with an average rating of 2.37 across the five policies. Support was especially high in parts of South Asia, Africa, the Americas and Oceania, but lower in countries such as Russia, Czechia and Ethiopia.
Exposure to extreme weather events
The study found most people around the world have experienced heatwaves and heavy rainfall in recent decades. Wildfires affected fewer people in many European and North American countries, but were more common in parts of Asia, Africa and Latin America.
Cyclones mostly impacted North America and Asia, while droughts affected large populations in Asia, Latin America and Africa. River flooding was widespread across most regions, except Oceania.
Do people in countries with higher exposure to extreme weather events show greater support for climate policies? This study found they don’t.
In most cases, living in a country where more people are exposed to disasters was not reflected in stronger support for climate action.
Wildfires were the only exception. Countries with more wildfire exposure showed slightly higher support, but this link disappeared once factors such as land size and overall climate belief were considered.
In short, just experiencing more disasters does not seem to translate into increased support for mitigation efforts.
Seeing the link between weather and climate change
In the global survey, people were asked how much they think climate change has increased the impact of extreme weather over recent decades. On average, responses were moderately high (3.8 out of 5), suggesting that many people do link recent weather events to climate change.
Such an attribution was especially strong in Latin America, but lower in parts of Africa (such as Congo and Ethiopia) and Northern Europe (such as Finland and Norway).
Crucially, people who more strongly believed climate change had worsened these events were also more likely to support climate policies. In fact, this belief mattered more for policy support than whether they had actually experienced the events firsthand.
Prior research shows less dramatic and chronic events like rainfall or temperature anomalies have less influence on public views than more acute hazards like floods or bushfires. Even then, the influence on beliefs and behaviour tends to be slow and limited.
This study shows climate impacts alone may not change minds. However, it also highlights what may affect public thinking: helping people recognise the link between climate change and extreme weather events.
In countries such as Australia, climate change makes up only about 1% of media coverage. What’s more, most of the coverage focuses on social or political aspects rather than scientific, ecological, or economic impacts.
Omid Ghasemi receives funding from the Australian Academy of Science. He was a member of the TISP consortium and a co-author of the dataset used in this study.
Canadian researchers recently investigated this idea in a sample of 1,082 undergraduate psychology students. The students completed a survey, which included questions about how they perceived their diet influenced their sleep and dreams.
Some 40% of participants reported certain foods impacted their sleep, with 25% of the whole sample claiming certain foods worsened their sleep, and 20% reporting certain foods improved their sleep.
Only 5.5% of respondents believed what they ate affected the nature of their dreams. But many of these people thought sweets or dairy products (such as cheese) made their dreams more strange or disturbing and worsened their sleep.
In contrast, participants reported fruits, vegetables and herbal teas led to better sleep.
This study used self-reporting, meaning the results rely on the participants recalling and reporting information about their sleep and dreams accurately. This could have affected the results.
It’s also possible participants were already familiar with the notion that cheese causes nightmares, especially given they were psychology students, many of whom may have studied sleep and dreaming.
This awareness could have made them more likely to notice or perceive their sleep was disrupted after eating dairy. In other words, the idea cheese leads to nightmares may have acted like a self-fulfilling prophecy and results may overestimate the actual likelihood of strange dreams.
Nonetheless, these findings show some people perceive a connection between what they eat and how they dream.
While there’s no evidence to prove cheese causes nightmares, there is evidence that may explain a link.
The science behind cheese and nightmares
Humans are diurnal creatures, meaning our body is primed to be asleep at night and awake during the day. Eating cheese before bed means we’re challenging the body with food at a time when it really doesn’t want to be eating.
At night, our physiological systems are not primed to digest food. For example, it takes longer for food to move through our digestive tract at night compared with during the day.
If we eat close to going to sleep, our body has to process and digest the food while we’re sleeping. This is a bit like running through mud – we can do it, but it’s slow and inefficient.
If your body is processing and digesting food instead of focusing all its resources on sleep, this can affect your shut-eye. Research has shown eating close to bedtime reduces our sleep quality, particularly our time spent in rapid eye movement (REM) sleep, which is the stage of sleep associated with vivid dreams.
People will have an even harder time digesting cheese at night if they’re lactose intolerant, which might mean they experience even greater impacts on their sleep. This follows what the Canadian researchers found in their study, with lactose intolerant participants reporting poorer sleep quality and more nightmares.
It’s important to note we might actually have vivid dreams or nightmares every night – what could change is whether we’re aware of the dreams and can remember them when we wake up.
Poor sleep quality often means we wake up more during the night. If we wake up during REM sleep, research shows we’re more likely to report vivid dreams or nightmares that we mightn’t even remember if we hadn’t woken up during them.
This is very relevant for the cheese and nightmares question. Put simply, eating before bed impacts our sleep quality, so we’re more likely to wake up during our nightmares and remember them.
Don’t panic – I’m not here to tell you to give up your cheesy evenings. But what we eat before bed can make a real difference to how well we sleep, so timing matters.
General sleep hygiene guidelines suggest avoiding meals for at least two hours before bed. So even if you’re eating a very cheese-heavy meal, you have a window of time before bed to digest the meal and drift off to a nice peaceful sleep.
How about other dairy products?
Cheese isn’t the only dairy product which may influence our sleep. Most of us have heard about the benefits of having a warm glass of milk before bed.
Milk can be easier to digest than cheese. In fact, milk is a good choice in the evening, as it contains tryptophan, an amino acid that helps promote sleep.
Nonetheless, we still don’t want to be challenging our body with too much dairy before bed. Participants in the Canadian study did report nightmares after dairy, and milk close to bed might have contributed to this.
While it’s wise to steer clear of food (especially cheese) in the two hours before lights out, there’s no need to avoid cheese altogether. Enjoy that cheesy pasta or cheese board, just give your body time to digest before heading off to sleep. If you’re having a late night cheese craving, opt for something small. Your sleep (and your dreams) will thank you.
Charlotte Gupta does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Mountains on the moon as seen by NASA’s Lunar Reconnaissance Orbiter. (NASA/GSFC/Arizona State University)
In science-fiction stories, companies often mine the moon or asteroids. While this may seem far-fetched, this idea is edging closer to becoming reality.
Celestial bodies like the moon contain valuable resources, such as lunar regolith — also known as moon dust — and helium-3. These resources could serve a range of applications, from making rocket propellant to generating energy and sustaining long missions, bringing benefits in space and on Earth.
The first objective on this journey is being able to collect lunar regolith. One company taking up this challenge is ispace, a Japanese space exploration company that signed a contract with NASA in 2020 for the collection and transfer of ownership of lunar regolith.
The company recently attempted to land its RESILIENCE lunar lander, but the mission was ultimately unsuccessful. Still, this endeavour marked a significant move toward the commercialization of space resources.
These circumstances give rise to a fundamental question: what are the legal rules governing the exploitation of space resources? The answer is both simple and complex, as there is a mix of international agreements and evolving regulations to consider.
Space activities have evolved exponentially since the Outer Space Treaty’s adoption in 1967. In the 60 years following the launch of Sputnik 1 — the first satellite placed in orbit — fewer than 500 space objects were launched annually. But since 2018, this number has risen into the thousands, with nearly 3,000 launched in 2024.
Because of this, the treaty is often judged as inadequate to address the current complexities of space activities, particularly resource exploitation.
A longstanding debate centres on whether Article II of the treaty, which prohibits the appropriation of outer space — including the moon and other celestial bodies — also prohibits space mining.
The prevailing position is that Article II solely bans the appropriation of territory, not the extraction of resources themselves.
We are now at a crucial moment in the development of space law. Arguing over whether extraction is legal serves no purpose. Instead, the focus must shift to ensuring resource extraction is carried out in accordance with principles that ensure the safe and responsible use of outer space.
International and national space laws
A significant development in the governance of space resources has been the adoption of the Artemis Accords, which — as of June 2025 — have 55 signatory nations. The accords reflect a growing international consensus concerning the exploitation of space resources.
Notably, Section 10 of the accords indicates that the exploitation of space resources does not constitute appropriation, and therefore doesn’t violate the Outer Space Treaty.
Considering the typically slow pace of multilateral negotiations, a handful of nations have introduced national legislation. These laws establish the legality of space resource exploitation, allowing private companies to request licences to conduct this type of activity.
Among these, Luxembourg’s legal framework is the most complete. It sets out a series of requirements for authorising the exploitation of space resources. In fact, ispace’s licence to collect lunar regolith was obtained under this regime.
The first high-resolution image taken on the first day of the Artemis I mission, by a camera on the tip of one of Orion’s solar arrays. The spacecraft was 57,000 miles from Earth when the image was captured. (NASA)
The rest of the regulations usually tend to limit themselves to proclaiming the legality of this activity without entering into too much detail and deferring the specifics of implementation to future regulations.
While these initiatives served to put space resources at the forefront of international forums, they also risk regulatory fragmentation, as different countries adopt varying standards and approaches.
In May 2025, the chair of the UN working group on space resource activities, Steven Freeland, presented a draft of recommended principles based on input from member states.
These principles reaffirm the freedom of use and exploration of outer space for peaceful purposes, while introducing rules pertaining to the safety of the activities and their sustainability, as well as the protection of the environment, both of Earth and outer space.
The development of a legal framework for space resources is still in its early stages. The working group is expected to submit its final report by 2027, but the non-binding nature of the principles raises concerns about their enforcement and application.
As humanity moves closer to extracting and using space resources, the need for a cohesive and responsible governance system has never been greater.
Martina Elia Vitoloni does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A new flood barrier is being built to prevent climate-induced flooding in Chittagong, Bangladesh. amdadphoto/Shutterstock.com
At a coastal port in Chittagong, Bangladesh, something remarkable is underway. With support from a US$850 million (£620 million) investment from the World Bank, engineers are building flood-resistant infrastructure that can survive rising seas and stronger storms. A new 3.7-mile-long barrier will protect people, homes, and trade in one of the world’s most climate-vulnerable regions.
Projects like this do more than save lives. They show why investing in climate adaptation is one of the smartest financial opportunities of our time.
There are plenty of global conferences where leaders discuss climate change and make big promises. Yet less than 5.5% of global climate finance actually reaches the countries most at risk. That is not just a failure of fairness. It is a missed chance for real impact.
As the world gathers in Seville, Spain for the fourth international meeting on development financing, the focus must go beyond pledges and shift toward practical, on-the-ground investment in resilience.
At the previous UN climate finance meeting, also held in Seville, leaders focused on fixing how public money flows through global institutions. But just as important is the need to invest in climate adaptation. This means helping people live with the changes already happening, including more floods, longer droughts, rising seas and intense heat.
While mitigation is about stopping climate change getting worse (by switching to clean energy or protecting forests that absorb carbon, for example), adaptation is about coping with the effects we can no longer avoid. It includes building stronger homes, growing more resilient crops, and improving hospitals and schools so they can keep working during extreme weather. Both approaches are necessary, but adaptation often gets less attention. And less money.
Private investors have already committed large sums to clean energy projects. But they have done much less to support communities on the frontlines of climate change. Many of these countries struggle with limited budgets, complex rules for accessing finance, and a lack of support to develop viable projects. So promising ideas often go unfunded.
That is beginning to change. New tools are helping investors take on less risk and back more projects. These include low-interest loans, partnerships between public and private institutions, and guarantees that reduce the risk of failure.
The Green Climate Fund is the largest source of dedicated climate finance for developing countries. By the end of 2023, it had approved US$13.5 billion in funding, rising to US$51.9 billion when co-financing is included. This money helps unlock adaptation efforts that were previously out of reach.
We can already see progress. In Kenya and Ethiopia, farmers are using drought-resistant seeds to grow more food in changing conditions. In the Caribbean, solar energy is powering schools and clinics in remote communities. And in Bangladesh, the new port infrastructure in Chittagong is protecting a vital economic hub while boosting local businesses.
Working with nature
In coastal areas, restoring mangrove forests can reduce the force of incoming storms, protect biodiversity and support fisheries. The Pollination Group, a climate investment firm, is helping turn “nature-based solutions” like these into projects that attract private finance.
In his previous role as the Prince of Wales, King Charles III launched the Natural Capital Investment Alliance, an initiative that aims to mobilise US$10 billion for projects that restore and protect nature while offering solid financial returns. The alliance also helps investors better understand these kinds of opportunities by creating clearer guidance and standards. This supports the Terra Carta, a charter created by King Charles III that offers a roadmap for businesses to align with the needs of both people and the planet by 2030.
Investors who step into these emerging spaces gain more than financial returns. They build long-term relationships with governments and local communities. They help shape future policy. And they create lasting foundations for growth in places that are ready to lead if given the chance.
Adaptation projects also bring real benefits to people. They improve access to clean water, protect food supplies, create jobs, strengthen education and support healthcare systems. For families already facing climate disruption, these changes are not just improvements. They are lifelines.
By creating stable and welcoming environments for responsible investment, governments can accelerate this shift. By simplifying how money is accessed, international institutions can make it easier for good ideas to become funded projects. Philanthropic groups and development agencies can help build local skills and prepare projects for funding. Private investors can bring capital, innovation and experience.
Investing in climate adaptation is no longer just a moral issue. It is a smart, scalable and necessary response to a changing world.
Ali Serim does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Christina Philippou, Associate Professor in Accounting and Sport Finance, University of Portsmouth
Aside from victory and sporting glory, the players in the women’s Euro 2025 football tournament are playing for more money than ever before. The prize fund of €41 million (£35 million), to be shared among the 16 participating sides, is more than double what it was last time around.
It’s still a long way off the prize money on offer at the men’s equivalent tournament (€331 million), but it is a clear indication of the continuing rise of interest and investment in women’s football, particularly within England.
England’s hosting and winning of the 2022 women’s Euros was rightly credited with providing a massive boost to the game three years ago. But interest in women’s club football was already on the rise, with an almost sixfold increase in revenue between 2011 (the first season of the Women’s Super League (WSL)) and 2019.
Other numbers are encouraging too. Generally, match-day attendances have seen a dramatic rise including for the sport’s second tier (now named WSL2).
Broadcasting income for WSL was up 40% in 2023-24 compared to the previous season. And a new five-year deal with Sky Sports and the BBC, worth £65 million, is worth almost double the previous arrangement.
However, there’s room for improvement.
Research suggests that well-considered scheduling (weekend games are best) can have a marked effect on attendances (as does weather and pricing). And stadium capacity matters too, partly because more people can attend but also because a larger (often iconic) stadium tends to act as an attraction in itself.
For example, Arsenal’s women’s side saw average crowds of just under 29,000 in 2024-25, compared with a WSL average of 6,662. They have the highest revenue from match-day income, with nine games played at the Emirates Stadium last season and all WSL home games scheduled to be played there next season.
Facilities within the stadiums are another concern, as they were traditionally built for mostly male spectators, so do not cater as well to the more female and family demographic of women’s football.
This means, for example, that there are often not enough women’s toilets available, while refreshment options may be geared towards drinkers rather than children. Even the gates seem designed for a steady entry trickle of fans over an hour rather than a mass turnout of time-pressured families arriving just before kick-off.
Some good news on this front is that Brighton and Hove Albion FC are now building a stadium specifically for use by their women’s team, due to be in use by 2027. And Everton have decided to repurpose Goodison Park for use by their women’s side following the men’s move to a new stadium.
Commercial break
But aside from people actually watching the matches, the biggest chunk of income for women’s teams comes from commercial enterprises. And while affiliated teams (those linked to a men’s side) can benefit from sharing a brand, there are also a large number of commercial partners emerging specifically in the women’s game.
But while commercial and competition success stories are something to celebrate, women’s football still faces challenges. One of the big ones is to do with building a legacy – the idea that just hosting a major tournament should not be the end goal, but something which ensures lasting change and development.
As for the club game, attitudes to building a legacy by offering financial support to women’s teams are mixed. Some clubs view the women’s team as different (in terms of marketing, say) but integrated as part of the club (in terms of ticketing and sharing of resources). Others seem to consider a women’s side as good PR or community outreach rather than a genuine commercial opportunity.
All of these clubs mentioned worries over costs. And most women’s teams do lose money.
But men’s teams tend to lose money too, with the majority not only making losses but also being technically insolvent (meaning owners need to pump money in to keep clubs going).
The difference is that women’s football is essentially in a start-up phase, with lots of commercial, broadcasting and match-day potential, as showcased by annual revenue growth rates. In contrast, the men’s football market is a mature one that has been professional for decades, and shows much lower annual revenue growth.
Euro 2025 then, needs to play its part in keeping up momentum. It needs to keep the crowds, the commercial partners, the broadcasters and fans on board and committed.
For while women’s football is connected to men’s football, it is a different business. And celebrating that difference could do the women’s game a world of economic good.
Christina Philippou does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Scopolamine, more chillingly known as “devil’s breath,” is a drug with a dual identity. In medicine, it’s used to prevent motion sickness and nausea. But in the criminal underworld, particularly in parts of South America, it has gained a dark reputation as a substance that can erase memory, strip away free will and facilitate serious crimes. Now, its presence may be sparking fresh concerns in the UK.
While most reports of devil’s breath come from countries like Colombia, concerns about its use in Europe are not new. In 2015, three people were arrested in Paris for allegedly using the drug to rob victims, turning them into compliant “zombies”.
The UK’s first known murder linked to scopolamine was reported in 2019 when the Irish dancer Adrian Murphy was poisoned by thieves attempting to sell items stolen from him. In a more recent case in London, a woman reported symptoms consistent with scopolamine exposure after being targeted on public transport.
Scopolamine, also known as hyoscine, is a tropane alkaloid, a type of plant-derived compound found in the nightshade family (Solanaceae). It has a long history: indigenous communities in South America traditionally used it for spiritual rituals due to its potent psychoactive effects.
In modern medicine, scopolamine (marketed in the UK as hyoscine hydrobromide) is prescribed to prevent motion sickness, nausea, vomiting and muscle spasms. It also reduces saliva production before surgery. Brand names include Kwells (tablets) and Scopoderm (patches).
As an anticholinergic drug, scopolamine blocks the neurotransmitter acetylcholine, which plays a vital role in memory, learning, and coordination. Blocking it helps reduce nausea by interrupting signals from the balance (vestibular) system to the brain. But it also comes with side effects, especially when used in high doses or outside a clinical setting.
How it affects the brain
Scopolamine disrupts the cholinergic system, which is central to memory formation and retrieval. As a result, it can cause temporary but severe memory loss: a key reason it’s been weaponised in crimes. Some studies also suggest it increases oxidative stress in the brain, compounding its effects on cognition.
The drug’s power to erase memory, sometimes described as “zombifying”, has made it a focus of forensic and criminal interest. Victims often describe confusion, hallucinations and a complete loss of control.
Recreational users are drawn to its hallucinogenic effects – but the line between tripping and toxic is razor thin.
In Colombia and other parts of South America, scopolamine, also known as burundanga, has been implicated in countless robberies and sexual assaults. Victims describe feeling dreamlike, compliant, and unable to resist or recall events. That’s what makes it so sinister – it robs people of both agency and memory.
The drug is often administered surreptitiously. In its powdered form, it’s odourless and tasteless, making it easy to slip into drinks or blow into someone’s face, as some victims have reported. Online forums detail how to make teas or infusions from plant parts, seeds, roots, flowers – heightening the risk of DIY misuse.
Once ingested, the drug works quickly and exits the body within about 12 hours, making it hard to detect in routine drug screenings. For some people, even a dose under 10mg can be fatal.
Signs of scopolamine poisoning include rapid heartbeat and palpitations, dry mouth and flushed skin, blurred vision, confusion and disorientation, hallucinations and drowsiness.
If you experience any of these, especially after an unexpected drink or interaction, seek medical attention immediately.
Dipa Kamdar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Israel’s attack on Iran last month and the US bombing of the country’s nuclear facilities, the first-ever direct US attacks on Iranian soil, were meant to cripple Tehran’s strategic capabilities and reset the regional balance.
The strikes came after 18 months during which Israel had effectively dismantled Hamas in Gaza, dealt a devastating blow to Hezbollah in Lebanon, weakened the Houthis in Yemen, and seen the collapse of the Assad regime in Syria – a longstanding and key Iranian ally.
From a military standpoint, these were remarkable achievements. But they failed to deliver the strategic outcome Israeli and US leaders had long hoped for: the collapse of Iran’s influence and the weakening of its regime.
Instead, the confrontation exposed a deeper miscalculation. Iran’s power isn’t built on impulse or vulnerable proxies alone. It is decentralised, ideologically entrenched and designed to endure. While battered, the Islamic Republic did not fall. And now, it may be more determined – and more dangerous – than before.
Israel’s attack – dubbed “operation rising lion” – began with attacks on Iranian radar systems, followed by precision airstrikes on Iranian enrichment facilities and senior military officers and scientists. Israel spent roughly US$1.45 billion (£1.06 billion) in the first two days. Within the first week of strikes on Iran, costs hit US$5 billion, with daily spending at US$725 million: US$593 million on offensive operations and US$132 million on defence and mobilisation.
The day after the US strikes, the Israeli prime minister, Benjamin Netanyahu, spoke with Donald Trump about a ceasefire. He and his generals were reportedly keen to bring the conflict to a speedy end. Reports suggest that Netanyahu wanted to avoid a lengthy war of attrition that Israel could not sustain, and was already looking for an exit strategy.
Crucially, the Iranian regime remained intact. Rather than inciting revolt, the war rallied nationalist sentiment. Opposition movements remain fractured and lack a common platform or domestic legitimacy. Hopes of a popular uprising that might topple the regime expressed by both Trump and Netanyahu were misplaced.
In the aftermath, Iranian authorities launched a sweeping crackdown on suspected dissenters and what they referred to as “spies”. Former activists, reformists and loosely affiliated protest organisers were arrested or interrogated. What was meant to fracture the regime instead reinforced its grip on power.
Most notably, Iran’s parliament voted to suspend cooperation with the International Atomic Energy Agency (IAEA), ending inspections and giving Tehran the freedom to expand its nuclear programme – both civilian and potentially military – without oversight.
Perhaps the clearest misreading came from Israel and the US treating Syria as a template. The 2024 fall of Bashar al-Assad was hailed as a turning point. His successor, Ahmed al-Sharaa – a little-known opposition figure, former al-Qaeda insurgent and IS affiliate – was rebranded as a pragmatic reformer, who Trump praised as “attractive” and “tough”.
For western and Israeli strategists, Syria offered both a way to weaken Iran and a blueprint for how eventual regime change could play out: collapse the regime, then install cooperative leadership in a swift reordering process. But this analogy was dangerously flawed. Iran’s stronger institutions, military depth and resistance-driven identity made it a fundamentally different and more resilient state.
Both Israel and Iran, however, came away with new intelligence. Israel learned that its missile defences and economic resilience were not built for prolonged, multi-front warfare. Iran, meanwhile, gained valuable insight into how far its arsenal – drones, missiles and regional proxies – could reach, and where its limits lie.
Most of Iran’s drones and missiles were intercepted – up to 99% in the case of drones – exposing critical weaknesses in accuracy, penetration and survivability against modern air defences. Yet the few that did break through caused significant damage in Tel Aviv, striking residential areas and critical infrastructure.
This war was not only a clash of weapons but a real-time stress test of each side’s strategic depth. Iran may now adjust its doctrine accordingly – prioritising survivability, mobility and precision in anticipation of future conflicts.
Israel’s vulnerabilities
Internally, Israel entered the war politically fractured and socially strained. Netanyahu’s far-right coalition was already under fire for attempting to weaken judicial independence. The war has temporarily united the country, but the economic and human toll have reignited deeper concerns.
Israel’s geographic and demographic constraints have become clear. Its high-tech economy, tightly integrated with global markets, could not weather prolonged instability. And critically, the damage inflicted by the US bombing was more limited than hoped for. While Washington joined in the initial strikes, it resisted deeper involvement, partly to avoid broader regional escalation and largely because of the lack of domestic appetite for war and the high potential for energy inflation were Iran to close the Strait of Hormuz.
What happens now?
The war of 2025 did not produce peace. It produced recalibration. Israel emerges militarily capable but politically shaken and economically strained. Iran, though damaged, stands more unified, with fewer international constraints on its nuclear ambitions. Its crackdown on dissent, withdrawal from IAEA oversight, and deepening ties to rival powers suggest a regime preparing not for collapse, but for survival, perhaps even confrontation.
The broader lesson is sobering. Regime change cannot be engineered through precision strikes. Tactical brilliance does not guarantee strategic victory. And the assumption that Iran could unravel like Syria was not strategy – it was hubris.
Both sides now better understand each other’s strengths and limits, a clarity that could deter future war – or make the next one more dangerous. In a region shaped by trauma and shifting power, mistaking resistance for weakness or pause for peace remains the gravest miscalculation.
Bamo Nouri does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The UK is now more than halfway (50.4%) to achieving a net zero carbon economy, meaning it has cut its national emissions by just over half compared with 1990.
We should even celebrate that 0.4%. Why? Because every tonne of carbon kept out of the atmosphere and every fraction of a degree Celsius of warming avoided saves lives and leaves more life-sustaining ecosystems intact for our children and grandchildren.
It also reduces the risk of triggering irreversible, devastating tipping points in the Earth system. We absolutely do not want to go there. Though, it may already be too late to save 90% of warm-water coral reefs, on which hundreds of millions of people depend for food and protection from storms.
Luckily, tipping points can also work in our favour. Researchers like us call them positive tipping points, which kickstart irreversible, self-propelling change towards a more sustainable future.
Solar energy has already crossed a tipping point, having become the cheapest source of power in most of the world. Because it is quick to deploy widely and in a variety of formats and settings, solar is expanding exponentially, including to the roughly 700 million people who don’t have electricity.
Electric vehicle sales have also crossed tipping points in China and several European markets, as evidenced by the abrupt acceleration of their shares in national vehicle fleets. The more people buy them, the cheaper and better they get, which makes even more people buy them – a self-propelling change towards a low-carbon road transport system.
Recent findings from the Climate Change Committee (CCC), independent advisers to the UK government on climate policy, show that the UK too may be on the cusp of a positive tipping point for electric vehicles (EVs), but that further work is needed to reach a tipping point for heat pumps.
EV sales are racing ahead
According to the CCC, more than half of the UK’s success in decarbonising its economy since 2008 can be attributed to the energy sector. Here, the transition from electricity generated by coal to gas and, increasingly, renewable sources like solar and wind, has occurred “behind the scenes”, without much disruption to daily life.
However, over 80% of the greenhouse gas emission cuts needed between now and 2030 (the UK aims to reduce emissions by 68% by 2030) need to come from other sectors that require the involvement and support of the public and businesses.
The adoption of low-carbon technologies by households, including buying EVs and installing heat pumps, is the critical next step in determining the success or failure of the UK’s push to achieve net zero. Cars account for about 15% of the UK’s emissions and home heating a further 18%.
Encouragingly, and despite concerted misinformation campaigns to discredit EVs, sales in the UK accounted for 19.6% of all new cars in 2024, which puts this sector close to the critical 20-25% range for triggering the phase of self-propelling adoption, according to positive tipping points theory.
This rise in EV sales is happening for two main reasons. First, the UK has a rule that bans the sale of new petrol and diesel cars from 2035, which gives carmakers and buyers a clear deadline to switch.
Second, they are becoming a better choice all round. They’re getting cheaper (some are expected to cost the same as petrol cars between 2026 and 2028), more appealing (with longer ranges and faster charging), and easier to use (thanks to more charging points and better infrastructure).
If this positive trend continues, emissions saved by EV adoption will be sufficient to achieve the UK road transport sector’s 2030 emissions target.
Where is the heat pump tipping point?
Heat pumps have been slower on the uptake in the UK, leading the CCC to identify their deployment as one of the biggest risks to achieving the 2030 emissions target.
The UK government has set a target of installing 600,000 heat pumps a year by 2028. But despite 90% of British homes being suitable for a heat pump, only 1% have one.
There are signs that installations are picking up pace, however. In 2024, 98,000 heat pumps were installed – an increase of 56% on 2023. But deployment will need to rise to more than six times its current rate over the next three years to reach the installation target. In other words, we urgently need to trigger a positive tipping point in this sector.
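The arithmetic behind that "more than six times" can be sketched briefly. The 98,000 installations and 600,000-a-year target are the figures quoted above; spreading the growth evenly over three years is an assumption for illustration:

```python
# Rough arithmetic behind the heat pump scale-up, using figures quoted above.
installed_2024 = 98_000        # heat pumps installed in 2024
target_per_year = 600_000      # government target for annual installations by 2028

ratio = target_per_year / installed_2024
print(f"Deployment must rise to {ratio:.1f}x the current rate")  # ~6.1x

# If the increase were spread evenly over three years (an assumption),
# the implied compound annual growth rate would be:
years = 3
annual_growth = ratio ** (1 / years) - 1
print(f"Implied annual growth: {annual_growth:.0%}")  # ~83% per year
```

Sustaining growth of that order for three consecutive years is far beyond the 56% rise seen in 2024, which is why the CCC flags heat pumps as a major risk.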
The triggering of self-propelling change depends on the relative strength of feedbacks that either resist change (damping or negative feedback) or drive it forward (positive feedback).
One important negative feedback highlighted by the CCC is the UK’s high electricity-to-gas price ratio, which increases the running costs of a heat pump on top of the high upfront cost of buying and installing one. Addressing this issue has been at the top of the CCC’s policy recommendations for the last two years.
One positive feedback that needs to be strengthened is the perception among installers of household demand for heat pumps. When installers perceive demand, they are more likely to invest in the training and certifications needed to meet it.
The CCC suggests two ways the government could encourage installer confidence: extending the boiler upgrade scheme (which provides grants to households to install heat pumps) and the clean heat mechanism (which obliges manufacturers and installers to prioritise heat pumps); and reinstating the 2035 phase-out rule for new fossil fuel boilers.
An understanding of positive tipping points helps us identify key leverage points where intervention can be most effective in tackling the remaining half of the UK’s emissions. When implemented as part of a coherent national strategy, positive change can be accomplished at the pace and scale required. There is no time to lose.
Kai Greenlees receives funding from the Economic Social Research Council, through the South West Doctoral Training Partnership.
Steven R. Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The US Food and Drug Administration (FDA) has issued an urgent warning about tianeptine – a substance marketed as a dietary supplement but known on the street as “gas station heroin”.
Linked to overdoses and deaths, it is being sold in petrol stations, smoke shops and online retailers, despite never being approved for medical use in the US.
But what exactly is tianeptine, and why is it causing alarm?
Structurally, it resembles tricyclic antidepressants – an older class of antidepressant – but pharmacologically it behaves very differently. Unlike conventional antidepressants, which typically increase serotonin levels, tianeptine appears to act on the brain’s glutamate system, which is involved in learning and memory.
It is used as a prescription drug in some European, Asian and Latin American countries under brand names like Stablon or Coaxil. But researchers later discovered something unusual: tianeptine also activates the brain’s mu-opioid receptors, the same receptors targeted by morphine and heroin – hence its nickname, “gas station heroin”.
As a prescription drug, tianeptine is sold under various brand names, including Stablon. Wikimedia Commons
At prescribed doses, the effect is subtle, but in large amounts, tianeptine can trigger euphoria, sedation and eventually dependence. People chasing a high might take doses far beyond anything recommended in medical settings.
Despite never being approved by the FDA, the drug is sold in the US as a “wellness” product or nootropic – a substance supposedly used to enhance mood or mental clarity. It’s packaged as capsules, powders or liquids, often misleadingly labelled as dietary supplements.
This loophole has enabled companies to circumvent regulation. Products like Neptune’s Fix have been promoted as safe and legal alternatives to traditional medications, despite lacking any clinical oversight and often containing unlisted or dangerous ingredients.
Some samples have even been found to contain synthetic cannabinoids and other drugs. According to US poison control data, calls related to tianeptine exposure rose by over 500% between 2018 and 2023. In 2024 alone, the drug was involved in more than 300 poisoning cases. The FDA’s latest advisory included product recalls and import warnings.
Users have taken to Reddit, including a dedicated subreddit, and other forums to describe their experiences, both the highs and the grim withdrawals. Some report taking hundreds of pills a day. Others struggle to quit, describing cravings and relapses that mirror those seen with classic opioid addiction.
Since tianeptine doesn’t show up in standard toxicology screenings, health professionals may not recognise it. According to doctors in North America, it could be present in hospital patients without being detected, particularly in cases involving seizures or unusual heart symptoms.
It can be bought online from overseas vendors, and a quick search reveals dozens of sellers offering “research-grade” powder and capsules.
There is little evidence that tianeptine is circulating widely in the UK; to date, just one confirmed sample has been publicly recorded in a national drug testing database. It’s not mentioned in recent Home Office or Advisory Council on the Misuse of Drugs briefings, and it does not appear in official crime or hospital statistics.
But that may simply reflect the fact that no one is looking for it. Without testing protocols in place, it could be present, just unrecorded.
Because of its chemical structure and unusual effects, if tianeptine did show up in a UK emergency department, it could easily be mistaken for a tricyclic antidepressant overdose, or even dismissed as recreational drug use. This makes it harder to diagnose and treat appropriately.
Could tianeptine catch on in the UK? It’s possible, particularly among people seeking alternatives to harder-to-access opioids, or those looking for a legal high. With its low visibility, online availability and potential for addiction, tianeptine ticks many of the same boxes that once made drugs like mephedrone or spice popular before they were banned.
The UK has seen waves of novel psychoactive substances emerge through similar routes, first appearing online or in head shops, then spreading quietly until authorities responded. If tianeptine follows the same path, by the time it appears on the radar, harm may already be underway.
Michelle Sahai does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
We all need to learn how to place trust in others. It’s easy to be misled. Someone who doesn’t deserve trust can appear a lot like someone who does – and part of growing up in a society is developing the ability to tell the difference.
An important part of this is learning about the signals people give about themselves. These might be a smile, a style of dressing or a way of speaking. In particular, we use accents to make decisions about others – especially in the UK.
But what if people adapt or change their accents to fit into a certain social group or geographical area? Our past research has shown that native speakers are pretty good at spotting such speech. We’ve now published a follow-up study that supports and further strengthens our original results.
We associate accents with places, classes and groups. Research shows that even infants use accents to determine whether they think someone is considered trustworthy. This can be a problem – studies have demonstrated that accents can affect someone’s odds of getting a job – and potentially the likelihood of being found guilty of a crime.
As with most topics in the social sciences, evolutionary theory has a lot to say about this process. Scientists are interested in understanding how people send and receive signals like accents, how those signals affect relationships between people and how, in turn, those relationships affect us.
But because accents can affect how we treat each other, we’d expect some people to try to change them for personal gain. A social chameleon who can pretend to be a member of any social class or group is likely to win trust within each – assuming they are not caught.
If that’s true, though, then we’d expect people to also be good at detecting when someone is “faking” it – what we call mimicry – setting up a kind of arms race between those who want to deceive us into trusting them and those who try to catch deceivers out.
Over the last few years, we’ve looked into how well people detect accent mimicry. Last year we found that, generally speaking, people in the UK and Ireland are good at this, detecting mimicked accents better than we’d expect by chance alone.
What was more interesting, though, was that native listeners from the specific places of the imitated accent – Belfast, Glasgow and Dublin – were a lot better at this task than were non-natives or native listeners from further away in the UK, like Essex.
Beyond the UK
Our new findings went further, though. Of the roughly 2,000 people that participated, more than 1,500 were this time based in English-speaking countries outside the UK, including the US, Canada and Australia. And on average, this group did a lot worse at detecting mimicked accents from seven different regions in the UK and Ireland than did people from the UK.
In fact, people from places other than the UK barely did better than we’d expect by chance, while native listeners were right between about two-thirds and three-quarters of the time.
As we argued in our original article, we believe it’s local cultural tensions – tribalism, classism or even warfare – that explain the differences. For example, as someone commented to me some time ago, people living in Belfast in the 1970s and 80s – a time of huge political tension – needed to be attuned to the accents of those around them. Hearing something off, like an out-group member’s accent, could signal an imminent threat.
This wouldn’t have put the same pressures on people living in more peaceful regions. In fact, we found that people living in large, multicultural and largely peaceful areas, such as London, didn’t need to pay much attention to the accents of those around them and were worse at detecting mimicked accents.
The further you move out from the native accent, too, the less likely a listener is to place emphasis on or notice anything wrong with a local accent. Someone living in the US is likely to pay even less attention to an imitation Belfast accent than is someone living in London, and accordingly will be worse at detecting mimicry. Likewise, someone growing up in Australia would be better at spotting a mimicked Australian accent than a Brit.
So while accents, and our ability to detect differences in accents, probably evolved to help us place trust more effectively at a broad level, it’s the cultural environment that shapes that process at the local level.
Together, this has the unfortunate effect that we sometimes place a lot more emphasis on accents than we should. How someone speaks should be a lot less important than what is said.
Still, accents drive how people treat each other at every level of society, just as other signals – be they tattoos, smiles or clothes – tell us something about another person’s background or heritage.
Learning how these processes work and why they evolved is critical for overcoming them – and helping us to override the biases that so often prevent us from placing trust in people who deserve it.
Jonathan R. Goodman receives funding from the Wellcome Trust (grant no. 220540/Z/20/A).
Hearing improvements were both rapid and significant after patients received the gene therapy we developed. Nina Lishchuk/Shutterstock
Up to three in every 1,000 newborns have hearing loss in one or both ears. While cochlear implants offer remarkable hope for these children, they require invasive surgery and cannot fully replicate the nuance of natural hearing.
But recent research my colleagues and I conducted has shown that a form of gene therapy can successfully restore hearing in toddlers and young adults born with congenital deafness.
Our research focused specifically on toddlers and young adults born with OTOF-related deafness. This condition is caused by mutations in the OTOF gene, which produces otoferlin – a protein critical for hearing.
The protein transmits auditory signals from the inner ear to the brain. When this gene is mutated, that transmission breaks down leading to profound hearing loss from birth.
Get your news from actual experts, straight to your inbox.Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.
Unlike other types of genetic deafness, people with OTOF mutations have healthy hearing structures in their inner ear – the problem is simply that one crucial gene isn’t working properly. This makes it an ideal candidate for gene therapy: if you can fix the faulty gene, the existing healthy structures should be able to restore hearing.
In our study, we used a modified virus as a delivery system to carry a working copy of the OTOF gene directly into the inner ear’s hearing cells. The virus acts like a molecular courier, delivering the genetic fix exactly where it’s needed.
The modified viruses do this by first attaching themselves to the hair cell’s surface, then convincing the cell to swallow them whole. Once inside, they hitch a ride on the cell’s natural transport system all the way to its control centre (the nucleus). There, they finally release the genetic instructions for otoferlin to the auditory neurons.
Our team had previously conducted studies in primates and young children (five- and eight-year-olds) which confirmed the virus therapy was safe. We were also able to illustrate the therapy’s potential to restore hearing – sometimes to near-normal levels.
But key questions had remained about whether the therapy could work in older patients – and what age is optimal for patients to receive the treatment.
To answer these questions, we expanded our clinical trial across five hospitals, enrolling ten participants aged one to 24 years. All were diagnosed with OTOF-related deafness. The virus therapy was injected into the inner ears of each participant.
We closely monitored safety during the 12 months of the study through ear examinations and blood tests. Hearing improvements were measured using both objective brainstem response tests and behavioural hearing assessments.
In the brainstem response tests, patients heard rapid clicking sounds or short beeps of different pitches while sensors measured the brain’s automatic electrical response. In another test, patients heard constant, steady tones at different pitches while a computer analysed brainwaves to see if they automatically followed the rhythm of these sounds.
The therapy used a synthetic version of a virus to deliver a functional gene to the inner ear. Kateryna Kon/Shutterstock
For the behavioural hearing assessment, patients wore headphones and listened to faint beeps at different pitches. They pressed a button or raised their hand each time they heard a beep – no matter how faint.
Hearing improvements were both rapid and significant – especially in younger participants. Within the first month of treatment, the average total hearing improvement reached 62% on the objective brainstem response tests and 78% on the behavioural hearing assessments. Two participants achieved near-normal speech perception. The parent of one seven-year-old participant said her child could hear sounds just three days after treatment.
Over the 12-month study period, ten patients experienced very mild to moderate side-effects. The most common adverse effect was a decrease in white blood cells. Crucially, no serious adverse events were observed. This confirmed the favourable safety profile of this virus-based gene therapy.
Treating genetic deafness
This is the first time such results have been achieved in both adolescent and adult patients with OTOF-related deafness.
The findings also reveal important insights into the ideal window for treatment, with children between the ages of five and eight showing the most pronounced benefit.
While younger children and older participants also showed improvement, their recovery was less dramatic. The weaker response in the youngest children is counterintuitive: preserved inner-ear integrity and function at early ages should theoretically predict a better response to gene therapy. These findings suggest the brain’s ability to process newly restored sounds may vary at different ages, for reasons that are not yet understood.
This trial is a milestone. By bridging the gap between animal and human studies and diverse patients of different ages, we’re entering a new era in the treatment of genetic deafness. Although questions still remain about how long the effects of this therapy last, as gene therapy continues to advance, the possibility of curing – not just managing – genetic hearing loss is becoming a reality.
OTOF-related deafness is just the beginning. We, along with other research teams, are working on developing therapies that target other, more common genes that are linked to hearing loss. These are more complex to treat, but animal studies have yielded promising results. We’re optimistic that in the future, gene therapy will be available for many different types of genetic deafness.
Maoli Duan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Motorcycle-taxis are one of the fastest and most convenient ways to get around Uganda’s congested capital, Kampala. But they are also the most dangerous. Though they account for one-third of public transport trips taking place within the city, police reports suggest motorcycles were involved in 80% of all road-crash deaths registered in Kampala in 2023.
Promising to solve the safety problem while also improving the livelihoods of moto-taxi workers, digital ride-hail platforms emerged a decade ago on the city’s streets. It is no coincidence that Uganda’s ride-hailing pioneer and long-time market leader goes by the name of SafeBoda.
Conceived in 2014 as a “market-based approach to road safety”, the idea is to give riders a financial incentive to drive safely by making digital moto-taxi work pay better. SafeBoda claimed at the time that motorcyclists who signed up with it would increase their incomes by up to 50% relative to the traditional mode of operation, in which riders park at strategic locations called “stages” and wait for passengers.
In the years since, the efforts of SafeBoda and its ride-hail competitors to bring safety to the sector have largely been deemed a success. One study carried out in 2017 found that digital riders were more likely to wear a helmet and less likely to drive towards oncoming traffic. Early press coverage was particularly glowing, while recent academic studies continue to cite the Kampala case as evidence that ride-hailing platforms may hold the key to making African moto-taxi sectors a safer place to work and travel.
Is it all as clear-cut as this? In a new paper based on my PhD research, I suggest not, because at its core the ride-hail model – in which riders are classified as independent contractors who do poorly paid “gig work” rather than as wage-earning employees – undermines its own safety ambitions.
Speed traps
In my study of Kampala’s vast moto-taxi industry – estimated to employ hundreds of thousands of people – I draw on 112 in-depth interviews and a survey of 370 moto-taxi riders to examine how livelihoods and working conditions have been affected by the arrival of the platforms.
To date, there has been only limited critical engagement with how this change has played out over the past decade. I wanted to get beneath the big corporate claims and alluring platform promises to understand how riders themselves had experienced the digital “transformation” of their industry, several years after it first began.
One of the things I found was that, from a safety perspective, the ride-hail model represents a paradox. We can think of it as a kind of “speed trap”.
On one hand, ride-hail platforms try to moderate moto-taxi speeds and behaviours through managerial techniques. They make helmet use compulsory. They put riders through road safety training before letting them out onto the streets. And they enforce a professional “code of conduct” for riders.
In some cases, companies also deploy “field agents” to major road intersections around the city. Their task is to monitor the behaviour of riders in company uniform and, should they be spotted breaking the rules, discipline them.
On the other hand, however, the underlying economic structure of digital ride-hailing pulls transport workers in the opposite direction by systematically depressing trip fares and rewarding speed.
Under the “gig economy” model used by Uganda’s ride-hail platforms, the livelihood promise lies not in the offer of a guaranteed wage but in the possibility of higher earnings. Crucially, it is a promise that only materialises if riders can reach and maintain a faster, harder work-rate throughout the day – completing enough jobs that pay “little money”, as one rider put it, to make the gig-work deal come good. Or, as summed up by another interviewee:
We are like stakeholders, I can say that. No basic salary, just commission. So it depends on your speed.
And yet, it is precisely these factors that routinely lead to road traffic accidents. Extensive research from across east Africa has shown that motorcycle crashes are strongly associated with financial pressure and the practices that flow directly from it, such as speeding, working long hours and performing high-risk manoeuvres. All are driven by the need to break even each day in a hyper-competitive informal labour market, with riders compelled to go fast by the raw economics of their work.
Deepening the pressure
Ride-hail platforms may not be the reason these circumstances exist in the first place. But the point is that they do not mark a departure from them.
If anything, my research suggests they may be making things worse. According to the survey data, riders working through the apps make on average 12% higher gross earnings each week relative to their analogue counterparts. This is because the online world gets them more jobs.
But to stay connected to that world they must shoulder higher operating costs: mobile data (to remain logged on); fuel (to perform more trips); the use of helmets and uniforms (which remain company property); and commissions extracted by the platform companies (as much as 15%-20% per trip).
As soon as these extras are factored in, the difference completely disappears. The digital rider works faster and harder – but for no extra reward.
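A back-of-the-envelope comparison shows how quickly the premium can evaporate. The 12% gross figure comes from the survey above and the commission range from earlier in the piece; the weekly baseline of 100 (in arbitrary currency units) is purely illustrative:

```python
# Illustrative only: the baseline of 100 is made up; the 12% earnings
# premium and the 15-20% commission range come from the article.
analogue_gross = 100.0
digital_gross = analogue_gross * 1.12   # 12% higher gross earnings via the apps

for commission in (0.15, 0.20):
    digital_after_commission = digital_gross * (1 - commission)
    print(f"{commission:.0%} commission -> {digital_after_commission:.1f}")

# Even before extra data, fuel and equipment costs are counted, the
# platform's cut alone can leave the digital rider at or below the
# analogue baseline.
```

On these stylised numbers, the commission by itself more than offsets the 12% premium, before any of the other extras listed above are deducted.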
But it is important to remember that these are private enterprises with a clear bottom line: to one day turn a profit. As recent reports and my own thesis show, efforts to reach that point often alienate and ultimately repel the workers on whom these platforms depend – and whose livelihoods and safety standards they claim to be transforming.
A recent investment evaluation by one of SafeBoda’s first funders perhaps puts it best: it is time to reframe ride-hailing as a “risky vehicle” for safety reform in African cities, rather than a clear road to success.
Rich received funding for this research from the UK’s Economic and Social Research Council (ESRC).
The global ecosystem of climate finance is complex, constantly changing and sometimes hard to understand. But understanding it is critical to demanding a green transition that’s just and fair. That’s why The Conversation has collaborated with climate finance experts to create this user-friendly guide, in partnership with Vogue Business. With definitions and short videos, we’ll add to this glossary as new terms emerge.
Blue bonds
Blue bonds are debt instruments designed to finance ocean-related conservation, such as protecting coral reefs or supporting sustainable fishing. They’re modelled after green bonds but focus specifically on the health of marine ecosystems, a key pillar of climate stability.
By investing in blue bonds, governments and private investors can fund marine projects that deliver both environmental benefits and long-term financial returns. Seychelles issued the first blue bond in 2018. Now, more are emerging as ocean conservation becomes a greater priority for global sustainability efforts.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Carbon border adjustment mechanism
Did you know that imported steel could soon face a carbon tax at the EU border? That’s because the carbon border adjustment mechanism is about to shake up the way we trade, produce and price carbon.
The carbon border adjustment mechanism is a proposed EU policy to put a carbon price on imports like iron, cement, fertiliser, aluminium and electricity. If a product is made in a country with weaker climate policies, the importer must pay the difference between that country’s carbon price and the EU’s. The goal is to avoid “carbon leakage” – when companies relocate to avoid emissions rules – and to ensure fair competition on climate action.
But this mechanism is more than just a tariff tool. It’s a bold attempt to reshape global trade. Countries exporting to the EU may be pushed to adopt greener manufacturing or face higher tariffs.
The carbon border adjustment mechanism is controversial: some call it climate protectionism, others argue it could incentivise low-carbon innovation worldwide and be vital for achieving climate justice. Many developing nations worry it could penalise them unfairly unless there’s climate finance to support greener transitions.
The carbon border adjustment mechanism is still evolving, but it’s already forcing companies, investors and governments to rethink emissions accounting, supply chains and competitiveness. It’s a carbon price with global consequences.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Carbon budget
The Paris agreement aims to limit global warming to 1.5°C above pre-industrial levels. The carbon budget is the maximum amount of CO₂ emissions allowed if we want a 67% chance of staying within this limit. The Intergovernmental Panel on Climate Change (IPCC) estimates that the remaining carbon budget amounts to 400 billion tonnes of CO₂ from 2020 onwards.
Think of the carbon budget as a climate allowance. Once it has been spent, the risk of extreme weather or sea level rise increases sharply. If emissions continue unchecked, the budget will be exhausted within years, risking severe climate consequences. The IPCC sets the global carbon budget based on climate science, and governments use this framework to set national emission targets, climate policies and pathways to net zero emissions.
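The pace at which that allowance is being spent can be checked with back-of-the-envelope arithmetic. The figure of roughly 40 billion tonnes of CO₂ emitted globally per year is an approximation used here purely for illustration:

```python
# Back-of-the-envelope: how quickly a 400bn-tonne carbon budget runs out.
# The ~40bn tonnes/year emissions rate is an approximate round figure
# used for illustration only.

remaining_budget_gt = 400   # Gt CO2 remaining from 2020 (IPCC estimate)
annual_emissions_gt = 40    # assumed current global CO2 emissions per year

years_left = remaining_budget_gt / annual_emissions_gt
print(years_left)  # 10.0 - roughly a decade at a constant emissions rate
```

At anything like current rates, in other words, the whole budget is gone within about a decade of 2020 unless emissions fall.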
By Dongna Zhang, assistant professor in economics and finance, Northumbria University
Carbon credits
Carbon credits are like permits that allow companies to release a certain amount of carbon into the air. One credit usually equals one tonne of CO₂. These credits are issued by the local government or another authorised body and can be bought and sold. Think of it like a budget allowance for pollution. It encourages cuts in carbon emissions each year to stay within those global climate targets.
The aim is to put a price on carbon to encourage cuts in emissions. If a company reduces its emissions and has leftover credits, it can sell them to another company that is going over its limit. But there are issues. Some argue that carbon credit schemes allow polluters to pay their way out of real change, and not all credits are from trustworthy projects. Although carbon credits can play a role in addressing the climate crisis, they are not a solution on their own.
By Sankar Sivarajah, professor of circular economy, Kingston University London
Carbon credits explained.
Carbon offsetting
Carbon offsetting is a way for people or organisations to make up for the carbon emissions they are responsible for. For example, if you contribute to emissions by flying, driving or making goods, you can help balance that out by supporting projects that reduce emissions elsewhere. This might include planting trees (which absorb carbon dioxide) or building wind farms to produce renewable energy.
The idea is that your support helps cancel out the damage you are doing. For example, if your flight creates one tonne of carbon dioxide, you pay to support a project that removes the same amount.
While this sounds like a win-win, carbon offsetting is not perfect. Some argue that it lets people feel better without really changing their behaviour, a phenomenon sometimes referred to as greenwashing.
Not all projects are effective or well managed. For instance, some tree planting initiatives might have taken place anyway, even without the offset funding, rendering your contribution inconsequential. Others might plant non-native trees in areas where they are unlikely to reach their potential in terms of absorbing carbon emissions.
So, offsetting can help, but it is no magic fix. It works best alongside real efforts to reduce greenhouse gas emissions and encourage low-carbon lifestyles or supply chains.
By Sankar Sivarajah, professor of circular economy, Kingston University London
Carbon offsetting explained.
Carbon tax
A carbon tax is designed to reduce greenhouse gas emissions by placing a direct price on CO₂ and other greenhouse gases.
A carbon tax is grounded in the concept of the social cost of carbon. This is an estimate of the economic damage caused by emitting one tonne of CO₂, including climate-related health, infrastructure and ecosystem impacts.
A carbon tax is typically levied per tonne of CO₂ emitted. The tax can be applied either upstream (on fossil fuel producers) or downstream (on consumers or power generators). This makes carbon-intensive activities more expensive and incentivises nations, businesses and people to reduce their emissions, while untaxed renewable energy becomes more competitively priced and appealing.
A carbon tax was first introduced by Finland in 1990. Since then, more than 39 jurisdictions have implemented similar schemes. According to the World Bank, carbon pricing mechanisms (that’s both carbon taxes and emissions trading systems) now cover about 24% of global emissions. The remaining 76% are not priced, mainly due to limited coverage in both sectors and geographical areas, plus persistent fossil fuel subsidies. Expanding coverage would require extending carbon pricing to sectors like agriculture and transport, phasing out fossil fuel subsidies and strengthening international governance.
What is carbon tax?
Sweden has one of the world’s highest carbon tax rates and has cut emissions by 33% since 1990 while maintaining economic growth. The policy worked because Sweden started early, applied the tax across many industries and maintained clear, consistent communication that kept the public on board.
Canada introduced a national carbon tax in 2019. In Canada, most of the revenue from carbon taxes is returned directly to households through annual rebates, making the scheme revenue-neutral for most families. However, despite its economic logic, inflation and rising fuel prices led to public discontent – especially as many citizens were unaware they were receiving rebates.
Carbon taxes face challenges including political resistance, fairness concerns and low public awareness. Their success depends on clear communication and visible reinvestment of revenues into climate or social goals. A 2025 study that surveyed 40,000 people in 20 countries found that support for carbon taxes increases significantly when revenues are used for environmental infrastructure, rather than returned through tax rebates.
By Meilan Yan, associate professor and senior lecturer in financial economics, Loughborough University
Climate resilience
Floods, wildfires, heatwaves and rising seas are pushing our cities, towns and neighbourhoods to their limits. But there’s a powerful idea that’s helping cities fight back: climate resilience.
Resilience refers to the ability of a system – such as a city, a community or even an ecosystem – to anticipate, prepare for, respond to and recover from climate-related shocks and stresses.
Sometimes people say resilience is about bouncing back. But it’s not just about surviving the next storm. It’s about adapting, evolving and thriving in a changing world.
Resilience means building smarter and better. It means designing homes that stay cool during heatwaves. Roads that don’t wash away in floods. Power grids that don’t fail when the weather turns extreme.
It’s also about people. A truly resilient city protects its most vulnerable. It ensures that everyone – regardless of income, age or background – can weather the storm.
And resilience isn’t just reactive. It’s about using science, local knowledge and innovation to reduce a risk before disaster strikes. From restoring wetlands to cool cities and absorb floods, to creating early warning systems for heatwaves, climate resilience is about weaving strength into the very fabric of our cities.
By Paul O’Hare, senior lecturer in geography and development, Manchester Metropolitan University
The meaning of climate resilience.
Climate risk disclosure
Climate risk disclosure refers to how companies report the risks they face from climate change, such as flood damage, supply chain disruptions or regulatory costs. It includes both physical risks (like storms) and transition risks (like changing laws or consumer preferences).
Mandatory disclosures, such as those proposed by the UK and EU, aim to make climate-related risks transparent to investors. Done well, these reports can shape capital flows toward more sustainable business models. Done poorly, they become greenwashing tools.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Emissions trading scheme
An emissions trading scheme is the primary market-based approach for regulating greenhouse gas emissions in many countries, including Australia, Canada, China and Mexico.
Part of a government’s job is to decide how much of the economy’s carbon emissions it wants to avoid in order to fight climate change. It must put a cap on carbon emissions that economic production is not allowed to surpass. Preferably, the polluters (that’s the manufacturers and fossil fuel companies) should be the ones paying for the cost of climate mitigation.
Regulators could simply tell all the firms how much they are allowed to emit over the next ten years or so. But giving every firm the same allowance across the board is not cost efficient, because avoiding carbon emissions is much harder for some firms (such as steel producers) than others (such as tax consultants). Since governments cannot know each firm’s specific cost profile, they can’t customise the allowances. Also, monitoring whether polluters actually abide by their assigned limits is extremely costly.
An emissions trading scheme cleverly solves this dilemma using the cap-and-trade mechanism. Instead of assigning each polluter a fixed quota and risking inefficiencies, the government issues a large number of tradable permits – each worth, say, a tonne of CO₂-equivalent (CO₂e) – that sum up to the cap. Firms that can cut greenhouse gas emissions relatively cheaply can then trade their surplus permits to those who find it harder – at a price that makes both better off.
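This logic can be made concrete with two invented firms. Every number below is hypothetical – the point is only that trading lets the cheap abater cut more, the expensive abater cut less, and the same cap be met at lower total cost:

```python
# Illustrative cap-and-trade arithmetic with two hypothetical firms.
# Each would emit 70t CO2e unabated and receives 50 permits, so together
# they must abate 40t to stay under the 100t cap.

abatement_cost = {"steel_maker": 80, "consultancy": 20}  # cost per tonne

# Without trading: each firm abates its own 20t shortfall.
cost_without_trade = (20 * abatement_cost["steel_maker"]
                      + 20 * abatement_cost["consultancy"])  # 1600 + 400

# With trading: the consultancy (cheap abater) cuts the full 40t and sells
# its 20 spare permits to the steel maker at 50 per permit - a price
# between the two firms' abatement costs, so both come out ahead.
permit_price = 50
consultancy_net = 40 * abatement_cost["consultancy"] - 20 * permit_price
steel_net = 20 * permit_price  # buys permits instead of abating

print(cost_without_trade)           # 2000
print(consultancy_net + steel_net)  # 800: same cap, lower total cost
```

Note that the permit payments cancel out between the two firms, so the system-wide saving comes entirely from shifting abatement to where it is cheapest – the consultancy even turns a profit on its spare permits, while the steel maker pays less than it would to abate itself.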
By Mathias Weidinger, environmental economist, University of Oxford
Emissions trading schemes, explained by climate finance expert Mathias Weidinger.
Environmental, social and governance (ESG) investing
ESG investing stands for environmental, social and governance investing. In simple terms, these are a set of standards that investors use to screen a company’s potential investments.
ESG means choosing to invest in companies that are not only profitable but also responsible. Investors use ESG metrics to assess risks (such as climate liability, labour practices) and align portfolios with sustainability goals by looking at how a company affects our planet and treats its people and communities. While there isn’t one single global body governing ESG, various organisations, ratings agencies and governments all contribute to setting and evolving these metrics.
For example, investing in a company committed to renewable energy and fair labour practices might be considered “ESG aligned”. Supporters believe ESG helps identify risks and create long-term value. Critics argue it can be vague or used for greenwashing, where companies appear sustainable without real action. ESG works best when paired with transparency and clear data. A barrier is that standards vary, and it’s not always clear what counts as ESG.
Why do financial companies and institutions care? Issues like climate change and nature loss pose significant risks, affecting company values and the global economy.
However, gathering reliable ESG information can be difficult. Companies often self-report, and the data isn’t always standardised or up to date. Researchers – including my team at the University of Oxford – are using geospatial data, like satellite imagery and artificial intelligence, to develop global databases for high-impact industries, across all major sectors and geographies, and independently assess environmental and social risks and impacts.
For instance, we can analyse satellite images of a facility over time to monitor its emissions effect on nature and biodiversity, or assess deforestation linked to a company’s supply chain. This allows us to map supply chains, identify high-impact assets, and detect hidden risks and opportunities in key industries, providing an objective, real-time look at their environmental footprint.
The goal is for this to improve ESG ratings and provide clearer, more consistent insights for investors. This approach could help us overcome current data limitations to build a more sustainable financial future.
By Amani Maalouf, senior researcher in spatial finance, University of Oxford
Environmental, social and governance investing explained.
Financed emissions
Financed emissions are the greenhouse gas emissions linked to a bank’s or investor’s lending and investment portfolio, rather than their own operations. For example, a bank that funds a coal mine or invests in fossil fuels is indirectly responsible for the carbon those activities produce.
Measuring financed emissions helps reveal the real climate impact of financial institutions, not just their office energy use. It’s a cornerstone of climate accountability in finance and is becoming essential under net zero pledges.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Green bonds
Green bonds are debt instruments issued to fund environmentally beneficial projects, such as energy-efficient buildings or clean transportation. Investors choose them to support climate solutions while earning returns.
Green bonds are a major tool to finance the shift to a low-carbon economy by directing finance toward climate solutions. As climate costs rise, green bonds could help close the funding gap while ensuring transparency and accountability.
Green bond issuers are required to ensure funds are spent as promised. For instance, imagine a city wants to upgrade its public transportation by adding electric buses to reduce pollution. Instead of raising taxes or slashing other budgets, the city can issue green bonds to raise the necessary capital. Investors buy the bonds, the city gets the funding, and the environment benefits from cleaner air and fewer emissions.
The growing participation of government issuers has improved the transparency and reliability of these investments. The green bond market has grown rapidly in recent years. According to the Bank for International Settlements, the green bond market reached US$2.9 trillion (£2.1 trillion) in 2024 – nearly six times larger than in 2018. At the same time, annual issuance (the total value of green bonds issued in a year) hit US$700 billion, highlighting the increasing role of green finance in tackling climate change.
By Dongna Zhang, assistant professor in economics and finance, Northumbria University
Just transition
Just transition is the process of moving to a low-carbon society that is environmentally sustainable and socially inclusive. In a broad sense, a just transition means focusing on creating a more fair and equal society.
Just transition has existed as a concept since the 1970s. It was originally applied to the green energy transition, protecting workers in the fossil fuel industry as we move towards more sustainable alternatives.
These days, so many overlapping issues of justice are bundled within it that the concept is hard to define. Even at the level of UN climate negotiations, global leaders struggle to agree on what a just transition means.
The big battle is between developed countries, who want a very restrictive definition around jobs and skills, and developing countries, who are looking for a much more holistic approach that considers wider system change and includes considerations around human rights, Indigenous people and creating an overall fairer global society.
A just transition is essentially about imagining a future where we have moved beyond fossil fuels and society works better for everyone – but that can look very different in a European city compared to a rural setting in south-east Asia.
For example, in a British city it might mean fewer cars and better public transport. In a rural setting, it might mean new ways of growing crops that are more sustainable, and building homes that are heatwave resistant.
By Alix Dietzel, climate justice and climate policy expert, University of Bristol
The meaning of just transition.
Loss and damage
A global loss and damage fund was agreed by nations at the UN climate summit (Cop27) in 2022. This means that the rich countries of the world put money into a fund that the least developed countries can then call upon when they have a climate emergency.
At the moment, the loss and damage fund is made up of relatively small pots of money. Much more will be needed to provide relief to those who need it most now and in the future.
By Mark Maslin, professor of earth system science, UCL
Mark Maslin explains loss and damage.
Mitigation v adaptation
Mitigation means cutting greenhouse gas emissions to slow climate change. Adaptation means adjusting to its effects, like building sea walls or growing heat-resistant crops. Both are essential: mitigation tackles the cause, while adaptation tackles the symptoms.
Globally, most funding goes to mitigation, but vulnerable communities often need adaptation support most. Balancing the two is a major challenge in climate policy, especially for developing countries facing immediate climate threats.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Nationally determined contributions
Nationally determined contributions (NDCs) are at the heart of the Paris agreement, the global effort to collectively combat climate change. NDCs are individual climate action plans created by each country. These targets and strategies outline how a country will reduce its greenhouse gas emissions and adapt to climate change.
Each nation sets its own goals based on its own circumstances and capabilities – there’s no standard NDC. These plans should be updated every five years and countries are encouraged to gradually increase their climate ambitions over time.
The aim is for NDCs to drive real action by guiding policies, attracting investment and inspiring innovation in clean technologies. But current NDCs fall short of the Paris agreement goals and many countries struggle to turn their plans into a reality. NDCs also vary widely in scope and detail so it’s hard to compare efforts across the board. Stronger international collaboration and greater accountability will be crucial.
By Doug Specht, reader in cultural geography and communication, University of Westminster
Natural capital
Fashion depends on water, soil and biodiversity – all natural capital. And forward-thinking designers are now asking: how do we create rather than deplete, how do we restore rather than extract?
Natural capital is the value assigned to the stock of forests, soils, oceans and even minerals such as lithium. It sustains every part of our economy. It’s the bees that pollinate our crops. It’s the wetlands that filter our water and it’s the trees that store carbon and cool our cities.
If we fail to value nature properly, we risk losing it. But if we succeed, we unlock a future that is not only sustainable but also truly regenerative.
My team at the University of Oxford is developing tools to integrate nature into national balance sheets, advising governments on biodiversity, and we’re helping industries from fashion to finance embed nature into their decision making.
Natural capital, explained by a climate finance expert.
By Mette Morsing, professor of business sustainability and director of the Smith School of Enterprise and the Environment, University of Oxford
Net zero
Reaching net zero means reducing the amount of additional greenhouse gas emissions that accumulate in the atmosphere to zero. This concept was popularised by the Paris agreement, a landmark deal that was agreed at the UN climate summit (Cop21) in 2015 to limit the impact of greenhouse gas emissions.
There are some emissions – from farming and aviation, for example – for which reaching absolute zero will be very difficult, if not impossible. Hence, the “net”. This allows people, businesses and countries to find ways to suck greenhouse gases out of the atmosphere, effectively cancelling out emissions while trying to reduce them. This can include reforestation, rewilding, direct air capture and carbon capture and storage. The goal is to reach net zero: the point at which no extra greenhouse gases accumulate in Earth’s atmosphere.
By Mark Maslin, professor of earth system science, UCL
Mark Maslin explains net zero.
For more expert explainer videos, visit The Conversation’s quick climate dictionary playlist here on YouTube.
Mark Maslin is Pro-Vice Provost of the UCL Climate Crisis Grand Challenge and Founding Director of the UCL Centre for Sustainable Aviation. He was co-director of the London NERC Doctoral Training Partnership and is a member of the Climate Crisis Advisory Group. He is an advisor to Sheep Included Ltd, Lansons, NetZeroNow and has advised the UK Parliament. He has received grant funding from the NERC, EPSRC, ESRC, DFG, Royal Society, DIFD, BEIS, DECC, FCO, Innovate UK, Carbon Trust, UK Space Agency, European Space Agency, Research England, Wellcome Trust, Leverhulme Trust, CIFF, Sprint2020, and British Council. He has received funding from the BBC, Lancet, Laithwaites, Seventh Generation, Channel 4, JLT Re, WWF, Hermes, CAFOD, HP and Royal Institute of Chartered Surveyors.
Amani Maalouf receives funding from IKEA Foundation and UK Research and Innovation (NE/V017756/1).
Narmin Nahidi is affiliated with several academic associations, including the Financial Management Association (FMA), British Accounting and Finance Association (BAFA), American Finance Association (AFA), and the Chartered Association of Business Schools (CMBE). These affiliations do not influence the content of this article.
Paul O’Hare receives funding from the UK’s Natural Environment Research Council (NERC). Award reference NE/V010174/1.
Alix Dietzel, Dongna Zhang, Doug Specht, Mathias Weidinger, Meilan Yan, and Sankar Sivarajah do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Milan Kundera opens his novel The Book of Laughter and Forgetting with a scene from the winter of 1948. Klement Gottwald, leader of the Communist Party of Czechoslovakia, is giving a speech to the masses from a palace balcony, surrounded by fellow party members. Comrade Vladimir Clementis thoughtfully places his fur hat on Gottwald’s bare head; the hat then features in an iconic photograph.
Four years later, Clementis is found guilty of being a bourgeois nationalist and hanged. His ashes are strewn on a Prague street. The propaganda section of the party removes him from written history and erases him from the photograph.
“Nothing remains of Clementis,” writes Kundera, “but the fur hat on Gottwald’s head.”
Review: Memory Lane: The Perfectly Imperfect Ways We Remember – Ciara Greene & Gillian Murphy (Princeton University Press)
Efforts to enforce political forgetting are often associated with totalitarian regimes. The state endeavours to control not only its citizens, but also the past. To create a narrative that glorifies the present and idealises the future, history must be rewritten or even completely obliterated.
In a famous article on “the totalitarian ego”, the social psychologist Anthony Greenwald argued that individual selves operate in the same way. We deploy an array of cognitive biases to maintain a sense of control, and to shape and reshape our personal history. We distort the present and fabricate the past to ensure we remain the heroes of our life narratives.
Likening the individual to a destructive political system might sound extreme, but it has an element of truth. Memory Lane, a new book by Irish psychology researchers Ciara Greene and Gillian Murphy, shows how autobiographical memory has a capacity to rewrite history that is almost Stalinesque.
There is no shortage of books on memory, from self-help guides for the anxiously ageing to scholarly works of history. Memory Lane is distinctive for taking the standpoint of applied cognitive psychology. Emphasising how memory functions in everyday life, Greene and Murphy explore the processes of memory and the influences that shape them.
What memory is not
The key message of the book is that the memory system is not a recording device. We may be tempted to see memory as a vault where past experience is faithfully preserved, but in fact it is fundamentally reconstructive.
Memories are constantly revised in acts of recollection. They change in predictable ways over time, moulded by new information, our prior beliefs and current emotions, other people’s versions of events, or an interviewer’s leading questions.
According to Greene and Murphy’s preferred analogy, memory is like a Lego tower. A memory is initially constructed from a set of elements, but over time some will be lost as the structure simplifies to preserve the gist of the event. Elements may also be added as new information is incorporated and the memory is refashioned to align with the person’s beliefs and expectations.
The malleability of memory might look like a weakness, especially by comparison to digital records. Memory Lane presents it as a strength. Humans did not evolve to log objective truths for posterity, but to operate flexibly in a complex and changing world.
From an adaptive standpoint, the past only matters insofar as it helps us function in the present. Our knowledge should be updated by new information. We should assimilate experiences to already learned patterns. And we should be tuned to our social environment, rather than insulated from it.
“If all our memories existed in some kind of mental quarantine, separate from the rest of our knowledge and experiences,” the authors write, “it would be like using a slow, inefficient computer program that could only show you one file at a time, never drawing connections or updating incorrect impressions.”
Simplifying and discarding memories is also beneficial because our cognitive capacity is limited. It is better to filter out what matters from the deluge of past experiences than to be overwhelmed with irrelevancies. Greene and Murphy present the case of a woman with exceptional autobiographical memory, who is plagued by the triggering of obsolete memories.
Forgetting doesn’t merely de-clutter memory; it also serves emotional ends. Selectively deleting unpleasant memories increases happiness. Sanding off out-of-character experiences fosters a clear and stable sense of self.
“Hindsight bias” boosts this feeling of personal continuity by bringing our recollections into line with our current beliefs. Revisionist history it may be, but it is carried out in the service of personal identity.
‘Forgetting doesn’t merely de-clutter memory; it also serves emotional ends.’ Shutterstock
Eyewitness memories and misinformation
Memory Lane pays special attention to situations in which memory errors have serious consequences, such as eyewitness testimony. Innocent people can be convicted on the basis of inaccurate eyewitness identifications. An array of biases makes these more likely, and they are especially common in interracial contexts.
Recollections can also be influenced by the testimony of other witnesses, and even by the language used during questioning. In a classic study, participants who viewed videos of car accidents estimated the car’s speed as substantially faster when the cars were described as having “smashed” rather than “contacted”. These distortions are not temporary: new information overwrites and overrides the original memory.
Misinformation works in a similar way and with equally dire consequences, such as vaccination avoidance. False information not only modifies existing memories but can even produce false memories, especially when it aligns with our preexisting beliefs and ideologies.
Greene and Murphy present intriguing experimental evidence that false memories are prevalent and easy to implant. Children and older adults seem especially susceptible to misinformation, but no one is immune, regardless of education or intelligence.
Reassuringly, perhaps, digital image manipulation and deepfake videos are no more likely to induce false memories than good old-fashioned verbiage. A doctored picture may not be worth a thousand words when it comes to warping memory.
Memory Lane devotes some time to the “memory wars” of the 1980s and 1990s, when debate raged over the existence of repressed memories. Greene and Murphy argue the now mainstream view that many traumatic memories supposedly recovered in therapy were false memories induced by therapists. Memories for traumatic events are not repressed, they argue, and traumatic memories are neither qualitatively different from other memories, nor stored separately from them.
Here the science of memory runs contrary to the wildly popular claims of writers such as psychiatrist Bessel van der Kolk, author of the bestseller The Body Keeps the Score.
Psychology researchers Ciara Greene (left) and Gillian Murphy (right) want us to be humbler about our fallible memories. Princeton University Press
Misunderstanding memory
The authors of Memory Lane contend that we hold memory to unrealistic standards of accuracy, completeness and stability. When people misremember the past or change their recollections, we query their honesty or mental health. When our own memories are hazy, we worry about cognitive decline.
Greene and Murphy argue that it is in the very nature of memory to be fallible, malleable and limited. This message is heartening, but it does not clarify why we would expect memory to be more capacious, coherent and durable in the first place. Nor does it explain why we persist with this wrongheaded expectation, despite so much evidence to the contrary.
The authors hint that our mistake might have its roots in dominant metaphors of memory. If we now understand the mind as computer-like, we will see memories as digital traces that sit, silent and unchanging, in a vast storage system.
“Many of the catastrophic consequences of memory distortion arise not because our individual memories are terrible,” they argue, “but because we have unrealistic expectations about how memory works, treating it as a video camera rather than a reconstruction.”
In earlier times, when memory was likened to a telephone switchboard or to books or, for the ancient Greeks, to wax tablets, memory errors and erasures may have seemed less surprising and more tolerable.
These shifting technological analogies, explored historically in Douwe Draaisma’s Metaphors of Memory, may partly account for our extravagant expectations for memory. Expecting silicon chip performance from carbon-based organisms, who evolved to care more about adaptation than truth, would be foolish.
But there is surely more to this than metaphor. All aspects of our lives are increasingly recorded and datafied, a process that demands objectivity, accuracy and consistency. The recorded facts of the matter determine who should be rewarded, punished and regulated. The bounded and mutable nature of human memory presents a challenge to this digital regime.
Human memory is also increasingly taxed by the overwhelming and accelerating volume of information that assails us. Our frustration with its limitations reflects the desperate mismatch we feel between human nature and the impersonal systems of data in which we live.
Greene and Murphy urge us to relax. We should be humbler about our memory, and more realistic and forgiving about the memories of others. We should not be judgemental about the errors and inconsistencies of friends, or overconfident about our own recollections. And we should remember that, although memory is fallible, it is fallible in beneficial ways.
A person whose memory system always kept an accurate record of our lives would be profoundly impaired, Greene and Murphy argue. Such a person “would struggle to plan for the future, learn from the past, or respond flexibly to unexpected events”. Brimming with insights such as these, Memory Lane offers an informative and readable account of how the apparent weaknesses of human memory may be strengths in disguise.
Nick Haslam receives funding from the Australian Research Council.
Ice loss in Antarctica and its impact on the planet – sea level rise, changes to ocean currents and disturbance of wildlife and food webs – has been in the news a lot lately. All of these threats were likely on the minds of the delegates to the annual Antarctic Treaty Consultative Meeting, which finishes up today in Milan, Italy.
This meeting is where decisions are made about the continent’s future. These decisions rely on evidence from scientific research. Moreover, only countries that produce significant Antarctic research – as well as being parties to the treaty – get to have a final say in these decisions.
Our new report – published as a preprint through the University of the Arctic – shows the rate of research on the Antarctic and Southern Ocean is falling at exactly the time when it should be increasing. At the same time, research leadership is changing, with China taking the lead for the first time.
This points to a dangerous disinvestment in Antarctic research just when it is needed, alongside a changing of the guard in national influence. Antarctica and the research done there are key to everyone’s future, so it’s vital to understand what this change might lead to.
Why is Antarctic research so important?
With the Antarctic region rapidly warming, its ice shelves destabilising and sea ice shrinking, understanding the South Polar environment is more crucial than ever.
Research to understand these impacts is vital. First, knowing the impact of our actions – particularly carbon emissions – gives us an increased drive to make changes and lobby governments to do so.
Second, even when changes are already locked in, to prepare ourselves we need to know what these changes will look like.
And third, we need to understand the threats to the Antarctic and Southern Ocean environment to govern it properly. This is where the treaty comes in.
Fifty-eight countries are parties to the treaty, but only 29 of them – called consultative parties – can make binding decisions about the region. They comprise the 12 original signatories from 1959, along with 17 more recent signatory nations that produce substantial scientific research relating to Antarctica.
This makes research a key part of a nation’s influence over what happens in Antarctica.
For most of its history, the Antarctic Treaty System has functioned remarkably well. It maintained peace in the region during the Cold War, facilitated scientific cooperation, and put arguments about territorial claims on indefinite hold. It banned mining indefinitely and managed fisheries.
Because decisions are made by consensus, however, any country can effectively block progress. Russia and China – both long-term actors in the system – have been at the centre of a recent impasse.
Tracking the amount of Antarctic research being done tells us whether nations as a whole are investing enough in understanding the region and its global impact.
It also tells us which nations are investing the most and are therefore likely to have substantial influence.
Our new report examined the number of papers published on Antarctic and Southern Ocean topics from 2016 to 2024, using the Scopus database. We also looked at other factors, such as the countries affiliated with each paper.
The results show five significant changes are happening in the world of Antarctic research.
The number of Antarctic and Southern Ocean publications peaked in 2021 and then fell slightly each year through to 2024.
While the United States has for decades been the leader in Antarctic research, China overtook it in 2022.
If we look only at high-quality publications (those published in the top 25% of journals), China still overtook the US, in 2024.
Of the top six countries by overall publications (China, the US, the United Kingdom, Australia, Germany and Russia), all except China have seen their publication numbers decline since 2016.
Although collaboration in publications is higher for Antarctic research than in non-Antarctic fields, Russia, India and China have anomalously low rates of co-authorship compared with many other signatory countries.
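As a rough illustration of the counting behind findings like these (the records below are invented stand-ins; the report itself drew on the Scopus database), publications can be tallied per country and year, with internationally co-authored papers credited once to every partner country:

```python
from collections import Counter

# Hypothetical records standing in for a Scopus export: each paper has a
# publication year and the countries of its author affiliations.
papers = [
    {"year": 2021, "countries": ["China", "US"]},
    {"year": 2021, "countries": ["US"]},
    {"year": 2022, "countries": ["China"]},
    {"year": 2022, "countries": ["China", "Australia"]},
    {"year": 2022, "countries": ["US"]},
]

# Whole-count attribution: a paper counts once for every country involved,
# so co-authored papers are credited to each partner nation.
by_country_year = Counter(
    (country, paper["year"])
    for paper in papers
    for country in set(paper["countries"])
)

print(by_country_year[("China", 2022)])  # 2
print(by_country_year[("US", 2022)])     # 1
```

Under this scheme, a country's count can rise either through domestic output or through collaboration, which is why co-authorship rates matter when interpreting national rankings.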
Why is this research decline a problem?
A recent parliamentary inquiry in Australia emphasised the need for funding certainty. In the UK, a House of Commons committee report considered it “imperative for the UK to significantly expand its research efforts in Antarctica”, in particular in relation to sea level rise.
US commentators have pointed to the inadequacy of the country’s icebreaker infrastructure. The Trump administration’s recent cuts to Antarctic funding are only likely to exacerbate the situation. Meanwhile China has built a fifth station in Antarctica and announced plans for a sixth.
Given the nation’s population and global influence, China’s leadership in Antarctic research is not surprising. If China were to take a lead in Antarctic environmental protection that matched its scientific heft, its move to lead position in the research ranks could be positive. Stronger multi-country collaboration in research could also strengthen overall cooperation.
But the overall drop in global Antarctic research investment is a problem however you look at it. We ignore it at our peril.
Elizabeth Leane receives funding from the Australian Research Council, the Dutch Research Council, the Council on Australian and Latin American Relations DFAT and HX (Hurtigruten Expeditions). She has received in-kind support from Hurtigruten Expeditions in the recent past. The University of Tasmania is a member of the UArctic, which has provided support for this project.
Keith Larson is affiliated with the UArctic and European Polar Board. The UArctic paid for the development and publication of this report. The UArctic Thematic Network on Research Analytics and Bibliometrics conducted the analysis and developed the report. The Arctic Centre at Umeå University provided in-kind support for staff time on the report.
New research led by Viktoria Cologna at ETH Zurich in Switzerland may help to explain what’s going on. Using data from around the world, the study suggests simple exposure to extreme weather events does not affect people’s view of climate action – but linking those events to climate change can make a big difference.
Global opinion, global weather
The new study, published in Nature Climate Change, looked at the question of extreme weather and climate opinion using two global datasets.
The first is the Trust in Science and Science-related Populism (TISP) survey, which includes responses from more than 70,000 people in 68 countries. It measures public support for climate policies and the extent to which people think climate change is behind increases in extreme weather.
The second dataset estimates how much of each country’s population has been affected each year by events such as droughts, floods, heatwaves and storms. These estimates are based on detailed models and historical climate records.
Public support for climate policies
The survey measured public support for climate policy by asking people how much they supported five specific actions to cut carbon emissions. These included raising carbon taxes, improving public transport, using more renewable energy, protecting forests and land, and taxing carbon-heavy foods.
Responses ranged from 1 (not at all) to 3 (very much). Support was fairly strong, with an average rating of 2.37 across the five policies. It was especially high in parts of South Asia, Africa, the Americas and Oceania, but lower in countries such as Russia, Czechia and Ethiopia.
Exposure to extreme weather events
The study found most people around the world have experienced heatwaves and heavy rainfall in recent decades. Wildfires affected fewer people in many European and North American countries, but were more common in parts of Asia, Africa and Latin America.
Cyclones mostly impacted North America and Asia, while droughts affected large populations in Asia, Latin America and Africa. River flooding was widespread across most regions, except Oceania.
Do people in countries with higher exposure to extreme weather events show greater support for climate policies? This study found they don’t.
In most cases, living in a country where more people are exposed to disasters was not reflected in stronger support for climate action.
Wildfires were the only exception. Countries with more wildfire exposure showed slightly higher support, but this link disappeared once factors such as land size and overall climate belief were considered.
In short, just experiencing more disasters does not seem to translate into increased support for mitigation efforts.
Seeing the link between weather and climate change
In the global survey, people were asked how much they think climate change has increased the impact of extreme weather over recent decades. On average, responses were moderately high (3.8 out of 5), suggesting that many people do link recent weather events to climate change.
Such an attribution was especially strong in Latin America, but lower in parts of Africa (such as Congo and Ethiopia) and Northern Europe (such as Finland and Norway).
Crucially, people who more strongly believed climate change had worsened these events were also more likely to support climate policies. In fact, this belief mattered more for policy support than whether they had actually experienced the events firsthand.
Prior research shows chronic, less dramatic events such as rainfall or temperature anomalies have less influence on public views than acute hazards like floods or bushfires. Even then, the influence on beliefs and behaviour tends to be slow and limited.
This study shows climate impacts alone may not change minds. However, it also highlights what may affect public thinking: helping people recognise the link between climate change and extreme weather events.
In countries such as Australia, climate change makes up only about 1% of media coverage. What’s more, most of the coverage focuses on social or political aspects rather than scientific, ecological, or economic impacts.
Omid Ghasemi receives funding from the Australian Academy of Science. He was a member of the TISP consortium and a co-author of the dataset used in this study.
Canadian researchers recently investigated this idea in a sample of 1,082 undergraduate psychology students. The students completed a survey, which included questions about how they perceived their diet influenced their sleep and dreams.
Some 40% of participants reported certain foods impacted their sleep, with 25% of the whole sample claiming certain foods worsened their sleep, and 20% reporting certain foods improved their sleep.
Only 5.5% of respondents believed what they ate affected the nature of their dreams. But many of these people thought sweets or dairy products (such as cheese) made their dreams more strange or disturbing and worsened their sleep.
In contrast, participants reported fruits, vegetables and herbal teas led to better sleep.
This study used self-reporting, meaning the results rely on the participants recalling and reporting information about their sleep and dreams accurately. This could have affected the results.
It’s also possible participants were already familiar with the notion that cheese causes nightmares, especially given they were psychology students, many of whom may have studied sleep and dreaming.
This awareness could have made them more likely to notice or perceive their sleep was disrupted after eating dairy. In other words, the idea cheese leads to nightmares may have acted like a self-fulfilling prophecy and results may overestimate the actual likelihood of strange dreams.
Nonetheless, these findings show some people perceive a connection between what they eat and how they dream.
While there’s no evidence to prove cheese causes nightmares, there is evidence that could explain a link.
The science behind cheese and nightmares
Humans are diurnal creatures, meaning our body is primed to be asleep at night and awake during the day. Eating cheese before bed means we’re challenging the body with food at a time when it really doesn’t want to be eating.
At night, our physiological systems are not primed to digest food. For example, it takes longer for food to move through our digestive tract at night compared with during the day.
If we eat close to going to sleep, our body has to process and digest the food while we’re sleeping. This is a bit like running through mud – we can do it, but it’s slow and inefficient.
If your body is processing and digesting food instead of focusing all its resources on sleep, this can affect your shut-eye. Research has shown eating close to bedtime reduces our sleep quality, particularly our time spent in rapid eye movement (REM) sleep, which is the stage of sleep associated with vivid dreams.
People will have an even harder time digesting cheese at night if they’re lactose intolerant, which might mean they experience even greater impacts on their sleep. This matches what the Canadian researchers found in their study, with lactose-intolerant participants reporting poorer sleep quality and more nightmares.
It’s important to note we might actually have vivid dreams or nightmares every night – what could change is whether we’re aware of the dreams and can remember them when we wake up.
Poor sleep quality often means we wake up more during the night. If we wake up during REM sleep, research shows we’re more likely to report vivid dreams or nightmares that we mightn’t even remember if we hadn’t woken up during them.
This is very relevant for the cheese and nightmares question. Put simply, eating before bed impacts our sleep quality, so we’re more likely to wake up during our nightmares and remember them.
Don’t panic – I’m not here to tell you to give up your cheesy evenings. But what we eat before bed can make a real difference to how well we sleep, so timing matters.
General sleep hygiene guidelines suggest finishing meals at least two hours before bed. So even if you’re eating a very cheese-heavy meal, you have a window of time to digest it before drifting off to a nice peaceful sleep.
How about other dairy products?
Cheese isn’t the only dairy product which may influence our sleep. Most of us have heard about the benefits of having a warm glass of milk before bed.
Milk can be easier to digest than cheese. In fact, milk is a good choice in the evening, as it contains tryptophan, an amino acid that helps promote sleep.
Nonetheless, we still don’t want to be challenging our body with too much dairy before bed. Participants in the Canadian study did report nightmares after dairy, and milk close to bed might have contributed to this.
While it’s wise to steer clear of food (especially cheese) in the two hours before lights out, there’s no need to avoid cheese altogether. Enjoy that cheesy pasta or cheese board, just give your body time to digest before heading off to sleep. If you’re having a late night cheese craving, opt for something small. Your sleep (and your dreams) will thank you.
Charlotte Gupta does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Motorcycle-taxis are one of the fastest and most convenient ways to get around Uganda’s congested capital, Kampala. But they are also the most dangerous. Though they account for one-third of public transport trips taking place within the city, police reports suggest motorcycles were involved in 80% of all road-crash deaths registered in Kampala in 2023.
Promising to solve the safety problem while also improving the livelihoods of moto-taxi workers, digital ride-hail platforms emerged a decade ago on the city’s streets. It is no coincidence that Uganda’s ride-hailing pioneer and long-time market leader goes by the name of SafeBoda.
Conceived in 2014 as a “market-based approach to road safety”, the idea is to give riders a financial incentive to drive safely by making digital moto-taxi work pay better. SafeBoda claimed at the time that motorcyclists who signed up with it would increase their incomes by up to 50% relative to the traditional mode of operation, in which riders park at strategic locations called “stages” and wait for passengers.
In the years since, the efforts of SafeBoda and its ride-hail competitors to bring safety to the sector have largely been deemed a success. One study carried out in 2017 found that digital riders were more likely to wear a helmet and less likely to drive towards oncoming traffic. Early press coverage was particularly glowing, while recent academic studies continue to cite the Kampala case as evidence that ride-hailing platforms may hold the key to making African moto-taxi sectors a safer place to work and travel.
Is it all as clear-cut as this? In a new paper based on my PhD research, I suggest not: at its core, the ride-hail model – in which riders are classified as independent contractors doing poorly paid “gig work” rather than as wage-earning employees – undermines its own safety ambitions.
Speed traps
In my study of Kampala’s vast moto-taxi industry – estimated to employ hundreds of thousands of people – I draw on 112 in-depth interviews and a survey of 370 moto-taxi riders to examine how livelihoods and working conditions have been affected by the arrival of the platforms.
To date, there has been only limited critical engagement with how this change has played out over the past decade. I wanted to get beneath the big corporate claims and alluring platform promises to understand how riders themselves had experienced the digital “transformation” of their industry, several years after it first began.
One of the things I found was that, from a safety perspective, the ride-hail model represents a paradox. We can think of it as a kind of “speed trap”.
On one hand, ride-hail platforms try to moderate moto-taxi speeds and behaviours through managerial techniques. They make helmet use compulsory. They put riders through road safety training before letting them out onto the streets. And they enforce a professional “code of conduct” for riders.
In some cases, companies also deploy “field agents” to major road intersections around the city. Their task is to monitor the behaviour of riders in company uniform and, should they be spotted breaking the rules, discipline them.
On the other hand, however, the underlying economic structure of digital ride-hailing pulls transport workers in the opposite direction by systematically depressing trip fares and rewarding speed.
Under the “gig economy” model used by Uganda’s ride-hail platforms, the livelihood promise lies not in the offer of a guaranteed wage but in the possibility of higher earnings. Crucially, it is a promise that only materialises if riders can reach and maintain a faster, harder work-rate throughout the day – completing enough jobs that pay “little money”, as one rider put it, to make the gig-work deal come good. Or, as summed up by another interviewee:
We are like stakeholders, I can say that. No basic salary, just commission. So it depends on your speed.
And yet, it is precisely these factors that routinely lead to road traffic accidents. Extensive research from across east Africa has shown that motorcycle crashes are strongly associated with financial pressure and the practices that follow directly from it, such as speeding, working long hours and performing high-risk manoeuvres. All are driven by the need to break even each day in a hyper-competitive informal labour market, with riders compelled to go fast by the raw economics of their work.
Deepening the pressure
Ride-hail platforms may not be the reason these circumstances exist in the first place. But the point is that they do not mark a departure from them.
If anything, my research suggests they may be making things worse. According to the survey data, riders working through the apps make on average 12% higher gross earnings each week relative to their analogue counterparts. This is because the online world gets them more jobs.
But to stay connected to that world they must shoulder higher operating costs: mobile data (to remain logged on); fuel (to perform more trips); the use of helmets and uniforms (which remain company property); and commissions extracted by the platform companies (as much as 15%-20% per trip).
As soon as these extras are factored in, the difference completely disappears. The digital rider works faster and harder – but for no extra reward.
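The arithmetic can be sketched with hypothetical round numbers. Only the 12% gross premium and the 15%-20% commission range come from the survey; the other figures are invented for illustration:

```python
# Illustrative sketch only: the 12% gross premium and the commission range
# come from the article; the remaining figures are hypothetical.
analogue_weekly = 100.0                 # analogue rider's weekly earnings (arbitrary units)
digital_gross = analogue_weekly * 1.12  # apps bring roughly 12% more gross income

commission = digital_gross * 0.15       # platform commission, at the low end of 15%-20%
extra_costs = 12.0                      # mobile data, extra fuel, helmet and uniform hire

digital_net = digital_gross - commission - extra_costs
print(round(digital_net, 2))            # 83.2 -- the gross premium is gone
```

Even with the commission at its lower bound, modest running costs are enough to wipe out the headline premium, which is the pattern the survey data suggest.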
But it is important to remember that these are private enterprises with a clear bottom line: to one day turn a profit. As recent reports and my own thesis show, efforts to reach that point often alienate and ultimately repel the workers on whom these platforms depend – and whose livelihoods and safety standards they claim to be transforming.
A recent investment evaluation by one of SafeBoda’s first funders perhaps puts it best: it is time to reframe ride-hailing as a “risky vehicle” for safety reform in African cities, rather than a clear road to success.
Rich received funding for this research from the UK’s Economic and Social Research Council (ESRC).
European Commission Speech, Brussels, 2 July 2025

Thank you very much.
I think what you see today and what you’re going to read is a very clear roadmap, or if you will, a clear expression of a European offer: how to make Europe the global leader in life sciences by 2030.
Because if you look at all the sectors of our economies, what you will find is that it is the biotech sector and the health sector where Europe has the biggest potential to become or to stay a leader.
If you look at the Draghi report, it is very clear that this is where Europe needs to up its game and where Europe has the basics to create itself as the hub for innovation and investment in long-term and sustainable healthcare. But for this to happen, we need to do a major overhaul of how we do things and also to use our assets even more strategically. Strategically meaning attracting science, attracting innovation, attracting investments and this way ensuring that our patients will always have the most state-of-the-art healthcare throughout the times to come.
But for this to happen, we need to do things urgently. Urgently because there is a very clear global race for this. If I put it into three major challenges: first of all, we have a trade challenge.
This is the sector which is the second biggest exporter from the EU. This is a sector which is contributing very largely to the trade surplus that we are having. And this is the sector which is truly global, and this is the sector which is still leading globally.
The second challenge is the challenge of investments: how do we create a climate in which long-term vision is ensured for everyone to invest in these new technologies? New technologies are around the corner, as we all know, and in the healthcare sector they might arrive even faster than anywhere else.
We have new therapies emerging by the day. We have completely new combinations of innovations that we have not seen before based on artificial intelligence, European health data space just to name two of the main cornerstones. But for this to be turned into real economic output and also patient outcomes, we need to do an overhaul of the European legislative framework.
And this is what we have sketched out in a broad term in this paper today. Some of the elements are already on the way.
The pharma review is already very well advanced. We hope that this will be concluded already this year. And this should already give a very clear and strong signal to the innovators that we want them to stay. And not only that but we want them to invest more because the ground for innovation has been reinforced.
The second is of course the very important Critical Medicines Act which should act to create the markets on the ground for all innovative products, but which should also create the accessibility for the patients to all these new technologies. And of course, when it comes to talking about the rest of this year, the most important elements we anticipate to come forward with is going to be first of all a full review of the medical devices sector, a Biotech Act and also that should include a revision of the Clinical Trials Regulation.
And to bring all these innovations into therapies, a very comprehensive European cardiovascular health plan. We do hope that we can achieve all this still this year, and we can put it on the table of the co-legislators because we have no time to lose. So, let’s go one by one.
The medical devices. The medical devices is an area maybe overlooked by many, but the medical devices area is a backbone of our healthcare system. And it has a huge potential for the development of the healthcare system because we are living in the age when innovators are combining different products that have not been seen before.
Ozempic is the talk of the town. Ozempic is a pharmaceutical product, but it is marketed together with a medical device. And for this to be authorised it had to be done twice.
It had to be authorised as a pharmaceutical product, and it had to be authorised as a medical device. Of course, we do not want to compromise on health and safety. We don’t want to compromise on efficacy.
But we have to make sure that, when we will have medical devices that are also using artificial intelligence, we will be the first and the fastest to authorise them. And we will be the place that these are going to be developed and innovated. So, we need a major overhaul for this sector which is mainly composed of SMEs so that they can really unravel the whole new avenue of medtech innovation.
This should come still this year. Second big proposal we are trying to make is going to be the Biotech Act. If you ask me, if I want to translate it into everyday language, the Biotech Act should serve two things.
One is to break the boundaries of innovation. So far we have silos. We have the pharmaceutical sector, we have the medical devices sector, we have the chemical sector, and I can go on with all the interlinked sectors.
But our goal here is to make innovation easier. And when you have a genuine idea which crosscuts the different sectors that we have you should be able to go much faster and you should be able to go much easier into creating new products in Europe and hopefully manufacturing them also in Europe. But for this to happen, we also need to look into the other field of major international competition which is the clinical trials.
It is clear that we are challenged in Europe on two main fronts. One is the clinical trials; the other one is the basic life science research where we are losing ground. We are losing ground to competitors like the US and China.
And Europe has been at the forefront of all this 10 years ago. So, we need to really change our mindset and this starts with a full review of the clinical trials and how to make it more effective and also faster. Also, by using new technologies because there are ways in which we can speed up things by using simply the new technologies.
We need the therapies to enter the markets much faster and we need also innovation to be translated into patient outcomes much much faster. So, as you know we are now at the stage of consulting the public about the Biotech Act and, if it is up to me, I still want to deliver this this year because again we have no time to lose.
And finally, on the cardiovascular health plan, this should be the vehicle that brings these new therapies to the patients. Cardiovascular health, I think, is the biggest challenge of Europe currently. We have a comprehensive plan already for cancer, but still the single biggest cause of death in Europe is cardiovascular disease. And unfortunately, the situation is not improving but actually deteriorating.
If you look only at the figures for the young generation, meaning the under-30s, 40% of them are either obese or have diabetes or both. That means that, 10 years from now, we will have a generation with a condition. A whole generation in the prime of their life having a condition, most probably a cardiovascular condition.
We have to act now, and we have to make it much easier and much faster for them to access new therapies that are personalized, that are also based on predictive medicine, that are changing the realities, and which are creating real personal choices that people can make.
I think if you look at our little paper you will see a vision, but I want to translate this very fast into action as well.
Thank you, I am now happy to answer your questions.
Hearing improvements were both rapid and significant after patients received the gene therapy we developed. Nina Lishchuk/Shutterstock
Up to three in every 1,000 newborns have hearing loss in one or both ears. While cochlear implants offer remarkable hope for these children, they require invasive surgery. These implants also cannot fully replicate the nuance of natural hearing.
But recent research my colleagues and I conducted has shown that a form of gene therapy can successfully restore hearing in toddlers and young adults born with congenital deafness.
Our research focused specifically on toddlers and young adults born with OTOF-related deafness. This condition is caused by mutations in the OTOF gene, which produces the otoferlin protein – a protein critical for hearing.
The protein transmits auditory signals from the inner ear to the brain. When this gene is mutated, that transmission breaks down, leading to profound hearing loss from birth.
Unlike other types of genetic deafness, people with OTOF mutations have healthy hearing structures in their inner ear – the problem is simply that one crucial gene isn’t working properly. This makes it an ideal candidate for gene therapy: if you can fix the faulty gene, the existing healthy structures should be able to restore hearing.
In our study, we used a modified virus as a delivery system to carry a working copy of the OTOF gene directly into the inner ear’s hearing cells. The virus acts like a molecular courier, delivering the genetic fix exactly where it’s needed.
The modified viruses do this by first attaching themselves to the hair cell’s surface, then convincing the cell to swallow them whole. Once inside, they hitch a ride on the cell’s natural transport system all the way to its control centre (the nucleus). There, they finally release the genetic instructions for otoferlin to the auditory neurons.
Our team had previously conducted studies in primates and young children (five- and eight-year-olds) which confirmed the virus therapy was safe. We were also able to illustrate the therapy’s potential to restore hearing – sometimes to near-normal levels.
But key questions had remained about whether the therapy could work in older patients – and what age is optimal for patients to receive the treatment.
To answer these questions, we expanded our clinical trial across five hospitals, enrolling ten participants aged one to 24 years. All were diagnosed with OTOF-related deafness. The virus therapy was injected into the inner ears of each participant.
We closely monitored safety during the 12 months of the study through ear examinations and blood tests. Hearing improvements were measured using both objective brainstem response tests and behavioural hearing assessments.
In the brainstem response tests, patients heard rapid clicking sounds or short beeps of different pitches while sensors measured the brain’s automatic electrical response. In another test, patients heard constant, steady tones at different pitches while a computer analysed brainwaves to see if they automatically followed the rhythm of these sounds.
The therapy used a synthetic version of a virus to deliver a functional gene to the inner ear. Kateryna Kon/Shutterstock
For the behavioural hearing assessment, patients wore headphones and listened to faint beeps at different pitches. They pressed a button or raised their hand each time they heard a beep – no matter how faint.
Hearing improvements were both rapid and significant – especially in younger participants. Within the first month of treatment, the average total hearing improvement reached 62% on the objective brainstem response tests and 78% on the behavioural hearing assessments. Two participants achieved near-normal speech perception. The parent of one seven-year-old participant said her child could hear sounds just three days after treatment.
Over the 12-month study period, ten patients experienced very mild to moderate side-effects. The most common adverse effect was a decrease in white blood cells. Crucially, no serious adverse events were observed. This confirmed the favourable safety profile of this virus-based gene therapy.
Treating genetic deafness
This is the first time such results have been achieved in both adolescent and adult patients with OTOF-related deafness.
The findings also reveal important insights into the ideal window for treatment, with children between the ages of five and eight showing the most pronounced benefit.
While younger children and older participants also showed improvement, their recovery was less dramatic. The results in the youngest children are counter-intuitive: preserved inner-ear integrity and function at early ages should theoretically predict a better response to the gene therapy. Instead, these findings suggest the brain’s ability to process newly restored sounds may vary at different ages. The reasons for this are not yet understood.
This trial is a milestone. By bridging the gap between animal and human studies and diverse patients of different ages, we’re entering a new era in the treatment of genetic deafness. Although questions still remain about how long the effects of this therapy last, as gene therapy continues to advance, the possibility of curing – not just managing – genetic hearing loss is becoming a reality.
OTOF-related deafness is just the beginning. We, along with other research teams, are working on developing therapies that target other, more common genes that are linked to hearing loss. These are more complex to treat, but animal studies have yielded promising results. We’re optimistic that in the future, gene therapy will be available for many different types of genetic deafness.
Maoli Duan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The global ecosystem of climate finance is complex, constantly changing and sometimes hard to understand. But understanding it is critical to demanding a green transition that’s just and fair. That’s why The Conversation has collaborated with climate finance experts to create this user-friendly guide, in partnership with Vogue Business. With definitions and short videos, we’ll add to this glossary as new terms emerge.
Blue bonds
Blue bonds are debt instruments designed to finance ocean-related conservation, like protecting coral reefs or sustainable fishing. They’re modelled after green bonds but focus specifically on the health of marine ecosystems, a key pillar of climate stability.
By investing in blue bonds, governments and private investors can fund marine projects that deliver both environmental benefits and long-term financial returns. Seychelles issued the first blue bond in 2018. Now, more are emerging as ocean conservation becomes a greater priority for global sustainability efforts.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Carbon border adjustment mechanism
Did you know that imported steel could soon face a carbon tax at the EU border? That’s because the carbon border adjustment mechanism is about to shake up the way we trade, produce and price carbon.
The carbon border adjustment mechanism is a proposed EU policy to put a carbon price on imports like iron, cement, fertiliser, aluminium and electricity. If a product is made in a country with weaker climate policies, the importer must pay the difference between that country’s carbon price and the EU’s. The goal is to avoid “carbon leakage” – when companies relocate to avoid emissions rules – and to ensure fair competition on climate action.
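The pricing rule described above can be sketched in a few lines. This is a simplified illustration, not the regulation’s actual calculation method, and all prices and quantities here are hypothetical:

```python
def border_charge(tonnes_embedded, eu_price, origin_price):
    """Charge on a consignment: top up the origin country's carbon price
    to the EU level. If the origin price already matches or exceeds the
    EU price, nothing extra is owed."""
    gap = max(eu_price - origin_price, 0.0)
    return tonnes_embedded * gap

# 10 tonnes of embedded CO2, EU price 80 per tonne, origin price 25:
# the importer pays the 55-per-tonne gap.
print(border_charge(10, 80.0, 25.0))  # 550.0
print(border_charge(10, 80.0, 90.0))  # 0.0 (origin price already higher)
```

The `max(…, 0.0)` is the key design point: the mechanism only tops prices up, it never refunds importers from countries with stricter carbon pricing.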
But this mechanism is more than just a tariff tool. It’s a bold attempt to reshape global trade. Countries exporting to the EU may be pushed to adopt greener manufacturing or face higher tariffs.
The carbon border adjustment mechanism is controversial: some call it climate protectionism, others argue it could incentivise low-carbon innovation worldwide and be vital for achieving climate justice. Many developing nations worry it could penalise them unfairly unless there’s climate finance to support greener transitions.
The carbon border adjustment mechanism is still evolving, but it’s already forcing companies, investors and governments to rethink emissions accounting, supply chains and competitiveness. It’s a carbon price with global consequences.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Carbon budget
The Paris agreement aims to limit global warming to 1.5°C above pre-industrial levels. The carbon budget is the maximum amount of CO₂ emissions allowed if we want a 67% chance of staying within this limit. The Intergovernmental Panel on Climate Change (IPCC) estimates that the remaining carbon budget amounts to 400 billion tonnes of CO₂ from 2020 onwards.
Think of the carbon budget as a climate allowance. Once it has been spent, the risk of extreme weather or sea level rise increases sharply. If emissions continue unchecked, the budget will be exhausted within years, risking severe climate consequences. The IPCC sets the global carbon budget based on climate science, and governments use this framework to set national emission targets, climate policies and pathways to net zero emissions.
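The “exhausted within years” point is simple arithmetic. A rough back-of-envelope sketch, taking the IPCC’s 400 billion tonne budget from the text and assuming a constant annual emission rate of about 40 billion tonnes of CO₂ (roughly the recent global level; this rate is an assumption, not a figure from the study):

```python
# Back-of-envelope estimate of how long the remaining carbon budget lasts.
REMAINING_BUDGET_GT = 400.0  # GtCO2 from 2020, for a 67% chance of <1.5°C
ANNUAL_EMISSIONS_GT = 40.0   # GtCO2 per year (assumed constant rate)

years_left = REMAINING_BUDGET_GT / ANNUAL_EMISSIONS_GT
print(f"At ~{ANNUAL_EMISSIONS_GT:.0f} Gt/year, the budget lasts "
      f"about {years_left:.0f} years from 2020")  # about 10 years
```

Any reduction in the annual rate stretches the budget; that is the lever national emission targets are meant to pull.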
By Dongna Zhang, assistant professor in economics and finance, Northumbria University
Carbon credits
Carbon credits are like permits that allow companies to release a certain amount of carbon into the air. One credit usually equals one tonne of CO₂. These credits are issued by a government or another authorised body and can be bought and sold. Think of them as a budget allowance for pollution. They encourage cuts in carbon emissions each year to stay within global climate targets.
The aim is to put a price on carbon to encourage cuts in emissions. If a company reduces its emissions and has leftover credits, it can sell them to another company that is going over its limit. But there are issues. Some argue that carbon credit schemes allow polluters to pay their way out of real change, and not all credits are from trustworthy projects. Although carbon credits can play a role in addressing the climate crisis, they are not a solution on their own.
By Sankar Sivarajah, professor of circular economy, Kingston University London
Carbon credits explained.
Carbon offsetting
Carbon offsetting is a way for people or organisations to make up for the carbon emissions they are responsible for. For example, if you contribute to emissions by flying, driving or making goods, you can help balance that out by supporting projects that reduce emissions elsewhere. This might include planting trees (which absorb carbon dioxide) or building wind farms to produce renewable energy.
The idea is that your support helps cancel out the damage you are doing. For example, if your flight creates one tonne of carbon dioxide, you pay to support a project that removes the same amount.
While this sounds like a win-win, carbon offsetting is not perfect. Some argue that it lets people feel better without really changing their behaviour, a phenomenon sometimes referred to as greenwashing.
Not all projects are effective or well managed. For instance, some tree-planting initiatives might have taken place anyway, even without the offset funding, rendering your contribution inconsequential. Others might plant non-native trees in areas where they are unlikely to reach their potential for absorbing carbon emissions.
So, offsetting can help, but it is no magic fix. It works best alongside real efforts to reduce greenhouse gas emissions and encourage low-carbon lifestyles or supply chains.
By Sankar Sivarajah, professor of circular economy, Kingston University London
Carbon offsetting explained.
Carbon tax
A carbon tax is designed to reduce greenhouse gas emissions by placing a direct price on CO₂ and other greenhouse gases.
A carbon tax is grounded in the concept of the social cost of carbon. This is an estimate of the economic damage caused by emitting one tonne of CO₂, including climate-related health, infrastructure and ecosystem impacts.
A carbon tax is typically levied per tonne of CO₂ emitted. The tax can be applied either upstream (on fossil fuel producers) or downstream (on consumers or power generators). This makes carbon-intensive activities more expensive, which incentivises nations, businesses and people to reduce their emissions, while untaxed renewable energy becomes more competitively priced and appealing.
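A per-tonne levy translates into prices through an emission factor. A minimal sketch, using a hypothetical tax rate and the commonly cited factor of roughly 2.3 kg of CO₂ per litre of petrol burned (both figures are illustrative, not from the article):

```python
# Hypothetical per-tonne carbon tax applied downstream to a fuel purchase.
TAX_PER_TONNE = 85.0            # currency units per tonne CO2 (assumed rate)
KG_CO2_PER_LITRE_PETROL = 2.3   # approximate emission factor for petrol

litres = 1000
tonnes_emitted = litres * KG_CO2_PER_LITRE_PETROL / 1000  # kg -> tonnes
tax_due = tonnes_emitted * TAX_PER_TONNE
print(f"{tonnes_emitted:.1f} t CO2 -> tax of {tax_due:.2f}")
```

Because the tax scales linearly with the emission factor, lower-carbon fuels automatically face a smaller levy, which is the price signal the policy relies on.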
A carbon tax was first introduced by Finland in 1990. Since then, more than 39 jurisdictions have implemented similar schemes. According to the World Bank, carbon pricing mechanisms (that’s both carbon taxes and emissions trading systems) now cover about 24% of global emissions. The remaining 76% are not priced, mainly due to limited coverage in both sectors and geographical areas, plus persistent fossil fuel subsidies. Expanding coverage would require extending carbon pricing to sectors like agriculture and transport, phasing out fossil fuel subsidies and strengthening international governance.
What is carbon tax?
Sweden has one of the world’s highest carbon tax rates and has cut emissions by 33% since 1990 while maintaining economic growth. The policy worked because Sweden started early, applied the tax across many industries and maintained clear, consistent communication that kept the public on board.
Canada introduced a national carbon tax in 2019. In Canada, most of the revenue from carbon taxes is returned directly to households through annual rebates, making the scheme revenue-neutral for most families. However, despite its economic logic, inflation and rising fuel prices led to public discontent – especially as many citizens were unaware they were receiving rebates.
Carbon taxes face challenges including political resistance, fairness concerns and low public awareness. Their success depends on clear communication and visible reinvestment of revenues into climate or social goals. A 2025 study that surveyed 40,000 people in 20 countries found that support for carbon taxes increases significantly when revenues are used for environmental infrastructure, rather than returned through tax rebates.
By Meilan Yan, associate professor and senior lecturer in financial economics, Loughborough University
Climate resilience
Floods, wildfires, heatwaves and rising seas are pushing our cities, towns and neighbourhoods to their limits. But there’s a powerful idea that’s helping cities fight back: climate resilience.
Resilience refers to the ability of a system – a city, a community or even an ecosystem – to anticipate, prepare for, respond to and recover from climate-related shocks and stresses.
Sometimes people say resilience is about bouncing back. But it’s not just about surviving the next storm. It’s about adapting, evolving and thriving in a changing world.
Resilience means building smarter and better. It means designing homes that stay cool during heatwaves. Roads that don’t wash away in floods. Power grids that don’t fail when the weather turns extreme.
It’s also about people. A truly resilient city protects its most vulnerable. It ensures that everyone – regardless of income, age or background – can weather the storm.
And resilience isn’t just reactive. It’s about using science, local knowledge and innovation to reduce a risk before disaster strikes. From restoring wetlands to cool cities and absorb floods, to creating early warning systems for heatwaves, climate resilience is about weaving strength into the very fabric of our cities.
By Paul O’Hare, senior lecturer in geography and development, Manchester Metropolitan University
The meaning of climate resilience.
Climate risk disclosure
Climate risk disclosure refers to how companies report the risks they face from climate change, such as flood damage, supply chain disruptions or regulatory costs. It includes both physical risks (like storms) and transition risks (like changing laws or consumer preferences).
Mandatory disclosures, such as those proposed by the UK and EU, aim to make climate-related risks transparent to investors. Done well, these reports can shape capital flows toward more sustainable business models. Done poorly, they become greenwashing tools.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Emissions trading scheme
An emissions trading scheme is the primary market-based approach for regulating greenhouse gas emissions in many countries, including Australia, Canada, China and Mexico.
Part of a government’s job is to decide how much of the economy’s carbon emissions it wants to avoid in order to fight climate change. It must put a cap on the carbon emissions that economic production is not allowed to surpass. Preferably, the polluters (the manufacturers and fossil fuel companies) should be the ones paying for the cost of climate mitigation.
Regulators could simply tell all the firms how much they are allowed to emit over the next ten years or so. But giving every firm the same allowance across the board is not cost efficient, because avoiding carbon emissions is much harder for some firms (such as steel producers) than others (such as tax consultants). Since governments cannot know each firm’s specific cost profile either, they can’t customise the allowances. Also, monitoring whether polluters actually abide by their assigned limits is extremely costly.
An emissions trading scheme cleverly solves this dilemma using the cap-and-trade mechanism. Instead of assigning each polluter a fixed quota and risking inefficiencies, the government issues a large number of tradable permits – each worth, say, a tonne of CO₂-equivalent (CO₂e) – that sum up to the cap. Firms that can cut greenhouse gas emissions relatively cheaply can then trade their surplus permits to those who find it harder – at a price that makes both better off.
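The logic above can be sketched with two hypothetical firms facing different abatement costs. This is a deliberately simplified model (each firm either abates fully or buys permits for everything, and the permit price is taken as given), not how a real exchange clears:

```python
# Minimal cap-and-trade sketch: firms with cheap abatement cut emissions;
# firms with expensive abatement buy permits instead. All figures invented.
CAP = 100            # total permits issued (tonnes CO2e)
PERMIT_PRICE = 30    # prevailing market price per permit (assumed)

firms = {
    # name: (uncontrolled emissions in tonnes, cost to abate one tonne)
    "steel_maker": (80, 50),   # abatement is expensive for steel
    "consultancy": (40, 10),   # abatement is cheap for services
}

permits_needed = {}
total_cost = {}
for name, (emissions, abate_cost) in firms.items():
    if abate_cost < PERMIT_PRICE:
        # Cheaper to abate every tonne than to buy permits.
        permits_needed[name] = 0
        total_cost[name] = emissions * abate_cost
    else:
        # Cheaper to buy permits than to abate.
        permits_needed[name] = emissions
        total_cost[name] = emissions * PERMIT_PRICE

print(permits_needed)                       # only the steel maker buys permits
print(sum(permits_needed.values()) <= CAP)  # total demand stays under the cap
```

The point of the trade is visible in the numbers: the consultancy abates at 10 per tonne rather than paying 30 for permits, so the scarce permits flow to the firm that genuinely needs them, and the cap is met at lower total cost than uniform quotas.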
By Mathias Weidinger, environmental economist, University of Oxford
Emissions trading schemes, explained by climate finance expert Mathias Weidinger.
Environmental, social and governance (ESG) investing
ESG investing stands for environmental, social and governance investing. In simple terms, these are a set of standards that investors use to screen a company’s potential investments.
ESG means choosing to invest in companies that are not only profitable but also responsible. Investors use ESG metrics to assess risks (such as climate liability, labour practices) and align portfolios with sustainability goals by looking at how a company affects our planet and treats its people and communities. While there isn’t one single global body governing ESG, various organisations, ratings agencies and governments all contribute to setting and evolving these metrics.
For example, investing in a company committed to renewable energy and fair labour practices might be considered “ESG aligned”. Supporters believe ESG helps identify risks and create long-term value. Critics argue it can be vague or used for greenwashing, where companies appear sustainable without real action. ESG works best when paired with transparency and clear data. A barrier is that standards vary, and it’s not always clear what counts as ESG.
Why do financial companies and institutions care? Issues like climate change and nature loss pose significant risks, affecting company values and the global economy.
However, gathering reliable ESG information can be difficult. Companies often self-report, and the data isn’t always standardised or up to date. Researchers – including my team at the University of Oxford – are using geospatial data, like satellite imagery and artificial intelligence, to develop global databases for high-impact industries, across all major sectors and geographies, and independently assess environmental and social risks and impacts.
For instance, we can analyse satellite images of a facility over time to monitor its emissions effect on nature and biodiversity, or assess deforestation linked to a company’s supply chain. This allows us to map supply chains, identify high-impact assets, and detect hidden risks and opportunities in key industries, providing an objective, real-time look at their environmental footprint.
The goal is for this to improve ESG ratings and provide clearer, more consistent insights for investors. This approach could help us overcome current data limitations to build a more sustainable financial future.
By Amani Maalouf, senior researcher in spatial finance, University of Oxford
Environmental, social and governance investing explained.
Financed emissions
Financed emissions are the greenhouse gas emissions linked to a bank’s or investor’s lending and investment portfolio, rather than their own operations. For example, a bank that funds a coal mine or invests in fossil fuels is indirectly responsible for the carbon those activities produce.
Measuring financed emissions helps reveal the real climate impact of financial institutions, not just their office energy use. It’s a cornerstone of climate accountability in finance and is becoming essential under net zero pledges.
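One common way to measure this is to attribute a borrower’s emissions to a lender in proportion to the lender’s share of its financing, as in the PCAF accounting standard. A simplified sketch with wholly hypothetical figures:

```python
def financed_emissions(loan_outstanding, company_value, company_emissions_t):
    """Attribute a borrower's annual emissions to a lender pro rata:
    (lender's outstanding financing / company value) x company emissions."""
    attribution_factor = loan_outstanding / company_value
    return attribution_factor * company_emissions_t

# A bank holds a 50m loan to a company valued at 500m that emits
# 200,000 tonnes of CO2e per year: 10% of those emissions are the bank's.
share_t = financed_emissions(50e6, 500e6, 200_000)
print(share_t)  # 20,000 tonnes attributed to the bank
```

Summing this attribution across every loan and investment in a portfolio gives the institution’s total financed emissions, which typically dwarf its operational footprint.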
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Green bonds
Green bonds are loans issued to fund environmentally beneficial projects, such as energy-efficient buildings or clean transportation. Investors choose them to support climate solutions while earning returns.
Green bonds are a major tool to finance the shift to a low-carbon economy by directing finance toward climate solutions. As climate costs rise, green bonds could help close the funding gap while ensuring transparency and accountability.
Green bond issuers are required to ensure funds are spent as promised. For instance, imagine a city wants to upgrade its public transportation by adding electric buses to reduce pollution. Instead of raising taxes or slashing other budgets, the city can issue green bonds to raise the necessary capital. Investors buy the bonds, the city gets the funding, and the environment benefits from cleaner air and fewer emissions.
The growing participation of government issuers has improved the transparency and reliability of these investments. The green bond market has grown rapidly in recent years. According to the Bank for International Settlements, the green bond market reached US$2.9 trillion (£2.1 trillion) in 2024 – nearly six times larger than in 2018. At the same time, annual issuance (the total value of green bonds issued in a year) hit US$700 billion, highlighting the increasing role of green finance in tackling climate change.
By Dongna Zhang, assistant professor in economics and finance, Northumbria University
Just transition
Just transition is the process of moving to a low-carbon society that is environmentally sustainable and socially inclusive. In a broad sense, a just transition means focusing on creating a more fair and equal society.
Just transition has existed as a concept since the 1970s. It was originally applied to the green energy transition, protecting workers in the fossil fuel industry as we move towards more sustainable alternatives.
These days, the concept encompasses so many overlapping issues of justice that it is hard to define. Even at the level of UN climate negotiations, global leaders struggle to agree on what a just transition means.
The big battle is between developed countries, who want a very restrictive definition around jobs and skills, and developing countries, who are looking for a much more holistic approach that considers wider system change and includes considerations around human rights, Indigenous people and creating an overall fairer global society.
A just transition is essentially about imagining a future where we have moved beyond fossil fuels and society works better for everyone – but that can look very different in a European city compared to a rural setting in south-east Asia.
For example, in a British city it might mean fewer cars and better public transport. In a rural setting, it might mean new ways of growing crops that are more sustainable, and building homes that are heatwave resistant.
By Alix Dietzel, climate justice and climate policy expert, University of Bristol
The meaning of just transition.
Loss and damage
A global loss and damage fund was agreed by nations at the UN climate summit (Cop27) in 2022. This means that the rich countries of the world put money into a fund that the least developed countries can then call upon when they have a climate emergency.
At the moment, the loss and damage fund is made up of relatively small pots of money. Much more will be needed to provide relief to those who need it most now and in the future.
By Mark Maslin, professor of earth system science, UCL
Mark Maslin explains loss and damage.
Mitigation v adaptation
Mitigation means cutting greenhouse gas emissions to slow climate change. Adaptation means adjusting to its effects, like building sea walls or growing heat-resistant crops. Both are essential: mitigation tackles the cause, while adaptation tackles the symptoms.
Globally, most funding goes to mitigation, but vulnerable communities often need adaptation support most. Balancing the two is a major challenge in climate policy, especially for developing countries facing immediate climate threats.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Nationally determined contributions
Nationally determined contributions (NDCs) are at the heart of the Paris agreement, the global effort to collectively combat climate change. NDCs are individual climate action plans created by each country. These targets and strategies outline how a country will reduce its greenhouse gas emissions and adapt to climate change.
Each nation sets its own goals based on its own circumstances and capabilities – there’s no standard NDC. These plans should be updated every five years and countries are encouraged to gradually increase their climate ambitions over time.
The aim is for NDCs to drive real action by guiding policies, attracting investment and inspiring innovation in clean technologies. But current NDCs fall short of the Paris agreement goals and many countries struggle to turn their plans into a reality. NDCs also vary widely in scope and detail so it’s hard to compare efforts across the board. Stronger international collaboration and greater accountability will be crucial.
By Doug Specht, reader in cultural geography and communication, University of Westminster
Natural capital

Fashion depends on water, soil and biodiversity – all natural capital. And forward-thinking designers are now asking: how do we create rather than deplete, how do we restore rather than extract?
Natural capital is the value assigned to the stock of forests, soils, oceans and even minerals such as lithium. It sustains every part of our economy. It’s the bees that pollinate our crops. It’s the wetlands that filter our water and it’s the trees that store carbon and cool our cities.
If we fail to value nature properly, we risk losing it. But if we succeed, we unlock a future that is not only sustainable but also truly regenerative.
My team at the University of Oxford is developing tools to integrate nature into national balance sheets, advising governments on biodiversity, and we’re helping industries from fashion to finance embed nature into their decision making.
Natural capital, explained by a climate finance expert.
By Mette Morsing, professor of business sustainability and director of the Smith School of Enterprise and the Environment, University of Oxford
Net zero
Reaching net zero means reducing the amount of additional greenhouse gas emissions that accumulate in the atmosphere to zero. This concept was popularised by the Paris agreement, a landmark deal that was agreed at the UN climate summit (Cop21) in 2015 to limit the impact of greenhouse gas emissions.
There are some emissions, from farming and aviation for example, for which reaching absolute zero will be very difficult, if not impossible. Hence the “net”. This allows people, businesses and countries to find ways to suck greenhouse gas emissions out of the atmosphere, effectively cancelling out emissions while trying to reduce them. This can include reforestation, rewilding, direct air capture and carbon capture and storage. The goal is to reach net zero: the point at which no extra greenhouse gases accumulate in Earth’s atmosphere.
By Mark Maslin, professor of earth system science, UCL
Mark Maslin explains net zero.
For more expert explainer videos, visit The Conversation’s quick climate dictionary playlist here on YouTube.
Mark Maslin is Pro-Vice Provost of the UCL Climate Crisis Grand Challenge and Founding Director of the UCL Centre for Sustainable Aviation. He was co-director of the London NERC Doctoral Training Partnership and is a member of the Climate Crisis Advisory Group. He is an advisor to Sheep Included Ltd, Lansons, NetZeroNow and has advised the UK Parliament. He has received grant funding from the NERC, EPSRC, ESRC, DFG, Royal Society, DIFD, BEIS, DECC, FCO, Innovate UK, Carbon Trust, UK Space Agency, European Space Agency, Research England, Wellcome Trust, Leverhulme Trust, CIFF, Sprint2020, and British Council. He has received funding from the BBC, Lancet, Laithwaites, Seventh Generation, Channel 4, JLT Re, WWF, Hermes, CAFOD, HP and Royal Institute of Chartered Surveyors.
Amani Maalouf receives funding from IKEA Foundation and UK Research and Innovation (NE/V017756/1).
Narmin Nahidi is affiliated with several academic associations, including the Financial Management Association (FMA), British Accounting and Finance Association (BAFA), American Finance Association (AFA), and the Chartered Association of Business Schools (CMBE). These affiliations do not influence the content of this article.
Paul O’Hare receives funding from the UK’s Natural Environment Research Council (NERC). Award reference NE/V010174/1.
Alix Dietzel, Dongna Zhang, Doug Specht, Mathias Weidinger, Meilan Yan, and Sankar Sivarajah do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Headline: Quantum communications by satellite: SpeQtral and Thales Alenia Space launch new experimental phase
Cannes, July 3, 2025 – SpeQtral, a pioneer in satellite-based quantum communication technologies, and Thales Alenia Space, the joint venture between Thales (67%) and Leonardo (33%), today announced the signing of a new collaboration agreement, extending their strategic partnership for the development and demonstration of quantum communications between space and Earth.
Signature of the cooperation agreement between Thales Alenia Space and SpeQtral at the Thales Alenia Space chalet during the Paris Air Show 2025. From left to right: Signatories: Dr Robert Bedington, SpeQtral CTO and Co-founder, and Christophe Valorge, CTO of Thales Alenia Space. Second row: Emily Tan, CEO of Thales Singapore; Cindy Koh, Executive Vice President, Economic Development Board; Jonathan Hung, Executive Director, Office for Space Technology & Industry, Singapore (OSTIn)
Quantum information networks represent a major breakthrough in information technology. They will enable quantum computers and quantum sensors to be interconnected to improve performance and resilience. This will pave the way for a new form of internet between quantum devices. These networks will also enable end-to-end secure communications that are resistant to attacks from quantum computers. Satellites will play a key role in these networks, extending connections over long distances and laying the foundations for a future quantum internet.
This new agreement aims to organize joint experiments involving SpeQtral’s quantum satellites, currently in development, and the first quantum ground station designed by Thales Alenia Space. The station will include environmental sensors to assess the impact of local conditions on the quality of quantum signals.
SpeQtral’s quantum satellites are amongst the first demonstrator satellites dedicated to enabling commercial quantum communications, and will be used to test the transmission of entangled photons between space and Earth, paving the way for the future interconnection of terrestrial quantum networks across continents.
SpeQtral will operate the satellites and provide the data needed to coordinate with the ground station, including pointing parameters and time-stamped onboard measurements. Thales Alenia Space will operate the ground station and collect synchronized local measurements to enable cross-analysis of quantum link performance. The shared objective is to experimentally validate the technical foundations of future interoperable quantum networks, as well as explore ways to enhance their performance.
At a later stage, the roles may be reversed, with Thales Alenia Space planning to launch the QINSAT satellite by the end of the decade to distribute entangled photon pairs to ground stations, including those operated by SpeQtral.
Since 2018, Thales Alenia Space has been pursuing an ambitious roadmap, establishing itself as a key player in the field of quantum communications. The company is leveraging its expertise in satellite telecommunication systems, optical terminals and over 25 years of quantum technology experience within the Thales Group to develop a fully integrated approach spanning both the space and ground segments of quantum communications systems.
“We’re pleased to be taking this next step with SpeQtral, which builds on the progress we’ve made together since 2022,” said Christophe Valorge, Chief Technical Officer at Thales Alenia Space. “This collaboration provides a structured framework to experiment, learn and move forward together by combining our complementary technological expertise. It represents a new stage in building a reliable, resilient and interoperable quantum communication infrastructure on a global scale.”
“Momentum is building globally to demonstrate that quantum communication systems can work together effectively. This is a key step toward realizing global quantum information networks and reflects one of SpeQtral’s long-term goals. We’ve had many fruitful technical exchanges with Thales Alenia Space since 2022, and we’re excited to take this next step together,” said Dr. Robert Bedington, Co-founder and Chief Technology Officer at SpeQtral.
About THALES ALENIA SPACE
Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental monitoring, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources, and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping to build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the Space Alliance, which offers a complete range of solutions including services. Thales Alenia Space posted consolidated revenues of €2.23 billion in 2024 and has more than 8,100 employees in 7 countries with 15 sites in Europe.
About SPEQTRAL
SpeQtral is a pioneer in quantum communications, with a vision to build and deploy global quantum networks. SpeQtral develops quantum-secure products and services designed to protect sovereign and enterprise telecommunication networks against classical, as well as future quantum-based cyber-attacks on cryptography. Combining both terrestrial and space-based solutions, SpeQtral aims to secure the world’s networks against the threats posed by the imminent quantum revolution and drive innovation in quantum communications that will serve as the building blocks for the future quantum internet.
Source: The Conversation – Canada – By Daphne Rena Idiz, Postdoctoral fellow, Department of Arts, Culture and Media, University of Toronto
What should count as Canadian content (CanCon) in the era of streaming and generative AI (GenAI)?
That’s the biggest unknown at the heart of the Canadian Radio-television and Telecommunications Commission’s (CRTC) recent public hearing, held in Gatineau, Que., from May 14 to 27.
The debate is about whether Canada’s current points-based CanCon system remains effective in the context of global streaming giants and generative AI. Shows qualify as CanCon by earning points when key roles, such as director, screenwriter and lead actors, are filled by Canadians.
The outcome will shape who gets to tell Canadian stories and what those stories are, and also which ones count as Canadian under the law. This, in turn, will determine who in the film and television industries can access funding, tax credits and visibility on streaming services.
It will also determine which Canadian productions big streamers like Netflix will invest in under their Online Streaming Act obligations.
The federal government’s recent announcement that it’s rescinding the Digital Services Tax reveals the limits of Canada’s leverage over Big Tech, underscoring the significance of CanCon rules as parameters around how streaming giants contribute meaningfully to the country’s creative industries.
CanCon: Who gets to decide?
The CRTC’s existing approach to defining CanCon relies on the citizenship of key creative personnel.
The National Film Board argued that this misses the “cultural elements” of Canadian storytelling. These include cultural expression, narrative themes and connection to Canadian audiences. That is, a production might technically count as CanCon by hiring Canadians, without feeling particularly “Canadian.”
The acts empower broadcasters and streamers to decide which Canadian stories and content will be developed, produced and distributed through commissioning and licensing powers. This implicitly limits the CRTC’s role to setting rules about which creatives are at the table.
The Writers Guild advocates broadening the pool of Canadian key creatives to modernize the CanCon system. It trusts the combined perspectives of a broader pool to make creative decisions about Canadian identity in meaningful ways. Accordingly, it supports the CRTC’s intent to add the showrunner role to the point system since showrunners are the “chief custodian of the creative vision of a series.”
Battle over Canadian IP
Streaming introduces more players with financial stakes, complicating who controls content and who profits from it. A seismic shift is happening in how intellectual property (IP) is handled.
The CRTC has proposed that the updated CanCon definition include Canadian IP ownership as a mandatory element to enable Canadian companies and workers to retain some control over their own IP, and thereby earn sustainable income. For example, in a streaming drama, Canadian screenwriters who retain ownership of the IP could earn ongoing revenue through licensing deals, international sales and royalties each time the series is distributed.
However, the Motion Picture Association-Canada (MPA-Canada), representing industry titans like Netflix, Amazon and Disney, is pushing back against requirements that mandate the sharing of territory or IP.
Without IP rights, Canadian talent and the industry as a whole may be reduced to becoming service providers for global companies.
Intervenors shared a range of preferences from 100 per cent Canadian IP ownership to none at all. One hundred per cent Canadian IP ownership means Canadian creators like a producer of a streaming series would control the rights to the content. They would receive the majority of profits from licensing, distribution and future adaptations.
Even 51 per cent ownership could give them a controlling stake, but would likely require sharing revenue and decision-making with the streaming service.
AI and CanCon
And then, of course, there’s the question of how generative AI should be considered within the updated CanCon definition. The Writers Guild of Canada has drawn a firm line in the sand: AI-generated material should not qualify as Canadian content.
The guild argues that since current AI tools don’t possess identity, nationality or cultural context, their output cannot advance the goals of the Broadcasting Act, centred on promoting Canadian voices and stories.
The Alliance of Canadian Cinema, Television and Radio Artists (ACTRA) raised a different concern around AI. AI, ACTRA argued, “should not take over the jobs of the creators in the ecosystem that we’re in and we should not treat AI-generated performers as if they are a Canadian actor.”
Depending on how the CRTC addresses AI, this could mean that streaming content featuring AI-generated scripts, characters, or performances — even if developed by a Canadian creator or set in Canada — would not qualify as CanCon.
The WGC notes that it has already negotiated restrictions on AI use in screenwriting through its agreement with the Canadian Media Producers Association. These guardrails are being held up as the “emerging industry standard.”
Follow the money
Another contested point is how streamers should pay into CanCon: through direct investment or through more traditional modes of financing. Under the Online Streaming Act, streamers are required to pay five per cent of their annual revenues to certain Canadian funds.
This model echoes contribution requirements previously imposed on traditional media broadcasters, some set at the much more substantial level of 30 per cent.
Research in the European Union and Canada highlights how different stakeholders benefit from different forms of financial obligations, suggesting the industry may be best served by a policy mix.
As Canada rewrites its broadcasting rules, defining Canadian content is a courtroom drama unfolding in real time — and the verdict will have serious ramifications.
MaryElizabeth Luka receives research project funding from peer-adjudicated grants from the Social Sciences and Humanities Research Council and internal grants at University of Toronto, such as the Creative Labour Critical Futures Cluster of Scholarly Prominence.
Daphne Rena Idiz does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Mountains on the moon as seen by NASA Lunar Reconnaissance Orbiter. (NASA/GSFC/Arizona State University)
In science-fiction stories, companies often mine the moon or asteroids. While this may seem far-fetched, this idea is edging closer to becoming reality.
Celestial bodies like the moon contain valuable resources, such as lunar regolith — also known as moon dust — and helium-3. These resources could serve a range of applications, including making rocket propellant and generating energy to sustain long missions, bringing benefits in space and on Earth.
The first objective on this journey is being able to collect lunar regolith. One company taking up this challenge is ispace, a Japanese space exploration company that signed a contract with NASA in 2020 for the collection and transfer of ownership of lunar regolith.
The company recently attempted to land its RESILIENCE lunar lander, but the mission was ultimately unsuccessful. Still, this endeavour marked a significant move toward the commercialization of space resources.
These circumstances give rise to a fundamental question: what are the legal rules governing the exploitation of space resources? The answer is both simple and complex, as there is a mix of international agreements and evolving regulations to consider.
Space activities have evolved exponentially since the adoption of the 1967 Outer Space Treaty. In the 60 years following the launch of Sputnik 1 — the first satellite placed in orbit — fewer than 500 space objects were launched annually. But since 2018, this number has risen into the thousands, with nearly 3,000 launched in 2024.
Because of this, the treaty is often judged as inadequate to address the current complexities of space activities, particularly resource exploitation.
A longstanding debate centres on whether Article II of the treaty, which prohibits the appropriation of outer space — including the moon and other celestial bodies — also prohibits space mining.
The prevailing position is that Article II solely bans the appropriation of territory, not the extraction of resources themselves.
We are now at a crucial moment in the development of space law. Arguing over whether extraction is legal serves no purpose. Instead, the focus must shift to ensuring resource extraction is carried out in accordance with principles that ensure the safe and responsible use of outer space.
International and national space laws
A significant development in the governance of space resources has been the adoption of the Artemis Accords, which, as of June 2025, have 55 signatory nations. The accords reflect a growing international consensus concerning the exploitation of space resources.
Notably, Section 10 of the accords indicates that the exploitation of space resources does not constitute appropriation, and therefore doesn’t violate the Outer Space Treaty.
Considering the typically slow pace of multilateral negotiations, a handful of nations have introduced national legislation. These laws establish the legality of space resource exploitation, allowing private companies to request licences to conduct this type of activity.
Among these, Luxembourg’s legal framework is the most complete, setting out a series of requirements for authorizing the exploitation of space resources. In fact, ispace’s licence to collect lunar regolith was obtained under this regime.
The first high-resolution image taken on the first day of the Artemis I mission by a camera on the tip of one of Orion’s solar arrays. The spacecraft was 57,000 miles from Earth when the image was captured. (NASA)
The other national regulations tend to limit themselves to proclaiming the legality of this activity without entering into much detail, deferring the specifics of implementation to future regulations.
While these initiatives served to put space resources at the forefront of international forums, they also risk regulatory fragmentation, as different countries adopt varying standards and approaches.
In May 2025, Steven Freeland, chair of the United Nations working group on the legal aspects of space resource activities, presented a draft of recommended principles based on input from member states.
These principles reaffirm the freedom of use and exploration of outer space for peaceful purposes, while introducing rules pertaining to the safety of the activities and their sustainability, as well as the protection of the environment, both of Earth and outer space.
The development of a legal framework for space resources is still in its early stages. The working group is expected to submit its final report by 2027, but the non-binding nature of the principles raises concerns about their enforcement and application.
As humanity moves closer to extracting and using space resources, the need for a cohesive and responsible governance system has never been greater.
Martina Elia Vitoloni does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Rizwan Virk, Faculty Associate, PhD Candidate in Human and Social Dimensions of Science and Technology, Arizona State University
In Stephenson’s novel ‘The Diamond Age,’ a device called the Young Lady’s Illustrated Primer offers emotional, social and intellectual support.Christopher Michel/Wikimedia Commons, CC BY-SA
Every time I read about another advance in AI technology, I feel like another figment of science fiction moves closer to reality.
Neal Stephenson’s 1995 novel “The Diamond Age” depicted a post-cyberpunk sectarian future, in which society is fragmented into tribes, called phyles. In this future world, sophisticated nanotechnology is ubiquitous, and a new type of AI is introduced.
Though inspired by MIT nanotech pioneer Eric Drexler and Nobel Prize winner Richard Feynman, the advanced nanotechnology depicted in the novel still remains out of reach. However, the AI that’s portrayed, particularly a teaching device called the Young Lady’s Illustrated Primer, isn’t only right in front of us; it also raises serious issues about the role of AI in labor, learning and human behavior.
In Stephenson’s novel, the Primer looks like a hardcover book, but each of its “pages” is really a screen display that can show animations and text, and it responds to its user in real time via AI. The book also has an audio component, which voices the characters and narrates stories being told by the device.
It was originally created for the young daughter of an aristocrat, but it accidentally falls into the hands of a girl named Nell who’s living on the streets of a futuristic Shanghai. The Primer provides Nell personalized emotional, social and intellectual support during her journey to adulthood, serving alternatively as an AI companion, a storyteller, a teacher and a surrogate parent.
The AI is able to weave fairy tales that help a younger Nell cope with past traumas, such as her abusive home and life on the streets. It educates her on everything from math to cryptography to martial arts. In a techno-futuristic homage to George Bernard Shaw’s 1913 play “Pygmalion,” the Primer goes so far as to teach Nell the proper social etiquette to be able to blend into neo-Victorian society, one of the prominent tribes in Stephenson’s balkanized world.
No need for ‘ractors’
Three recent developments in AI – in video games, wearable technology and education – reveal that building something like the Primer should no longer be considered the purview of science fiction.
In May 2025, the hit video game “Fortnite” introduced an AI version of Darth Vader, who speaks with the voice of the late James Earl Jones.
While it was popular among fans of the game, the Screen Actors Guild lodged a labor complaint with Epic Games, the creator of “Fortnite.” Even though Epic had received permission from the late actor’s estate, the Screen Actors Guild pointed out that actors could have been hired to voice the character, and the company – in refusing to alert the union and negotiate terms – violated existing labor agreements.
In “The Diamond Age,” while the Primer uses AI to generate the fairy tales that train Nell, for the voices of these archetypal characters, Stephenson concocted a low-tech solution: The characters are played by a network of what he termed “ractors” – real actors working in a studio who are contracted to perform and interact in real time with users.
The Darth Vader “Fortnite” character shows that a Primer built today wouldn’t need to use actors at all. It could rely almost entirely on AI voice generation and have real-time conversations, showing that today’s technology already exceeds Stephenson’s normally far-sighted vision.
Recording and guiding in real time
Synthesizing James Earl Jones’ voice in “Fortnite” wasn’t the only recent AI development heralding the arrival of Primer-like technology.
I recently witnessed a demonstration of wearable AI that records all of the wearer’s conversations. Their words are then sent to a server so they can be analyzed by AI, providing both summaries and suggestions to the user about future behavior.
Several startups are making these “always on” AI wearables. In an April 29, 2025, essay titled “I Recorded Everything I Said for Three Months. AI Has Replaced My Memory,” Wall Street Journal technology columnist Joanna Stern describes the experience of using this technology. She concedes that the assistants created useful summaries of her conversations and meetings, along with helpful to-do lists. However, they also recalled “every dumb, private and cringeworthy thing that came out of my mouth.”
AI wearable devices that continuously record the conversations of their users have recently hit the market.
These devices also create privacy issues. The people whom the user interacts with don’t always know they are being recorded, even as their words are also sent to a server for the AI to process them. To Stern, the technology’s potential for mass surveillance becomes readily apparent, presenting a “slightly terrifying glimpse of the future.”
Relying on AI engines such as ChatGPT, Claude and Google’s Gemini, the wearables work only with words, not images. Behavioral suggestions occur only after the fact. However, a key function of the Primer – coaching users in real time in the middle of any situation or social interaction – is the next logical step as the technology advances.
Education or social engineering?
In “The Diamond Age,” the Primer doesn’t simply weave interactive fairy tales for Nell. It also assumes the responsibility of educating her on everything from her ABCs when younger to the intricacies of cryptography and politics as she gets older.
There are certainly advantages to AI tutors: Tutoring and college tuition can be exorbitantly expensive, and the technology can offer better access to education to people of all income levels.
Pulling together these latest AI advances – interactive avatars, behavioral guides, tutors – it’s easy to envision how an AI device like the Young Lady’s Illustrated Primer could be created in the near future. A young person might have a personalized AI character that accompanies them at all times. It can teach them about the world and offer up suggestions for how to act in certain situations. The AI could be tailored to a child’s personality, concocting stories that include AI versions of their favorite TV and movie characters.
But “The Diamond Age” offers a warning, too.
Toward the end of the novel, a version of the Primer is handed out to hundreds of thousands of young Chinese girls who, like Nell, didn’t have access to education or mentors. This leads to the education of the masses. But it also opens the door to large-scale social engineering, creating an army of Primer-raised martial arts experts, whom the AI then directs to act on behalf of “Princess Nell,” Nell’s fairy tale name.
It’s easy to see how this sort of large-scale social engineering could be used to target certain ideologies, crush dissent or build loyalty to a particular regime. The AI’s behavior could also be subject to the whims of the companies or individuals that created it. A ubiquitous, always-on, friendly AI could become the ultimate monitoring and reporting device. Think of a kinder, gentler face for Big Brother that people have trusted since childhood.
While large-scale deployment of a Primer-like AI could certainly make young people smarter and more efficient, it could also hamper one of the most important parts of education: teaching people to think for themselves.
Rizwan Virk owns shares of investments funds which own stock in various private AI companies such as Open AI and X.ai. He owns public stock in Google and Microsoft. Virk has family members who work for a wearable AI company.
Source: The Conversation – USA – By Robert Bird, Professor of Business Law & Eversource Energy Chair in Business Ethics, University of Connecticut
Something dangerous is happening to the U.S. economy, and it’s not inflation or trade wars. Chaotic deregulation and the selective enforcement of laws have upended markets and investor confidence. At one point, the threat of tariffs and the resulting chaos wiped out US$4 trillion in value from the U.S. stock market. This approach isn’t helping the economy, and there are troubling signs it will hurt both the U.S. and the global economy in the short and long term.
The rule of law – the idea that legal rules apply to everyone equally, regardless of wealth or political connections – is essential for a thriving economy. Yet globally the respect for the rule of law is slipping, and the U.S. is slipping with it. According to annual rankings from the World Justice Project, the rule of law has declined in more than half of all countries for seven years in a row. The rule of law in the U.S., the most economically powerful nation in the world, is now weaker than the rule of law in Uruguay, Singapore, Latvia and over 20 other countries.
When regulation is unnecessarily burdensome for business, government should lighten the load. However, arbitrary and frenzied deregulation does not free corporations to earn higher profits. As a business school professor with an MBA who has taught business law for over 25 years, and the author of a recently published book about the importance of legal knowledge to business, I can affirm that the opposite is true. Chaotic deregulation doesn’t drive growth. It only fuels risk.
Chaos undermines investment, talent and trust
Legal uncertainty has become a serious drag on American competitiveness.
A study by the U.S. Chamber of Commerce found that public policy risks — such as unexpected changes in taxes, regulation and enforcement — ranked among the top challenges businesses face, alongside more familiar business threats such as competition or economic volatility. Companies that can’t predict how the law might change are forced to plan for the worst. That means holding back on long-term investment, slowing innovation and raising prices to cover new risks.
When the government enforces rules arbitrarily, it also undermines property rights.
For example, if a country enters into a major trade agreement and then goes ahead and violates it, that threatens the property rights of the companies that relied on the agreement to conduct business. If the government can seize assets without due process, those assets lose their stability and value. And if that treatment depends on whether a company is in the government’s political favor, it’s not just bad economics − it’s a red flag for investors.
When government doesn’t enforce rules fairly, it also threatens people’s freedom to enter into contracts.
Consider presidential orders that threaten the clients of law firms that have challenged the administration with cancellation of their government contracts. The threat alone jeopardizes the value of those agreements.
If businesses can’t trust public contracts to be respected, they’ll be less likely to work with the government in the first place. This deprives the government, and ultimately the American people, of receiving the best value for their tax dollars in critical areas such as transportation, technology and national defense.
Regulatory chaos also allows corruption to spread.
For example, the Foreign Corrupt Practices Act, which prohibits businesses from bribing foreign government officials, has leveled the playing field for firms and enabled the best American companies to succeed on their merits. Before the law was enacted in 1977, some American companies felt pressured to pay bribes to compete. “Pausing” enforcement of the law, as the current presidential administration has done, increases the cost of doing business and encourages a wild west economy where chaos thrives.
Chaotic enforcement of the law also corrodes labor markets.
American companies require a strong pool of talented professionals to fuel their financial success. When legal rights are enforced arbitrarily or unjustly, the very best talent that American companies need may leave the country.
The science brain drain is already happening. American scientists have submitted 32% more applications for jobs abroad compared with last year. Nonscientists are leaving too. Ireland’s Department of Foreign Affairs has witnessed a 50% increase in Americans taking steps to obtain an Irish passport. Employers in the U.K. saw a spike in job applications from the United States.
Businesses from other countries will gladly accept American talent as they compete against American companies. During the Third Reich, Nazi Germany lost its best and brightest to other countries, including America. Now the reverse is happening, as highly talented Americans leave to work for firms in other nations.
Threats of arbitrary legal actions also drive away democratic allies and their prosperous populations that purchase American-made goods and services. For example, arbitrarily threatening to punish or even annex a closely allied nation does not endear its citizens to that government or the businesses it represents. So it’s no surprise that Canadians are now boycotting American goods and services. This is devastating businesses in American border towns and hurts the economy nationwide.
Similarly, the Canadian government has responded to whipsawing U.S. tariff announcements with counter-tariffs, which will slice the profits of American exporters. Close American allies and trading partners such as Japan, the U.K. and the European Union are also signaling their own willingness to impose retaliatory tariffs, increasing the costs of operations to American business even more.
Modern capitalism depends on smart regulation to thrive. Smart regulation is not an obstacle to capitalism. Smart regulation is what makes American capitalism possible. Smart regulation is what makes American freedom possible.
Clear and consistently applied legal rules allow businesses to aggressively compete, carefully plan, and generate profits. An arbitrary rule of law deprives business of the true power of capitalism – the ability to promote economic growth, spur innovation and improve the overall living standards of a free society. Americans deserve no less, and it is up to government to make that happen for everyone.
Robert Bird does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
About 600 miles off the west coast of Africa, large clusters of thunderstorms begin organizing into tropical storms every hurricane season. They aren’t yet in range of Hurricane Hunter flights, so forecasters at the National Hurricane Center rely on weather satellites to peer down on these storms and beam back information about their location, structure and intensity.
The satellite data helps meteorologists create weather forecasts that keep planes and ships safe and prepare countries for a potential hurricane landfall.
Now, meteorologists are about to lose access to three of those satellites.
On June 25, 2025, the Trump administration issued a service change notice announcing that the Defense Meteorological Satellite Program, DMSP, and the Navy’s Fleet Numerical Meteorology and Oceanography Center would terminate data collection, processing and distribution of all DMSP data no later than June 30. The data termination was postponed until July 31 following a request from the head of NASA’s Earth Science Division.
How hurricanes form. NOAA
I am a meteorologist who studies lightning in hurricanes and helps train other meteorologists to monitor and forecast tropical cyclones. Here is how meteorologists use the DMSP data and why they are concerned about it going dark.
Looking inside the clouds
At its most basic, a weather satellite is a high-resolution digital camera in space that takes pictures of clouds in the atmosphere.
These are the satellite images you see on most TV weather broadcasts. They let meteorologists see the location and some details of a hurricane’s structure, but only during daylight hours.
Hurricane Flossie spins off the Mexican coast on July 1, 2025. Images show the top of the hurricane from space as day turns to night. NOAA GOES
Meteorologists can use infrared satellite data, similar to a thermal imaging camera, at all hours of the day to find the coldest cloud-top temperatures, highlighting areas where the highest wind speeds and rainfall rates are found.
But while visible and infrared satellite imagery are valuable tools for hurricane forecasters, they provide only a basic picture of the storm. It’s like a doctor diagnosing a patient after a visual exam and checking their temperature.
Infrared bands show more detail of Hurricane Flossie’s structure on July 1, 2025. NOAA GOES
For more accurate diagnoses, meteorologists rely on the DMSP satellites.
The three satellites orbit Earth 14 times per day with special sensor microwave imager/sounder instruments, or SSMIS. These let meteorologists look inside the clouds, similar to how an MRI in a hospital looks inside a human body. With these instruments, meteorologists can pinpoint the storm’s low-pressure center and identify signs of intensification.
Precisely locating the center of a hurricane improves forecasts of the storm’s future track. This lets meteorologists produce more accurate hurricane watches, warnings and evacuations.
About 80% of major hurricanes – those with wind speeds of at least 111 mph (179 kilometers per hour) – rapidly intensify at some point, ramping up the risks they pose to people and property on land. Finding out when storms are about to undergo intensification allows meteorologists to warn the public about these dangerous hurricanes.
Where are the defense satellites going?
NOAA’s Office of Satellite and Product Operations described the reason for turning off the flow of data as a need to mitigate “a significant cybersecurity risk.”
The three satellites have already operated for longer than planned.
The DMSP satellites were launched between 1999 and 2009 and were designed to last for five years. They have now been operating for more than 15 years. The United States Space Force recently concluded that the DMSP satellites would reach the end of their lives between 2023 and 2026, so the data would likely have gone dark soon.
The advanced technology microwave sounder, or ATMS, can provide data similar to the special sensor microwave imager/sounder, or SSMIS, but at a lower resolution. It provides a more washed-out view that is less useful than the SSMIS for pinpointing a storm’s location or estimating its intensity.
Images of Hurricane Erick off the coast of Mexico, viewed from NOAA-20’s ATMS (left) and DMSP SSMIS (right) on June 18 show the difference in resolution and the higher detail provided by the SSMIS data. U.S. Naval Research Laboratory, via Michael Lowry
ML-1A is a microwave satellite that will help replace some of the DMSP satellites’ capabilities. However, the government hasn’t announced whether the ML-1A data will be available to forecasters, including those at the National Hurricane Center.
Why are satellite replacements last minute?
Satellite programs are planned over many years, even decades, and are very expensive. The current geostationary satellite program launched its first satellite in 2016 with plans to operate until 2038. Development of the planned successor for GOES-R began in 2019.
Scientists prepare a GOES-R satellite for packing aboard a rocket in 2016. NASA/Charles Babir
Delays in developing the satellite instruments and funding cuts caused the National Polar-orbiting Operational Environmental Satellite System and Defense Weather Satellite System to be canceled in 2010 and 2012 before any of their satellites could be launched.
The 2026 NOAA budget request includes an increase in funding for the next-generation geostationary satellite program, so it can be restructured to reuse spare parts from existing geostationary satellites. The budget also terminates contracts for ocean color, atmospheric composition and advanced lightning mapper instruments.
Hurricane forecasters will continue to use all available tools, including satellite, radar, weather balloon and dropsonde data, to monitor the tropics and issue hurricane forecasts. But the loss of satellite data, along with other cuts to data, funding and staffing, could ultimately put more lives at risk.
Chris Vagasky is a member of the American Meteorological Society and the National Weather Association.
On June 26, 2025, the U.S. Supreme Court handed down a 6-3 ruling that preserves free preventive care under the Affordable Care Act, a popular benefit that helps approximately 150 million Americans stay healthy.
The case, Kennedy v. Braidwood, was the fourth major legal challenge to the Affordable Care Act. The decision, written by Justice Brett Kavanaugh with the support of Justices Amy Coney Barrett, Elena Kagan, Ketanji Brown Jackson and Sonia Sotomayor, ruled that insurers must continue to cover at no cost any preventive care approved by a federal panel called the U.S. Preventive Services Task Force.
Members of the task force are independent scientific experts, appointed for four-year terms. The panel’s role had been purely advisory until the ACA, and the plaintiffs contended that the members lacked the appropriate authority as they had not been appointed by the President and confirmed by the Senate. The Supreme Court rejected this argument, saying that members simply needed to be appointed by the Health and Human Services Secretary – currently, Robert F. Kennedy Jr. – which they had been, under his predecessor during the Biden administration.
This ruling seemingly safeguards access to preventive care. But as public health researchers who study health insurance and sexual health, we see another concern: It leaves preventive care vulnerable to how Kennedy and future HHS secretaries will choose to exercise their power over the task force and its recommendations.
What is the US Preventive Services Task Force?
The U.S. Preventive Services Task Force was initially created in 1984 to develop recommendations about prevention for primary care doctors. It is modeled after the Canadian Task Force on Preventive Health Care, which was established in 1976.
Under the ACA, insurers must fully cover all screenings and interventions endorsed by the U.S. Preventive Services Task Force. SDI Productions/E+ via Getty Images
The task force makes new recommendations and updates existing ones by reviewing clinical and policy evidence on a regular basis and weighing the potential benefits and risks of a wide range of health screenings and interventions. These include mammograms; blood pressure, colon cancer, diabetes and osteoporosis screenings; and HIV prevention. Over 150 million Americans have benefited from free coverage of these recommended services under the ACA, and around 60% of privately insured people use at least one of the covered services each year.
The task force plays such a crucial role in health care because it is one of three federal groups whose preventive care recommendations insurers must abide by under Section 2713 of the Affordable Care Act: the U.S. Preventive Services Task Force, the Advisory Committee on Immunization Practices and the Health Resources and Services Administration. For example, the coronavirus relief bill, which passed in March 2020 and allocated emergency funding in response to the COVID-19 pandemic, used this provision to ensure COVID-19 vaccines would be free for many Americans.
The Braidwood case and HIV prevention
This case, originally filed in Texas in 2020, was brought by Braidwood Management, a Christian for-profit corporation owned by Steven Hotze, a Texas physician and Republican activist who has previously filed multiple lawsuits against the ACA. Braidwood and its co-plaintiffs argued on religious grounds against being forced to offer preexposure prophylaxis, or PrEP, a medicine that prevents HIV infection, in their insurance plans.
At issue in Braidwood was whether task force members – providers and researchers who provide independent and nonpartisan expertise – were appropriately appointed and supervised under the appointments clause of the Constitution, which specifies how various government positions are appointed. The case called into question free coverage of all recommendations made by the task force since the Affordable Care Act was passed in March 2010.
In the ruling, Kavanaugh wrote that “the Task Force members’ appointments are fully consistent with the Appointments Clause in Article II of the Constitution.” In laying out his reasoning, he wrote, “The Task Force members were appointed by and are supervised and directed by the Secretary of HHS. And the Secretary of HHS, in turn, answers to the President of the United States.”
Concerns over political influence
The U.S. Preventive Services Task Force is meant to operate independently of political influence, and its decisions are technically not directly reviewable. However, its members are appointed by the HHS secretary, who may remove any of them at any time for any reason, even though such removals would be highly unusual.
Kennedy recently took the unprecedented step of removing all members of the Advisory Committee on Immunization Practices, which debates vaccine safety but also, crucially, helps decide what immunizations are free to Americans guaranteed by the Affordable Care Act. The newly constituted committee, appointed in weeks rather than years, includes several vaccine skeptics and has already moved to rescind some vaccine recommendations, such as routine COVID-19 vaccines for pregnant women and children.
Kennedy has also proposed restructuring out of existence the agency that supports the task force, the Agency for Healthcare Research and Quality. That agency has been subject to massive layoffs within the Department of Health and Human Services. For full disclosure, one of the authors is currently funded by the Agency for Healthcare Research and Quality and previously worked there.
The decision to safeguard the U.S. Preventive Services Task Force as a body and, by extension, free preventive care under the ACA, is not without risks. It highlights the fragility of long-standing, independent advisory systems in the face of the politicization of health: Kennedy could simply remove the existing task force members and replace them with appointees who, relying on debunked science and misinformation, reshape the types of care recommended to Americans by their doctors and insurance plans.
Partisanship and the politicization of health threaten trust in evidence. Already, signs are emerging that Americans on both sides of the political divide are losing confidence in government health agencies. This ruling preserves a crucial part of the Affordable Care Act, yet federal health guidelines and access to lifesaving care could still swing dramatically in Kennedy’s hands – or with each subsequent transition of power.
Paul Shafer receives research funding from the National Institutes of Health, Agency for Healthcare Research and Quality, and Department of Veterans Affairs. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of these agencies or the United States government.
Kristefer Stojanovski receives funding from the Robert Wood Johnson Foundation. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of these agencies or the United States government.
Source: The Conversation – France – By Jérémy Lemarié, Associate Professor (maître de conférences) at the Université de Reims Champagne-Ardenne (URCA)
Invented in Hawaii, surfing gained popularity in the United States and Australia in the 1950s before becoming a global phenomenon. Now practiced in more than 150 countries, its spread has been driven by media and tourism. Surf tourism involves travelling to destinations to catch waves, either with a surfboard or through activities such as body surfing or bodyboarding. Tourists range from seasoned surfers to beginners eager to learn.
The allure of California
For many, surf tourism evokes exotic imagery shaped by California production companies. Columbia Pictures in 1959 and Paramount Pictures in 1961 introduced surfing to the middle class, showcasing the sport as a gateway to summer adventure and escape. However, it was the 1966 movie The Endless Summer, directed and produced by Bruce Brown, that became a box office success. The film follows two Californians travelling the globe in search of the perfect wave, which they ultimately find in South Africa. Beneath the seemingly lighthearted portrayal of a “surf safari”, it carries undertones of colonial ambition.
In the film, the Californians tell people in Africa that waves are untapped resources ready to be named and conquered. This sense of Western cultural dominance over populations in poorer countries has permeated surf tourism. Since the 1970s, French surfers have flocked to Morocco for its long-breaking waves, Australians to Indonesia, and Californians to Mexico. The expansion of surfing to Africa, Asia and Latin America was enabled by easier international travel and by the economic disparities between visitors and hosts.
Surfing’s impact on local communities
Indonesia, for instance, became a surfing hotspot after Australian surfers started to explore the waves of Bali and the Mentawai Islands in the 1970s. Once remote regions with modest living standards, these areas saw tourism infrastructure mushroom to meet demand. Today, destinations such as Uluwatu and Padang Padang in Bali attract surfers of all skill levels.
Similarly, Morocco has experienced a surge in surf tourism, with spots such as Taghazout drawing European visitors in search of affordable waves and sunshine. While this has boosted local economies, it has also raised concerns about environmental degradation and the strain of tourism on previously untouched areas.
The challenges of overtourism in coastal areas
Although surfing is often seen as an activity in harmony with nature, mass tourism has created tensions between local surfers and visitors. Overtourism refers to the negative impact of excessive tourist numbers on natural environments and local communities.
One response to overtourism is localism – where local surfers assert ownership of waves, sometimes discouraging or even intimidating outsiders. This has been particularly pronounced in economically dependent surf destinations. For example, in Hawaii during the 1970s and 1980s, local surfers protested against the influx of professional Australian surfers and international competitions. Today, localism persists globally, from Maroubra in Sydney to Boucau-Tarnos in France’s Nouvelle-Aquitaine region. These spots are not strictly off-limits to beginners, but major conflicts can arise during peak tourist seasons.
Surf schools, while crucial for teaching newcomers, also exacerbate crowding. During high seasons, beaches such as Côte des Basques in Biarritz become overcrowded, straining relations between experienced surfers, instructors and novices. Beginners, often unaware of surf etiquette and safety rules, contribute to frustrations among seasoned surfers.
The role of public authorities
In response to these challenges, public initiatives have emerged to promote sustainable surf tourism. For instance, the Costa Rican government has established marine protected areas and regulated tourism activities to preserve parts of the coastal environment. Local authorities have also begun capping the number of surf schools and otherwise limiting access to the sport.
In southwestern France, municipalities use public service delegations (DSP), temporary occupation authorisations (AOT) and other tools to regulate surf schools operating on public beaches. Environmental awareness programmes have been launched to educate tourists on responsible behaviour toward beaches and oceans.
Gaps in regulation
Despite these measures, many coastal regions face insufficient action to address the environmental and social challenges posed by surf tourism. In Fiji, a 2010 decree deregulated the surf tourism industry, eliminating traditional indigenous rights to coastal and reef areas. This allowed unregulated development of tourism infrastructure, often ignoring long-term ecological impacts.
Similar issues are seen in Morocco, where lax regulations allow foreign investors to exploit coastal land for hotel development, often providing little benefit to local communities.
Yet, there are success stories. In Santa Cruz, California, the initiative Save Our Shores mobilises citizens and tourists to protect beaches through anti-pollution campaigns and regular cleanups.
Surf tourism has brought significant economic benefits to many coastal regions. However, it has also introduced social and environmental challenges, including localism, overcrowding and ecological strain. Managing these issues requires a collaborative approach, with governments, local stakeholders and tourists working together to preserve the sport’s connection to nature.
This article was published as part of the 2024 Fête de la Science, of which The Conversation France was a partner. The year’s theme, “Oceans of Knowledge,” explored the wonders of the marine world.
Jérémy Lemarié is a member of the Fulbright network, as the recipient of the “Chercheuses et Chercheurs” grant from the Franco-American Commission in 2022-2023.