The Commons order paper published on June 26 revealed that 126 Labour MPs had signed an amendment opposing a second reading for the bill, which proposes restrictions on disability benefits that they find unacceptable. Cleverly, the amendment accepts “the need for the reform of the social security system” but then lists a plethora of reasons why the signatories declined to give the bill a second reading when it comes to a vote on July 1.
Many of these reasons related to the government’s own assessment of the impact of the bill. It openly admits, for example, that an estimated 250,000 people, including 50,000 children, would be pushed into poverty by the changes being made to the social security system.
Faced with the possibility of losing a vote to his own MPs in the week marking the first anniversary of his arrival in Downing Street, prime minister Keir Starmer is promising to make concessions. These reportedly include exempting people currently receiving disability benefits from the changes.
But whether or not this is enough to stop the rebellion, significant damage has been done. Securing the second reading on half-promised and lukewarm concessions that cannot be sustained simply stores up future strife.
Collision course
How did the government reach a position where it was at risk of losing a vote on one of its key bills in the week in which it celebrates a year in office? Why has it been pushing a bill so obviously lacking in support among its own MPs? Why has no-one rolled with the political pitch and controlled the narrative?
This is not a muscle flexing exercise of the kind seen in December 1997, when Labour sought to show how tough it could be by cutting benefits for lone parents. It is not a macho attempt to see off a resurgent left flank, because effectively there isn’t one. The troublesome hard left is now tiny. Nor is it a putative rebellion that can be dismissed as dominated by the usual suspects. It is a rebellion of the mainstream core of the backbench parliamentary Labour party (PLP). Among the 126 MPs openly speaking out against the bill, 11 are Labour select committee chairs and 62 of them were only elected last year. In short, these are not the usual suspects. Their complaints cannot be readily dismissed.
There were allegedly noises off from some whips suggesting this might be a confidence issue – implying that the government could be in trouble, so pressure is being piled on rebels to withdraw or risk bringing down the government. I was a government whip from 1999 to 2002, and I can attest that no whip should be running around declaring this a potential “confidence vote”. And no MP should believe that it is. It is not. Were there any truth in these rumours, it would indicate a whips’ office either vastly inexperienced, overconfident and arrogant, or simply grossly incompetent and panicked. Both the chief whip and the No.10 political operation will come under intense scrutiny whatever happens now. How did they not see this coming?
The truth is that the only serious option at this point should be to bury the bill. It should be pulled before the vote and resurrected in the context of developing an anti-poverty strategy, including a child poverty alleviation plan. It might be that a sufficient number of “rebel signatories” are persuaded to let the second reading happen with a promise of further changes building on the concessions already announced, but this does not mean a safe passage later in the process. Many of the signatories will have already been disheartened and worried by the scrapping of the winter fuel allowance and the continuation of the two-child benefit limit. They may have acquiesced on the latter and pocketed the change in policy on the former, but their disquiet and anger has not gone away.
The government should never have been in a position of seriously considering pushing the bill through hoping it will secure Conservative support for its second reading. To do so would seriously threaten if not Starmer’s position, then certainly the position of the work and pensions secretary Liz Kendall – and even perhaps that of the chancellor, Rachel Reeves. All three will still emerge from this week damaged in some fashion.
Rebellions such as this can take on a dynamic and life of their own and are likely to grow rather than diminish. Some 106 Labour MPs signed the amendment initially – only to be joined by more in short order. Backbenchers will have been worried about being asked “what did you do in the war?” by their grassroots members had they not enlisted.
There is also a danger that once blooded by rebellion, some of the 120-plus MPs will get a taste for it – and that spells a real danger for the government, even one with a majority of 165.
Either way, the government, which was relying on the bill to make £5bn worth of savings that would supposedly obviate the need for tax rises in the autumn, is going to have to somehow salvage both its economic and its political strategy in the wake of this crisis – and start to take its backbenchers more seriously.
It’s not how anyone would have wanted to mark a year in office. Happy birthday, one and all.
This article includes links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Source: The Conversation – UK – By Professor Manda Banerji, Professor of Astrophysics, School of Physics & Astronomy, University of Southampton
We are entering a new era of cosmic exploration. The new Vera C Rubin Observatory in Chile will transform astronomy with its extraordinary ability to map the universe in breathtaking detail. It is set to reveal secrets previously beyond our grasp. Here, we delve into the first images taken by Rubin’s telescope and what they are already showing us.
These images vividly showcase the unprecedented power that Rubin will use to revolutionise astronomy and our understanding of the universe. Rubin is truly transformative, thanks to its unique combination of sensitivity, vast sky area coverage and exceptional image quality.
These pictures powerfully demonstrate those attributes. They reveal not only bright objects in exquisite detail but also faint structures, both near and far, across a large area of sky.
Cosmic nurseries – nebulae in detail
The stunning pink and blue clouds in this image are the Lagoon (lower left) and Trifid (upper right) nebulae. The word nebula comes from the Latin for cloud, and these giant clouds are truly enormous – so vast it takes light decades to travel across them. They are stellar nurseries, the very birth sites for the next generation of stars and planets in our Milky Way galaxy.
The intense radiation from hot, young stars energises the gas particles, causing them to glow pink. Further from these nascent stars, colder regions consist of microscopic dust grains. These reflect starlight (a process known in astronomy as “scattering”), much like our atmosphere, creating the beautiful blue hues. Darker filaments within are much denser regions of dust, obscuring all but the brightest background stars.
To detect these colours, astronomers use filters over their instruments, allowing only certain wavelengths of light onto the detectors. Rubin has six such filters, spanning from short ultraviolet (UV) wavelengths through the visible spectrum to longer near-infrared light. Combining information from these different filters enables detailed measurements of the properties of stars and gas, such as their temperature and size.
Rubin’s speed – its ability to take an image with one filter and then quickly move to the next – combined with the sheer area of sky it can see at any one time, is what makes it so unique and so exciting. The level of detail, revealing the finest and faintest structures, will enable it to map the substructure and satellite galaxies of the Milky Way like never before.
Mapping galaxies across billions of light years
This image captures a small section of NSF–DOE Vera C. Rubin Observatory’s view of the Virgo Cluster, offering a vivid glimpse of the variety in the cosmos. Credit: NSF–DOE Vera C. Rubin Observatory
The images of galaxies powerfully demonstrate the scale at which the Rubin observatory will map the universe beyond our own Milky Way. The large galaxies visible here (such as the two bright spiral-shaped galaxies visible in the lower right quarter of the picture) belong to the Virgo cluster, a giant structure containing more than 1,000 galaxies, each holding billions to trillions of stars.
This image beautifully showcases the huge diversity of shapes, sizes and colours of galaxies in our universe revealed by Rubin in their full technicolour glory. Inside these galaxies, bright dots are visible – these are star-forming regions, just like the Lagoon and Trifid nebulae, but remarkably, these are millions of light years away from us.
The still image captures just 2% of the area of a full Rubin image, revealing a universe that is teeming with celestial bodies. The full image, which contains around ten million galaxies, would need several hundred ultra high-definition TV screens to display in all its detail. By the end of its ten-year survey, Rubin will catalogue the properties of some 20 billion galaxies. Their colours and locations on the sky contain information about even more mysterious components of our universe, such as dark matter and dark energy. Dark matter makes up most of the matter in the cosmos, but does not reflect or emit light. Dark energy seems to be responsible for the accelerating expansion of the universe.
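As a rough sanity check on that claim, we can run the arithmetic ourselves. The figures below are assumptions not stated in the article: the commonly quoted 3.2-gigapixel size of Rubin's LSST camera images, and a standard 4K ultra high-definition screen.

```python
# Back-of-envelope check of the "several hundred TV screens" claim.
# Assumes a ~3.2-gigapixel full Rubin image (a commonly quoted figure,
# not stated in the article) and standard 4K UHD screens.
camera_pixels = 3.2e9              # pixels in one full Rubin image (assumed)
uhd_pixels = 3840 * 2160           # ~8.3 million pixels per 4K screen
screens = camera_pixels / uhd_pixels
print(round(screens))              # roughly 386 screens
```

Under these assumptions, displaying one full image pixel-for-pixel would indeed take several hundred 4K screens, consistent with the figure quoted above.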
The UK’s role
These unfathomable numbers demand data processing on a whole new scale.
Uncovering new discoveries from this data requires a giant collaborative effort, in which UK astronomy is playing a major role. The UK will process around 1.5 million Rubin images and hosts one of three international data access centres for the project, providing scientists across the globe with access to the vast Rubin data. Here at the University of Southampton, we are leading two critical software development contributions to Rubin.
The first of these is the capability to combine the Rubin images with those at longer infrared wavelengths. This extends the colours that Rubin sees, providing key diagnostic information about the properties of stars and galaxies. The second is the software that will link Rubin observations to another new instrument called 4MOST, soon to be installed at the Vista telescope in Chile.
Part of 4MOST’s job will be to snap up and classify rapidly changing “sources”, or objects, in the sky that have been discovered by Rubin. One such type of rapidly changing source is a stellar explosion known as a supernova. Within just two years, we expect to have catalogued more supernova explosions than have ever been recorded before. Our contributions to the Rubin project will therefore lead to a totally new understanding of how the stars and galaxies in our universe live and die, offering an unprecedented glimpse into the grand cosmic cycle.
The Rubin observatory isn’t just a new telescope – it’s a new pair of eyes on the universe, revealing the cosmos in unprecedented detail. A treasure trove of discoveries awaits, but most interesting among them will be the hidden secrets of the universe that we are yet to contemplate. The first images from Rubin have been a spectacular demonstration of the vastness of the universe. What might we find in this gargantuan dataset of the cosmos as the ultimate timelapse movie of our universe unfolds?
Professor Manda Banerji receives funding from the Royal Society and the Science and Technology Facilities Council.
Dr Philip Wiseman receives funding from the Science and Technology Facilities Council
Source: The Conversation – UK – By Leonie Fleischmann, Senior Lecturer in International Politics, City St George’s, University of London
With all eyes on the ceasefire between Israel and Iran, which came into effect 12 days after Israel launched a major attack on Iran’s nuclear and military structure, attention towards Gaza has waned. This is at a time when attempting to gain access to food under a new model of aid distribution has been described by the United Nations as a “death trap”.
According to the UN World Food Programme, more than 470,000 people are facing “catastrophic” hunger and the entire population is experiencing “acute” food insecurity. This was exacerbated when Israel imposed a blockade on the Strip in mid-March 2025, preventing the entry of food, medication and other aid for a period of 70 days.
Following international pressure, Israel’s prime minister, Benjamin Netanyahu, ordered the resumption of humanitarian aid through a new model of distribution, which bypasses the existing UN and NGO channels. It was devised by Israel and handed to a United States-backed organisation, the Gaza Humanitarian Foundation (GHF) to operate.
According to Netanyahu, taking control of aid delivery would prevent Hamas from seizing and selling supplies. Two of his cabinet ministers, far-right politicians Bezalel Smotrich and Itamar Ben Gvir, objected to any aid entering Gaza, due to the risk of it serving to bolster Hamas.
A video was circulated on social media on June 26 allegedly showing armed men from Hamas commandeering aid trucks in northern Gaza. Smotrich threatened to leave the coalition if supplies continued to reach the hands of Hamas. In response, Netanyahu has since halted the entry of humanitarian aid into the north of Gaza.
GHF was ostensibly established to improve the distribution of aid in Gaza. But the UN swiftly condemned its new distribution model as “inadequate, dangerous and a violation of impartiality rules”.
Reports from one distribution site on its first day of operation on May 27 showed scenes of chaos and confusion. The site outside Rafah was described as overwhelmed, with hundreds of people rushing towards the aid boxes. The New York Times reported that Israel Defense Forces (IDF) personnel fired several warning shots, which sent the crowd running away in panic.
In the past two months, there have been continued reports of violence and chaos at the distribution sites, with deadly incidents a near daily occurrence. On the day the ceasefire between Iran and Israel was confirmed (June 24) at least 46 Palestinians waiting for aid in Gaza were shot by Israeli forces in two separate incidents, according to Gaza’s civil defence agency. Over 400 Palestinians have been killed around the four aid distribution centres since they began operating.
A letter signed by leading aid and human rights organisations criticised the GHF for not meeting the four universally recognised principles for humanitarian action: humanity, neutrality, impartiality and independence.
Critics say that the GHF system effectively militarises aid distribution. GHF’s leadership is made up of retired military officers and private security contractors, with some humanitarian aid officials. It coordinates with a private US security company on the ground in Gaza. Meanwhile the IDF patrols the perimeters at what it calls “secure distribution sites”.
Critics argued that the proposed model would be insufficient. The plan called for only four aid distribution centres to be established in the southern part of the Gaza Strip, compared with about 400 UN-led sites in operation across Gaza prior to October 7 2023.
The reduced number and location of the aid sites can be understood as a mechanism of forced displacement. It appears to be consistent with Netanyahu’s plan to relocate Palestinians to a “sterile zone” in Gaza’s far south. UN officials argued that the requirement for civilians to travel long distances and to cross Israeli military lines and combat zones to collect aid from the sites would “put civilian lives in danger and cause mass displacement while using aid as ‘bait’”. Forced displacement is illegal under international law.
Countering the criticisms
The GHF rejected claims that the IDF have attacked Palestinians at the aid sites. Reports from Israeli news outlets have also countered the widespread media claims.
Israel Hayom, a free Israeli Hebrew-language daily newspaper, criticised “inflammatory” reports that the IDF had opened fire on Palestinians lining up for food. The right-leaning news outlet argued that it was Hamas which had shot at Gazan civilians.
The broadcaster 7 Israel National News reported that Hamas killed eight aid workers from the GHF in early June. A more positive spin from the same outlet highlighted improvements that have been made to security at the centres, and reported that enough supplies for 1.4 million meals had been distributed in a single day on June 5.
Despite these claims from within Israel, evidence presented by the UN has suggested that the aid mechanisms are not only failing to meet the humanitarian needs in Gaza, but are making “a desperate situation worse”.
Following two months in operation, 15 human rights and legal organisations have called for the GHF to be suspended. They argue that “this new model of privatised, militarised aid distribution constitutes a radical and dangerous shift away from established international humanitarian relief operations”.
As a consequence of both the controversial establishment of the GHF and its failures on the ground, they believe that its operations may amount to grave violations of international humanitarian, human rights and criminal law.
Leonie Fleischmann does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Luca Stroppa, Postdoctoral Fellow on the project “Early Diagnosis – Handling Knowing”, University of St Andrews
The current heel-prick test checks for nine rare genetic conditions. antibydni/Shutterstock
By 2030, every baby born in the UK could have their entire genome sequenced under a new NHS initiative to “predict and prevent illness”. This would dramatically expand the current heel-prick test, which checks for nine rare genetic conditions, into a far more extensive screen of hundreds of potential risks.
On the surface, the idea sounds like an obvious win for public health: spot problems early, intervene sooner and save lives. But genetic testing on this scale carries real risks, especially if the results are misunderstood or poorly communicated.
The new plan builds on a recent NHS pilot study that sequenced the genomes of 100,000 newborns in England to identify more than 200 genetic conditions. However, these tests don’t provide clear cut answers. They don’t offer a diagnosis or certainty, just an estimate of risk.
A genetic result might suggest a child has a higher (or lower) probability of developing a certain disease later in life. But risk is not prediction. If parents, or even clinicians, misinterpret that nuance, the consequences could be serious.
Some families may come to see a child flagged as “at risk” as a patient-in-waiting. In extreme cases, they may treat a probability as a certainty; assuming, for instance, that a child “has the gene” and will inevitably become ill. That assumption could reshape how children are raised, how they’re treated and how they see themselves.
When testing is indiscriminate and communication unclear, the fallout can be wide ranging. Children identified as “high risk” may undergo years of monitoring, unnecessary medical appointments, or even treatment for diseases they never develop. In some cases, this leads to physical harms, from unnecessary medications to procedures with side effects. In others, the damage is psychological: shaping a child’s identity around an anticipated future of illness. These psychological effects can be lasting. Being told you’re likely to develop a condition like dementia may influence how a person plans their life, even if that illness never materialises.
False positives
There are also broader issues with applying this kind of screening to everyone. Risk based testing works best when it’s targeted; for example, among those with symptoms or a strong family history. But in the general population, where most people are healthy, false positives can far outnumber accurate results. Even well designed tests can produce misleading outcomes when applied at scale.
This is a well-known statistical effect, discussed during the COVID pandemic. In populations where a disease is rare, even highly accurate tests produce more false positives than true ones. If DNA screening is rolled out universally, many families will be told their child is at risk when they are not. These false positives can lead to a cascade of further tests, stress and unnecessary clinical interventions; all of which consume time and resources and may cause real harm.
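The arithmetic behind this effect is worth making concrete. The numbers below are purely illustrative assumptions (not figures from any NHS programme): a condition affecting 0.1% of newborns, screened with a test that is 99% sensitive and 99% specific.

```python
# Base-rate illustration: why even accurate tests mislead when screening
# for rare conditions. All numbers are illustrative assumptions, not
# figures from the article or any real screening programme.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) for a screened population."""
    affected = population * prevalence
    healthy = population - affected
    true_positives = affected * sensitivity        # affected, correctly flagged
    false_positives = healthy * (1 - specificity)  # healthy, wrongly flagged
    return true_positives, false_positives

# A rare condition (0.1% prevalence) screened with a 99%-accurate test:
tp, fp = screening_outcomes(100_000, 0.001, 0.99, 0.99)
ppv = tp / (tp + fp)
print(f"true positives:  {tp:.0f}")    # 99
print(f"false positives: {fp:.0f}")    # 999
print(f"chance a flagged child is actually affected: {ppv:.0%}")  # ~9%
```

Under these assumptions, false alarms outnumber genuine cases by roughly ten to one: around 91% of flagged children would be perfectly healthy, despite the test being 99% accurate.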
This issue already affects adult testing. For example, Alzheimer’s tests that measure early changes in the brain work well in memory clinics, where patients already show symptoms. But when these same tests are used on the general population, where most people are healthy, they produce false positives in up to two-thirds of cases. If genetic screening in newborns is rolled out in the same way, it could lead to similar problems: mislabelling healthy children as sick, and causing unnecessary worry and follow-up tests.
So what’s the solution? It’s not to abandon genetic testing altogether – far from it. When used carefully, genomic data can offer real benefits, particularly for patients with symptoms or in research settings. But if we’re going to roll this out to every newborn, the surrounding infrastructure needs to be robust.
That includes:
Clear, consistent communication: Risk scores must be explained in ways that emphasise uncertainty, not oversold as definitive predictions.
Support for parents: For consent to be truly informed, parents need help understanding that a genetic flag is not a diagnosis – and that many people with elevated risk never go on to develop the condition.
Training for clinicians: Many doctors still lack the tools to interpret and explain genetic information accurately and responsibly.
A national network of genetic counsellors: Genetic counsellors are essential for supporting families through testing and interpretation. But current numbers in the UK fall far short of what universal newborn screening would require.
Genomic data holds great promise. But using it as a blanket tool for all newborns demands caution, clarity, and investment in communication and care. Without these safeguards, we risk turning healthy babies into patients-in-waiting.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Public support for reforming the UK’s first past the post electoral system has risen markedly of late. So is there any serious chance that such reform could actually happen?
The annual British Social Attitudes survey (BSA) has been tracking public attitudes to electoral reform (and other issues) since 1983. It found consistent majorities for the status quo up to 2017, but charts a dramatic shift since then. In the latest BSA, support for reform has risen to 60%, with just 36% backing the current arrangements.
It’s true that these views are unlikely to be deeply held: most people rarely think about electoral systems. But they do reflect a profound disillusionment with the way the political system is working.
Significant electoral reforms are very rare outside times of regime change. When I wrote a book on the subject in 2010, there had been just six major reforms (from one system type to another) in national parliaments in established democracies since the second world war. That number has increased a little since then, but only because Italy has got into a pattern of endless tinkering. The basic pattern is one of stability.
The main reason for that is obvious: those who gain power through the existing system rarely want to change it.
Yet the cases where reform has happened reveal two basic routes through which such change can take place.
First, those in power can conclude that a different system would better serve their interests. In 1985, for example, France’s president François Mitterrand replaced the system for electing the National Assembly because he feared heavy losses for his Socialist party in the looming elections.
Second, leaders can cave in to public demands for reform because they fear that failing to do so will add to their unpopularity. This requires a scandal that affects people in their daily lives, and campaigners who successfully pin blame for that scandal on the voting system. It typically also needs at least a few reform advocates within government.
These conditions characterised three major reforms in the 1990s, in Italy, Japan, and New Zealand. In the first two cases, rampant corruption fed economic woes and was attributed to the voting system. In New Zealand, first past the post enabled extreme concentration of power, which allowed successive governments to unleash radical, and widely disliked, economic restructuring.
Prospects for reform in the UK
If Labour continues to lag in the polls and votes remain fragmented across multiple parties, we might imagine reform by the first route in the UK. Ministers could calculate that a more proportional system would cut Labour’s losses, clip Nigel Farage’s wings, and reduce uncertainty.
Yet majority parties facing heavy defeat almost never change the system in this way. Mitterrand’s reform of 1985 was a rare exception. Such parties always hope things will turn around. They don’t want to look like they have given up. And they are used to playing a game of alternation in power: they want to hold all the levers some of the time, and will tolerate years in the wilderness to get that.
Reform by the second route is equally improbable. Notwithstanding great public dissatisfaction with the state of politics in the UK, there is little narrative that the electoral system is the source of the problem.
But, depending on the results, the chances of reform could grow after the next general election.
Change by the first route is most likely if no party comes close to a majority and a coalition is formed from multiple fragments. Those parties might all see reform as in their interests. Perhaps more likely, the smaller parties in such a coalition might push their larger partner into conceding a referendum – much as the Liberal Democrats did with the Conservatives in 2010. If support for the two big parties is disintegrating, referendum voters might opt for change – though that is not guaranteed.
As for the second route, a majority victory for Reform UK that was generated by first past the post from a small vote share could – given the party’s marmite quality – trigger widespread public rejection of the voting system. A clear path to change might open up if Reform then lost a subsequent election, particularly if it lost to a coalition of parties, some of which backed reform already.
In short, the shifting sands of politics are making electoral reform more likely. But almost certainly not before the 2030s. And much will depend on how the party system evolves in the years to come.
This article includes links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Alan Renwick does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Political scientists first identified a phenomenon known as the “rally round the flag” effect in the 1970s. This refers to the tendency for the US public to increase their support for a president when the country becomes involved in conflicts abroad. After the massive air strikes on Iran’s nuclear sites, the question is whether the US bombing missions will boost support for Donald Trump.
An Economist/YouGov poll conducted between June 19 and June 23 suggests that it is unlikely that the Trump administration will experience a “rally round the flag” event after the US air strikes on June 22.
The survey asked: “Do you think the U.S. military should or should not bomb Iranian nuclear facilities?” Some of those surveyed would have answered before the raids took place, while others were responding afterwards.
Donald Trump makes a public announcement of the US air strikes on Iran.
Altogether around 29% supported the bombing, with 46% opposed and 25% not sure. The chart identifies big differences between groups in their opinions about the raid, though. There is a considerable gender divide, with 38% of men supporting the action (44% opposed), but only 21% of women in favour (48% opposed).
In relation to ethnicity, 34% of white people supported it and 42% opposed the raid. In contrast black people were much more likely to oppose (66%), with just 7% supportive. Among Hispanics 26% supported and 43% opposed the bombing.
There was also a wide divide in opinions among age groups, with only 15% of those aged 18 to 29 supporting the air strikes and 59% opposing them. This was the highest level of opposition from any age group. It chimes with a general lack of support for Trump from this generation, with a massive 70% saying, in the same poll, that the country was heading in the wrong direction.
In contrast, those over the age of 65 were more in favour, with 42% supporting the military action and 37% opposing. This was the only age group in which supporters outnumbered opponents.
The group most opposed to the bombings were those with annual incomes over US$100,000 (£72,813), with 53% opposing and only 25% supporting. The lowest income group (those earning less than US$50,000) and middle income group (earning more than US$50,000 and less than US$100,000) had very similar views, with 30% and 31% supporting the attack respectively, and 45% and 46% opposing it.
Should the US military bomb Iranian nuclear facilities?
Author’s graph based on Economist/YouGov data, CC BY-ND
Perhaps the most interesting statistic is what those who voted for Trump in the presidential election last year thought about the president’s decision to attack Iran. Around half, 51%, of them supported the bombing, with 24% opposed. In the case of Harris voters only 10% supported the action while 70% opposed it.
We can get some idea of what prompts these responses by probing into the overall confidence the American people currently have in the Trump administration. There has been a gradual decline in the president’s job approval ratings: currently about 40% approve and 54% disapprove of his performance in the job. This compares with 43% approving and 51% disapproving in the Economist/YouGov survey published a month ago on May 19. Back on March 20, 48% of Americans approved of his job performance, while 49% disapproved.
When asked if they have a favourable or unfavourable view of Trump, 41% say the former and 54% the latter. This has also become slightly more negative since the Economist’s survey in May, when 44% felt favourably and 53% unfavourably.
Worries about a world war
It appears that many Americans are becoming fearful about their country’s future involvement in a war. Respondents were asked if they thought there was a greater or lesser chance of a world war compared with five years ago. Around 58% thought the chances were greater, compared with only 11% who thought they were lower.
A similar question asked if they thought the chances of a nuclear war were greater or lesser than five years ago. This produced a rather similar set of responses: 52% thought there was a greater chance, with only 12% thinking that the chances were lower.
The final and in many ways the most striking responses of all related to the question: “Do you think that things in this country today are under control or out of control?” A surprising 65% thought they were out of control and only 21% thought the opposite. This suggests that Trump’s erratic behaviour has started to spook Americans on a large scale, since, like national leaders around the world, they do not know what he will do next.
Paul Whiteley has received funding from the British Academy and the ESRC
A federal vaccine panel, recently reshaped by US health secretary Robert F. Kennedy Jr., has voted to discourage the use of flu vaccines containing thimerosal, a mercury-based preservative. The decision marks a dramatic shift in vaccine policy, as thimerosal has long been considered safe by health agencies worldwide, with its use already limited to a few multi-dose flu shots.
RFK Jr. has long linked thimerosal to autism – a connection that extensive scientific research has thoroughly debunked.
Thimerosal is an organic chemical containing mercury, used as a preservative in vaccines since the 1930s. Its effect comes from the mercury, which disrupts the function of enzymes in microbes such as bacteria and fungi. This prevents contamination of vaccines while they are stored in vials. Mercury, however, is also well known as a potent toxin acting on cells in the brain.
Much of mercury’s toxicity to brain cells stems from the same attributes that make thimerosal such a useful preservative. It disrupts the basic biological function of cells by changing the structure of proteins and enzymes.
In the brain, this can cause neurons to become excessively active, impair the way they use energy, increase inflammation and lead to the death of neurons. While mercury poisoning can damage brain function in adults, babies are even more vulnerable.
People have long understood that mercury is toxic. But in the latter half of the 20th century, scientists discovered that industrial mercury entered rivers and seas, accumulating in the tissues of fish and shellfish. The neurological consequences of consuming too much contaminated seafood could be severe. This led environmental scientists to determine safe levels of mercury exposure.
Anxiety about mercury in vaccines intensified when it was noticed that some children receiving multiple vaccines could exceed established safety limits for mercury exposure. These limits were based on environmental toxicity studies. How mercury affects the brain, though, depends very much on the chemical form in which it is ingested.
In the 20th century, scientists discovered that mercury accumulates in the fish that we eat. J nel/Shutterstock.com
Methylmercury v ethylmercury
The form of mercury that contaminates the environment as a consequence of industrial processes is methylmercury. The form that is part of thimerosal is ethylmercury.
The structure of these molecules differs in subtle but important ways. Methylmercury has one more carbon atom and two more hydrogen atoms than ethylmercury. These small differences significantly affect how each compound behaves in the body, particularly, in how easily they dissolve in fats.
Fat solubility is a key consideration in pharmacokinetics – the science of how drugs and other molecules travel through the body. Briefly, because cell membranes are made of fatty substances, a molecule’s ability to dissolve in fats strongly influences how it crosses these membranes and moves through the body.
It affects how a molecule is absorbed into the blood, how it is distributed to different tissues, how it is broken down by the body into other chemicals and how it is excreted.
Methylmercury from environmental contamination is more fat-soluble than ethylmercury from thimerosal. This means that it accumulates more easily in tissues, and is excreted from the body more slowly.
It also means that it can more easily cross into the brain and accumulate at greater concentrations for longer. For this reason, the safety guidelines that were established for methylmercury were unlikely to accurately predict the safety of ethylmercury.
Global policy shift amid public fear
Nevertheless, concerns about vaccine hesitancy, rising autism diagnoses and fears of a potential link to childhood vaccines led to thimerosal being almost entirely removed from childhood vaccines in the US by 2001 and in the UK between 2003 and 2005.
Beyond biological considerations, policymakers were also responding to concerns about how vaccine fears could undermine immunisation efforts and fuel the spread of infectious diseases.
Denmark, which removed thimerosal from childhood vaccines in 1992, provided an early opportunity to study the issue. Researchers compared the rates of autism before and after thimerosal’s removal as well as compared with similar countries still using it. Several large studies demonstrated conclusively that thimerosal was not causing autism or neurodevelopmental harm.
Despite the overwhelming evidence that thimerosal is safe, it is no longer widely used in childhood vaccines in high-income countries, replaced by preservative-free vaccines, which must be stored as a single dose per vial.
Storing multiple doses of a vaccine in the same vial, however, is still an extremely useful approach in resource-limited settings, in pandemics and where diseases require rapid, large-scale vaccination campaigns – common with influenza.
International health bodies, including the World Health Organization, continue to support thimerosal’s use. They emphasise that the benefits of immunisation far outweigh the theoretical risks from low-dose ethylmercury exposure.
Edward Beamer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Wimbledon is all about strawberries and cream (and of course tennis). The club itself describes strawberries and cream as “a true icon of The Championships”.
While a meal at one of the club’s restaurants can set you back £130 or more, a bowl of the iconic dish is a modest £2.70 (up from £2.50 in 2024 – the first price rise in 15 years). In 2024 visitors munched their way through nearly 2 million berries.
Strawberries and cream has a long association with Wimbledon. Even before lawn tennis was added to its activities, the All England Croquet Club (now the All England Lawn Tennis & Croquet Club) was serving strawberries and cream to visitors. They would have expected no less. Across Victorian Britain, strawberries and cream was a staple of garden parties of all sorts. Private affairs, political fundraisers and county cricket matches all typically served the dish.
Alongside string bands and games of lawn tennis, strawberries and cream were among the pleasures that Victorians expected to encounter at a fête or garden party. As a result, one statistician wrote in the Dundee Evening Telegraph in 1889, Londoners alone consumed 12 million berries a day over the summer. At that rate, he explained, if strawberries were available year-round, Britons would spend 24 times more on strawberries than on missionary work, and twice as much as on education.
But of course strawberries and cream were not available year-round. They were a delightful treat of the summer and the delicate berries did not last. Victorian newspapers, such as the Illustrated London News, complained that even the fruits on sale in London were a sad, squashed travesty of those eaten in the countryside, to say nothing of London’s cream, which might have been watered down.
Wimbledon’s lawn tennis championships were held in late June or early July – in the midst, in other words, of strawberry season.
Eating strawberries and cream had long been a distinctly seasonal pleasure. Seventeenth-century menu plans for elegant banquets offered strawberries, either with cream or steeped (rather deliciously, and I recommend you try this) in rose water, white wine, and sugar – as a suitable dish for the month of June.
They were, in the view of the 17th-century gardener John Parkinson, “a cooling and pleasant dish in the hot summer season”. They were, in short, a summer food. That was still the case in the 1870s, when the Wimbledon tennis championship was established.
This changed dramatically with the invention of mechanical refrigeration. From the late 19th century, new technologies enabled the global movement of chilled and frozen foods across vast oceans and spaces.
Domestic ice-boxes and refrigerators followed. These modern devices were hailed as freeing us from the tyranny of seasons. As the Ladies Home Journal magazine proclaimed triumphantly in 1929: “Refrigeration wipes out seasons and distances … We grow perishable products in the regions best suited to them instead of being forced to stick close to the large markets.” Eating seasonally, or locally, was a tiresome constraint and it was liberating to be able to enjoy foods at whatever time of year we desired.
As a result, points out historian Susanne Freidberg, our concept of “freshness” was transformed. Consumers “stopped expecting fresh food to be just-picked or just-caught or just-killed. Instead, they expected to find and keep it in the refrigerator.”
Today, when we can buy strawberries year round, we have largely lost the excitement that used to accompany the advent of the strawberry season. Colour supplements and supermarket magazines do their best to drum up some enthusiasm for British strawberries, but we are far from the days when poets could rhapsodise about dairy maids “dreaming of their strawberries and cream” in the month of May.
Strawberries and cream, once a “rare service” enjoyed in the short months from late April to early July, are now a season-less staple, available virtually year round from the global networks of commercial growers who supply Britain’s food. The special buzz about Wimbledon’s iconic dish of strawberries and cream is a glimpse into an earlier time, and reminds us that it was not always so.
Rebecca Earle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Portrait of a young boy with a paletta and a ball, late 16th century, artist unknown. Wiki Commons/Canva
In 1570, a Frenchman was arrested for smuggling clandestine correspondence between France and England. A passing comment in his interrogation document reveals that he also happened to be carrying a leather bag “in which there were three or four dozen balls of wool for playing tennis”.
The French term used was jeu de paume. This sport was played with the hand (palm), often gloved, rather than a racquet. This developed into the game that in English we usually refer to as “real tennis” (a different beast to the lawn tennis played at Wimbledon).
The interrogator believed that this cheap merchandise was simply a ruse for the man’s true purpose of communicating with Huguenot exiles. I have written a book, Huguenot Networks, based on this interrogation document, which will be published by Cambridge University Press later this year. But, as a historian, I was intrigued by both the number and makeup of the goods he was transporting. The wool, if wrapped tightly, could certainly have made these balls bouncy.
By chance, I encountered similar objects in a small display in the Palazzo Te in Mantua in Italy. These balls had apparently been retrieved from the palace roof and several others had come from a nearby church. They were variously made of leather, cloth and string rather than wool, probably stuffed with earth or animal hair. Just like the handmade “real tennis” balls of today, they were harder and more variable in size than regular tennis balls, and usually not so colourful, although sometimes having a simple painted design on the outside.
Today, “real tennis” is known as the “sport of kings”, praised for testing agility and athletic prowess. The most famous court in England is at Hampton Court, but many others survive in the UK. For instance, there is one down the road from where I work at the University of Warwick, at Moreton Morrell in Warwickshire.
In the 16th century, real tennis attracted gamblers, meaning it later became a target for Puritans. Anne Boleyn is said to have placed a wager on a match she was watching on the day of her arrest. And Henry VIII, fittingly, supposedly played a match on the day Boleyn was executed.
And if there is any doubt about how dangerous tennis could be, several royal deaths in France are attributed to it. King Louis X of France was a keen player of jeu de paume. He was the first ruler to order enclosed indoor courts to be constructed, a practice that later became popular across Europe.
In June 1316, after a particularly exhausting game, Louis X is said to have drunk a large quantity of chilled wine and soon afterwards died – probably of pleurisy, although there was some suspicion of poisoning.
Likewise, in August 1536, the death of the 18-year-old dauphin, eldest son of Francis I, was blamed on his Italian secretary, the Count of Montecuccoli, who had brought him a glass of cold water after a match. The count was subsequently executed despite a post-mortem suggesting that the prince had died of natural causes.
By the 16th century, there were two courts at the Louvre and many more around the city of Paris as well as at other royal residences. Ambassadors’ accounts describe frequent games between high-ranking courtiers and the king which could sometimes result in injury, especially if struck by one of the hard balls.
Our man carrying many tennis balls in 1570 had probably spotted a lucrative opportunity in response to rising demand. The French game had become increasingly popular in England under the Tudors.
By the Tudor period, no self-respecting European court was without its own purpose-built tennis courts where monarchs and their entourages tested their prowess and skill. They often did so before ambassadors, who could report back to their own rulers, making it a truly competitive international sport.
Thankfully, today’s game has far fewer dangers – there’s no risk of being hit by a ball full of earth or the fear of mortal retribution after beating an exhausted high-ranking opponent.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Penny Roberts does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science. Professor of Surficial and Sedimentary Geology, Indiana University
The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed
Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.
Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.
These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.
Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered landscape near Pensacola, N.C., in the Blue Ridge Mountains. Google Earth, CC BY
I study these disasters as a geomorphologist. In a new paper in the journal Science, I and a team of scientists from 18 universities and the U.S. Geological Survey explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.
The science behind cascading hazards
Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, biosphere and the earth are constantly reshaping the conditions that cause natural disasters.
For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.
A strong aftershock after a 7.8 magnitude earthquake in Sichuan province, China, in May 2008 triggered more landslides in central China. AP Photo/Andy Wong
Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.
These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.
Cascading hazards are not new, but their impact is intensifying as climate change fuels more extreme storms and wildfires.
Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.
Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.
Rethinking risk and building resilience
Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.
But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.
A March 2024 landslide in the Oregon Coast Range wiped out trees in its path. Brian Yanites, June 2025
A drone image of the same March 2024 landslide in the Oregon Coast Range shows where it temporarily dammed the river below. Brian Yanites, June 2025
Thanks to advances in Earth observation technology – such as satellite imagery, drones and lidar, which is similar to radar but uses light – scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.
Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.
Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.
Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.
Brian J. Yanites receives funding from the National Science Foundation.
Twenty-five years ago, “The Perfect Storm” roared into movie theaters. The disaster flick, starring George Clooney and Mark Wahlberg, was a riveting, fictionalized account of commercial swordfishing in New England and a crew who went down in a violent storm.
The anniversary of the film’s release, on June 30, 2000, provides an opportunity to reflect on the real-life changes to New England’s commercial fishing industry.
Fishing was once more open to all
In the true story behind the movie, six men lost their lives in late October 1991 when the commercial swordfishing vessel Andrea Gail disappeared in a fierce storm in the North Atlantic as it was headed home to Gloucester, Massachusetts.
At the time, and until very recently, almost all commercial fisheries were open access, meaning there were no restrictions on who could fish.
There were permit requirements and regulations about where, when and how you could fish, but anyone with the means to purchase a boat and associated permits, gear, bait and fuel could enter the fishery. Eight regional councils established under a 1976 federal law to manage fisheries around the U.S. determined how many fish could be harvested prior to the start of each fishing season.
Fishing has been an integral part of coastal New England culture since its towns were established. In this 1899 photo, a New England community weighs and packs mackerel. Charles Stevenson/Freshwater and Marine Image Bank
Fishing started when the season opened and continued until the catch limit was reached. In some fisheries, this resulted in a “race to the fish” or a “derby,” where vessels competed aggressively to harvest the available catch in short amounts of time. The limit could be reached in a single day, as happened in the Pacific halibut fishery in the late 1980s.
As populations declined, managers responded by cutting catch limits to allow more fish to survive and reproduce. Fishing seasons were shortened, as it took less time for the fleets to harvest the allowed catch. It became increasingly hard for fishermen to catch enough fish to earn a living.
Saving fisheries changed the industry
In the early 2000s, as these economic and environmental challenges grew, fisheries managers started limiting access. Instead of allowing anyone to fish, only vessels or individuals meeting certain eligibility requirements would have the right to fish.
The most common method of limiting access in the U.S. is through limited entry permits, initially awarded to individuals or vessels based on previous participation or success in the fishery. Another approach is to assign individual harvest quotas or “catch shares” to permit holders, limiting how much each boat can bring in.
Today, limited access is common, and there are positive signs that the management change is helping achieve the law’s environmental goal of preventing overfishing. Since 2000, the populations of 50 major fishing stocks have been rebuilt, meaning they have recovered to a level that can once again support fishing.
Forty fish stocks are currently being managed under rebuilding plans that limit catch to allow the stock to grow, including Atlantic cod, which has struggled to recover due to a complex combination of factors, including climatic changes.
The lingering effect on communities today
While many fish stocks have recovered, the effort came at an economic cost to many individual fishermen. The limited-access Northeast groundfish fishery, which includes Atlantic cod, haddock and flounder, shed nearly 800 crew positions between 2007 and 2015.
The loss of jobs and revenue from fishing impacts individual family income and relationships, strains other businesses in fishing communities, and affects those communities’ overall identity and resilience, as illustrated by a recent economic snapshot of the Alaska seafood industry.
When original limited-access permit holders leave the business – for economic, personal or other reasons – their permits are either terminated or sold to other eligible permit holders, leading to fewer active vessels in the fleet. As a result, the number of vessels fishing for groundfish has declined from 719 in 2007 to 194 in 2023, meaning fewer jobs.
A fisherman unloads a portion of his catch for the day of 300 pounds of groundfish, including flounder, in January 2006 in Gloucester, Mass. AP Photo/Lisa Poole
Because of their scarcity, limited-access permits can cost upward of US$500,000, which is often beyond the financial means of a small business or a young person seeking to enter the industry. The high prices may also lead retiring fishermen to sell their permits, as opposed to passing them along with the vessels to the next generation.
These economic forces have significantly altered the fishing industry, leading to more corporate and investor ownership, rather than the family-owned operations that were more common in the Andrea Gail’s time.
Similar to the experience of small family farms, fishing captains and crews are being pushed into corporate arrangements that reduce their autonomy and revenues.
Consolidation can threaten the future of entire fleets, as New Bedford, Massachusetts, saw when Blue Harvest Fisheries, backed by a private equity firm, bought up vessels and other assets and then declared bankruptcy a few years later, leaving a smaller fleet and some local business and fishermen unpaid for their work. A company with local connections bought eight vessels from Blue Harvest along with 48 state and federal permits the company held.
New challenges and unchanging risks
While there are signs of recovery for New England’s fisheries, challenges continue.
Warming water temperatures have shifted the distribution of some species, affecting where and when fish are harvested. For example, lobsters have moved north toward Canada. When vessels need to travel farther to find fish, that increases fuel and supply costs and time away from home.
Fisheries managers will need to continue to adapt to keep New England’s fisheries healthy and productive.
One thing that, unfortunately, hasn’t changed is the dangerous nature of the occupation. Between 2000 and 2019, 414 fishermen died in 245 disasters.
Stephanie Otts receives funding from the NOAA National Sea Grant College Program through the U.S. Department of Commerce. Previous support for fisheries management legal research provided by The Nature Conservancy.
Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science. Professor of Surficial and Sedimentary Geology, Indiana University
The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed
Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.
Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.
These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.
Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered the landscape near Pensacola, N.C., in the Blue Ridge Mountains. Google Earth, CC BY
I study these disasters as a geomorphologist. In a new paper in the journal Science, I and a team of scientists from 18 universities and the U.S. Geological Survey explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.
The science behind cascading hazards
Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, biosphere and the earth are constantly reshaping the conditions that cause natural disasters.
For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.
A strong aftershock following a magnitude 7.8 earthquake in Sichuan province, China, in May 2008 triggered more landslides in central China. AP Photo/Andy Wong
Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.
These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.
Cascading hazards are not new, but their impact is intensifying.
Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.
Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.
Rethinking risk and building resilience
Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.
But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.
A March 2024 landslide in the Oregon Coast Range wiped out trees in its path. Brian Yanites, June 2025
A drone image of the same March 2024 landslide shows where it temporarily dammed the river below. Brian Yanites, June 2025
Thanks to advances in Earth observation technology – such as satellite imagery, drones and lidar, which is similar to radar but uses light – scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.
Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.
Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.
Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.
Brian J. Yanites receives funding from the National Science Foundation.
As a research chef and educator at Drexel University in Philadelphia, I am following the Michelin developments closely.
Having eaten in Michelin restaurants in other cities, I am confident that Philly has at least a few star-worthy restaurants. Our innovative dining scene was named one of the top 10 in the U.S. by Food & Wine in 2025.
Researchers have convincingly shown that Michelin ratings can boost tourism, so Philly gaining some starred restaurants could bring more revenue for the city.
But as the lead author of the textbook “Culinary Improvisation,” which teaches creativity, I also worry the Michelin scrutiny could make chefs more focused on delivering a consistent experience than continuing along the innovative trajectory that attracts Michelin in the first place.
Ingredients for culinary innovation
In “Culinary Improvisation” we discuss three elements needed to foster innovation in the kitchen.
The first is mastery of culinary technique, both classical and modern. Simply stated, this refers to good cooking.
The second is access to a diverse range of ingredients and flavors. The more colors the artist has on their palette, the more directions the creation can take.
And the third, which is key to my concerns, is a collaborative and supportive environment where chefs can take risks and make mistakes. Research shows a close link between risk-taking workplaces and innovation.
According to the Michelin Guide, stars are awarded to outstanding restaurants based on: “quality of ingredients, mastery of cooking techniques and flavors, the personality of the chef as expressed in the cuisine, value for money, and consistency of the dining experience both across the menu and over time.”
The criteria do not mention innovation.
It’s possible the high-stakes lure of a Michelin star, which rewards consistent excellence, could lead Philly’s most vibrant and creative chefs and restaurateurs to pull back on the risk-taking that produced the city’s culinary excellence in the first place.
Philadelphia’s preeminent restaurant critic Craig LaBan and journalist and former restaurateur Kiki Aranita discussed local contenders for Michelin stars in a recent article in the Philadelphia Inquirer.
The 19 restaurants LaBan and Aranita discuss as possible star contenders average just over a one-mile walk from the Pennsylvania Convention Center.
Together they have received 78 James Beard nominations or awards, which are considered the “Oscars” of the food industry. That’s an average of over four per restaurant.
And when I tried to book a table for two before 9 p.m. on a Wednesday and a Saturday, about half were already fully booked for dinner two weeks out – in July, the slow season for dining in Philadelphia.
If LaBan’s and Aranita’s predictions are right, Michelin will be an added recognition for restaurants that are already successful and centrally located.
Black Dragon Takeout fuses Black American cuisine with the aesthetics of classic Chinese American takeout. Jeff Fusco/The Conversation, CC BY-SA
Off the beaten path
When the Michelin Guide started in France at the turn of the 20th century, it encouraged diners to take the road less traveled to their next gastronomic experience.
Consider Jacob Trinh’s Vietnamese-tinged seafood tasting menu at Little Fish in Queen Village; Kurt Evans’ gumbo lo mein at Black Dragon Takeout in West Philly; the beef cheek confit with avocado mousse at Temir Satybaldiev’s Ginger in the Northeast; and the West African XO sauce at Honeysuckle, owned by Omar Tate and Cybille St.Aude-Tate, on North Broad Street.
I hope the Michelin inspectors will venture far beyond the obvious candidates to experience more of what Philadelphia has to offer.
In the frenzy surrounding the Michelin scrutiny, chef friends have invited me to dine at their restaurants and share my feedback as they refine their menus in anticipation of visits from anonymous Michelin inspectors.
Restaurateurs have been asking my colleagues and me for talent suggestions to replace well-liked and capable cooks, servers and managers whom owners perceive to be just not Michelin-star level.
And managers are texting us names of suspected reviewers, triggered by some tell-tale signs – a solo diner with a weeknight tasting menu reservation, no dietary restrictions or special requests, and a conspicuously light internet presence.
In all, I am excited about Philadelphians being excited about Michelin. Any opportunity to spotlight the city’s restaurant community and tighten its food and service quality raises the bar among local chefs and restaurateurs and makes the experience better for diners. And the prospect of business travelers and culinary tourists enjoying lunches and early-week dinners can help restaurants, their workers and the city earn more revenue.
But in the din of the press events and hype, let’s not forget that Philadelphians don’t need an outside arbiter to tell us what we already know: Philly is a great place to eat and drink.
Jonathan Deutsch does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Thomas A. DuBois, Professor of Scandinavian Studies, Folklore, and Religious Studies, University of Wisconsin-Madison
In May 2025, Tapio Luoma, archbishop of the Evangelical Lutheran Church of Finland, delivered an apology to the Sámi, the only recognized Indigenous people in the European Union.
Speaking on behalf of the church to which more than 6 in 10 of the Finnish populace belong, including most Sámi, Luoma acknowledged its role in past activities that stigmatized Sámi language and culture.
The church “has not respected the rights to self-determination of the Sámi people,” his address began. “Before God and all of you here assembled, we express our regret and ask forgiveness of the Sámi people.”
Luoma’s words were the latest in a series of apologies through which the former state churches in Scandinavia have sought to reset their relations with the Indigenous population of Sápmi, the natural and cultural area of Sámi people. Today, the region is divided between Finland, Norway, Sweden and Russia.
For thousands of years, the Sámi population lived by hunting, fishing and reindeer husbandry along the northern edges of Scandinavia. The Sámi possessed their own languages and maintained distinctive spiritual traditions and healing practices, drawing on traditional ecological knowledge that they had accrued over countless generations. In times of crisis or uncertainty, for example, communities used ceremonial drums to communicate with the spirit world and divine the future.
Conflicts emerged by the 13th century, however, as Christian realms expanded north. Christian clerics condemned Sámi spiritual traditions as “heathen devilry.”
During the 16th-century Protestant Reformation, Scandinavian rulers shifted from Catholicism to Lutheranism. In addition to tending to the souls of their flocks, ministers were tasked with keeping track of the comings and goings of congregation members, collecting taxes, and administering justice for lesser crimes.
They aimed to stamp out the spiritual practices that many Sámi continued to practice alongside Christianity. Church authorities arrested, fined and sometimes even executed practitioners, while confiscating sacred drums to be destroyed or sent to distant museums.
The church’s ritual of confirmation, which marks the passage from adolescence into adulthood, also acquired legal status. Being confirmed required the ability to read and interpret the Bible and Martin Luther’s Catechism, a summary of the Lutheran Church’s beliefs. As the church became part of the state, people who had not received confirmation could not represent themselves in court, own land or even marry.
And where Luther had called for religious instruction to occur in one’s native language, most Nordic clergy provided catechesis only in the majority language, considering Sámi language and traditions impediments to true conversion.
Assimilation efforts
During the late 19th and early 20th centuries, the new “nation states” of Finland, Norway and Sweden emerged on the world stage. In each country, political leaders conflated what the ancient Greeks called the “demos” – members of a political nation – with an “ethnos,” a cultural group. In order to belong to the Finnish, Norwegian and Swedish political nations, political and cultural leaders of these new states asserted that it was necessary to belong to the majority linguistic and cultural community.
Finland’s 1919 constitution made provision for Swedish, which is still used by about 5% of the population, as a national language alongside Finnish. However, the government accorded no such status to Sámi.
Both state-run residential boarding schools and schools run by churches included Lutheranism as a subject and strove relentlessly to assimilate Sámi into the majority culture, language and worldview, teaching children to see their culture as backward and shameful. Some church and school authorities cooperated with pseudoscientific racial researchers measuring students’ heads and excavating Sámi graves.
A ‘nomad school’ for Sámi children in Jukkasjärvi, Sweden, 250 miles north of the Arctic Circle, in 1956. John Firth/BIPs/Getty Images
As a result, many students ceased to identify as Sámi and adopted the majority language as their primary mode of communication. Today, only about half the people who identify as Sámi have any facility in Sámi languages, which are considered endangered.
After World War II, church attendance in all the Nordic countries began to plummet. Where 98% of the Finnish population belonged to the state church in 1900, by 2024 that percentage had dropped to 62%. The bulk of defections consisted of people who registered as having no religious affiliation. Membership in the national church shifted from compulsory to voluntary.
Yet as anthropologist David Koester shows, some elements of Lutheran tradition remain extremely popular in all the Nordic countries, particularly confirmation. The ritual remains a key rite of passage for most Sámi today, yet many of them wrestle with whether they should remain faithful to a church that had worked to suppress their community’s language and culture.
For example, in a church in the northern Swedish town of Jukkasjärvi, an image of the sun as it appeared on Sámi ceremonial drums now faces the altar, providing a vivid reminder of the spiritual history and past worldview of the church’s Sámi congregation. The symbol now encloses an image of a communion wafer carved of reindeer antler.
In 2005, Sámi artist and activist Anders Sunna created a traveling art exhibit that portrayed Sámi Christianization as an act of cultural violence. The exhibit, designed for temporary installation in church sanctuaries, aimed to provoke discussion and encourage open dialogue about the past.
Similarly, in 2008, Norwegian Sámi filmmaker Nils Gaup produced “Kautokeino Rebellion,” a film recounting clergy’s role in suppressing religious activism among followers of a Swedish Sámi minister, Lars Levi Laestadius. The so-called uprising in 1852 led to the imprisonment of several dozen Sámi and the execution of two men – whose skulls were deposited in a research institute and did not receive proper burial until 1997.
Since church attendance is infrequent in Nordic countries, art and film serve as important vehicles for raising awareness of the church’s past. In November 2021, the archbishop of Sweden, Antje Jackelén, issued a formal apology to the Sámi. Sámi artist and activist Anders Sunna was invited to temporarily redecorate the sanctuary of the Cathedral of Uppsala for the occasion. His decorations included reminders of past Sámi sacrificial traditions that took place both outdoors and around hearth fires. In place of a grand altar, Sunna erected a simple table, surrounded by an octagon of benches where the bishop and members of the Sámi community would sit face to face with a sense of equality and respect.
When the Finnish archbishop apologized in May 2025, Sámi in attendance at the Turku Cathedral were appreciative, but they were eager to see what actions might follow, according to reporters at the ceremony. The same wait-and-see attitude characterizes Sámi responses to state-run Truth and Reconciliation processes, which concluded in Norway in 2023 and are currently ongoing in Sweden and Finland.
The process of healing a society injured by colonialism is difficult and slow, requiring extensive discussion – much of it uncomfortable. With Luoma’s words of apology and the arrival of Sámi to listen and witness, an important step in that process occurred.
Thomas A. DuBois does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Flora Cassen, Senior Faculty, Hartman Institute and Associate Professor of History and Jewish Studies, Washington University in St. Louis
Every few years, a story about Columbus resurfaces: Was the Genoese navigator who claimed the Americas for Spain secretly Jewish, from a Spanish family fleeing the Inquisition?
This tale became widespread around the late 19th century, when large numbers of Jews came from Russia and Eastern Europe to the United States. For these immigrants, 1492 held double significance: the year of Jews’ expulsion from Spain, as well as Columbus’ voyage of discovery. At a time when many Americans viewed the explorer as a hero, the idea that he might have been one of their own offered Jewish immigrants a link to the beginnings of their new country and the American story of freedom from Old World tyranny.
The problem with the Columbus-was-a-Jew theory isn’t just that it’s based on flimsy evidence. It also distracts from the far more complex and true story of Spanish Jews in the Americas.
In the 15th century, the kingdom’s Jews faced a wrenching choice: convert to Christianity or leave the land their families had called home for generations. Portugal’s Jews faced similar persecution. Whether they sought a new place to settle or stayed and hoped to be accepted as members of Christian society, both groups were searching for belonging.
The story of the New World is not complete without the voices of Jewish communities that engaged with it from the very beginning.
Double consciousness
The first Jews in the Americas were, in fact, not Jews but “conversos,” meaning “converts,” and their descendants.
After a millennium of relatively peaceful and prosperous life on Iberian soil, the Jews of Spain were attacked by a wave of mob violence in the summer of 1391. Afterward, thousands of Jews were forcibly converted.
While conversos were officially members of the Catholic Church, neighbors looked at them with suspicion. Some of these converts were “crypto-Jews,” who secretly held on to their ancestral faith. Spanish authorities formed the Inquisition to root out anyone the church considered heretics, especially people who had converted from Judaism and Islam.
In 1492, after conquering the last Muslim stronghold in Spain, monarchs Ferdinand and Isabella gave the remaining Spanish Jews the choice of conversion or exile. Eventually, people who converted from Islam would be expelled as well.
Among Jews who converted, some sought new lives within the rapidly expanding Spanish empire. As the historian Jonathan Israel wrote, Jews and conversos were both “agents and victims of empire.” Their familiarity with Iberian language and culture, combined with the dispersion of their community, positioned them to participate in the new global economy: trade in sugar, textiles, spices – and the trade in human lives, Atlantic slavery.
Yet conversos were also far more vulnerable than their compatriots: They could lose it all, even end up burned alive at the stake, because of their beliefs. This double consciousness – being part of the culture, yet apart from it – is what makes conversos vital to understanding the complexities of colonial Latin America.
By the 17th century, once the Dutch and the English conquered parts of the Americas, Jews would be able to live there. Often, these were families whose ancestors had been expelled from the Iberian peninsula. In the first Spanish and Portuguese colonies, however, Jews were not allowed to openly practice their faith.
Secret spirituality
One of these conversos was Luis de Carvajal. His uncle, the similarly named Luis de Carvajal y de la Cueva, was a merchant, slave trader and conquistador. As a reward for his exploits he was named governor of the New Kingdom of León, in the northeast of modern-day Mexico. In 1579 he brought over a large group of relatives to help him settle and administer the rugged territory, which was made up of swamps, deserts and silver mines.
The uncle was a devout Catholic who attempted to shed his converso past, integrating himself into the landed gentry of Spain’s New World empire. Luis the younger, however, his potential heir, was a passionate crypto-Jew who spent his free time composing prayers to the God of Israel and secretly following the commandments of the Torah.
When Luis and his family were arrested by the Inquisition in 1595, his book of spiritual writings was discovered and used as evidence of his secret Jewish life. Luis, his mother and sister were burned at the stake, but the small, leather-bound diary survived.
Luis’ religious thought drew on a wide range of early modern Spanish culture. He used a Latin Bible and drew inspiration from the inwardly focused spirituality of Catholic thinkers such as Fray Luis de Granada, a Dominican theologian. He met with the hermit and mystic Gregorio López. He discovered passages from Maimonides and other rabbis quoted in the works of Catholic theologians whom he read at the famed monastery of Santiago de Tlatelolco, in Mexico City, where he worked as an assistant to the rector.
His spiritual writings are deeply American: The wide deserts and furious hurricanes of Mexico were the setting of his spiritual awakenings, and his encounters with the people and cultures of the emerging Atlantic world shaped his religious vision. This little book is a unique example of the brilliant, creative culture that developed in the crossing from Old World to New, born out of the exchange and conflict between diverse cultures, languages and faiths.
A glimpse of Luis de Carvajal’s spiritual writings, photographed in New York City. Ronnie Perelis
More than translation
Spanish Jews who refused to convert in 1492, meanwhile, had been forced into exile and barred from the kingdom’s colonies.
The journey of Joseph Ha-Kohen’s family illustrates the hardships. After the expulsion, his parents moved to Avignon, the papal city in southern France, where Joseph was born in 1496. From there, they made their way to Genoa, the Italian merchant city, hoping to establish themselves. But it was not to be. The family was repeatedly expelled, permitted to return, and then expelled again.
Despite these upheavals, Ha-Kohen became a doctor and a merchant, a leader in the Jewish community – earning the respect of the Christian community, too. Toward the end of his life, he settled in a small mountain town beyond the city’s borders and turned to writing.
Ha-Kohen’s work was the first Hebrew-language book about the Americas. The text was hundreds of pages long – and he copied his entire manuscript nine times by hand. He had never seen the Americas, but his own life of repeated uprooting may have led him to wonder whether Jews would one day seek refuge there.
Ha-Kohen wanted his readers to have access to the text’s geographical, botanical and anthropological information, but not to Spain’s triumphalist narrative. So he created an adapted, hybrid translation. The differences between versions reveal the complexities of being a European Jew in the age of exploration.
Ha-Kohen omitted references to the Americas as Spanish territory and criticized the conquistadors for their brutality toward Indigenous peoples. At times, he compared Native Americans with the ancient Israelites of the Bible, feeling a kinship with them as fellow victims of oppression. Yet at other moments he expressed estrangement and even revulsion at Indigenous customs and described their religious practices as “darkness.”
Translating these men’s writing is not just a matter of bringing a text from one language into another. It is also a deep reflection on the complex position of Jews and conversos in those years. Their unique vantage point offers a window into the intertwined histories of Europe, the Americas and the in-betweenness that marked the Jewish experience in the early modern world.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Skip York, Nonresident Fellow in Energy and Global Oil, Baker Institute for Public Policy, Rice University
Stock and commodities traders dealt with sharp price swings as energy markets responded to Israeli and U.S. attacks on Iran. Timothy A. Clary/AFP via Getty Images
Global energy markets, such as those for oil, gas and coal, tend to be sensitive to a wide range of world events – especially when there is some sort of crisis. Having worked in the energy industry for over 30 years, I’ve seen how war, political instability, pandemics and economic sanctions can significantly disrupt energy markets and impede them from functioning efficiently.
A look at the basics
First, consider the economic fundamentals of supply and demand. The risk most people imagine in the current crisis between Israel, the U.S. and Iran is that Iran, which is itself a major oil-producing country, might suddenly expand the conflict by threatening the ability of neighboring countries to supply oil to the world.
Oil wells, refineries, pipelines and shipping lanes are the backbone of energy markets. They can be vulnerable during a crisis: Whether there is deliberate sabotage or collateral damage from military action, energy infrastructure often takes a hit.
For instance, after Saddam Hussein invaded Kuwait in August 1990, Iraqi forces placed explosive charges on Kuwaiti oil wells and began detonating them in January 1991. It took months for all the resulting fires to be put out, and millions of barrels of oil and hundreds of millions of cubic meters of natural gas were released into the environment – rather than being sold and used productively somewhere around the world.
Scenes of Kuwaiti life during and after the Gulf War of 1990 and 1991 include images of oil wells burning as a result of Iraqi sabotage.
Whether supply is lost from decreased production or blocked transportation routes, the effect is less oil available to the market, which not only causes prices to rise in general, but it also makes them more volatile – tending to change more frequently and by larger amounts.
On the flip side, demand can also shift radically. During the 1990-1991 Gulf War, demand rose: U.S. forces alone used more than 2 billion gallons of fuel, according to an Army analysis. By contrast, during the COVID-19 pandemic, industries shut down, travel came to a halt and energy demand plummeted.
When crisis looms, countries and companies often start stockpiling oil and other raw materials rather than buying only what they need right now. That creates even more imbalance, resulting in price volatility that leaves everyone, both consumers and producers, with a headache.
Regional considerations
Beyond uncertainties around market fundamentals, many of the world’s energy reserves are located in regions that have not been models of stability. In the Middle East, wars, revolutions and diplomatic disputes can raise concerns about supply, demand or both.
Governments’ economic sanctions, such as those restricting trade with Iran, Russia or Venezuela, can distort production and investment decisions and disrupt trade flows. Sometimes markets react even before sanctions are officially in place: Just the rumor of a possible embargo can cause prices to spike as buyers scramble to secure resources.
In 2008, for example, India and Vietnam imposed rice export bans, and rumors of additional restrictions fueled panic buying and nearly doubled prices in months.
That is where investor speculation enters the picture. Energy commodities, such as oil and gas, aren’t just physical resources; they’re also traded as financial assets, like stocks and bonds. During uncertain times, traders don’t wait around for actual changes in supply and demand. They react to news and forecasts, sometimes en masse, and those fear- or hope-driven trades can move the market on their own.
The events on June 22, 2025, are a good example of how this dynamic works. The Iranian parliament passed a resolution authorizing the country’s Supreme Council to close the Strait of Hormuz. Immediately, oil prices started rising, even though the strait was still open, with oil tankers steaming through unimpeded.
The next day, Iran launched a missile strike on Qatar, but coordinated in advance with Qatari officials to minimize damage and casualties. Traders and analysts perceived the action as a de-escalatory signal and anticipated that the Supreme Council was not going to close the strait. So prices started to fall.
It was a price roller coaster, fueled by speculation rather than reality. And computer algorithms and artificial intelligence, which assist in making automated trades, only add to the chaos of price changes.
Shipping activity in the Persian Gulf and the Strait of Hormuz decreased after Israel’s attacks on Iranian nuclear facilities.
A broader look
International crises can also cause wider changes in countries’ economies – or the global economy as a whole – which in turn affect the energy market.
If a crisis sparks a recession, rising inflation or high unemployment, those tend to cause people and businesses to use less energy. When the underlying situation stabilizes, recovery efforts can mean energy consumption resumes. But it’s like a pendulum swinging back and forth, with energy markets caught in the middle.
Renewable energy is not immune to international crisis and chaos. The supply is less affected by market forces: The amount of available sunlight and wind isn’t tied to geopolitical relations. But overall economic conditions still affect demand, and a crisis can disrupt the supply chains for the equipment needed to harness renewable energy, like solar panels and wind turbines.
It’s no wonder energy markets are so jittery during international crises. A mix of imbalances between supply and demand, vulnerable infrastructure, political tensions, corporate worries and speculative trading all weave together into a complex web of volatility.
For policymakers, investors and consumers, understanding these dynamics is key to navigating the ups and downs of energy markets in a crisis-prone world. The solutions aren’t simple, but being informed is the first step toward stability.
Skip York is a nonresident fellow for Global Oil and Energy with the Center for Energy Studies at Rice University’s Baker Institute for Public Policy. He also is the Chief Energy Strategist at Turner Mason & Company, an energy consulting firm.
To us, former EPA leaders – one a longtime career employee and the other a political appointee – the budget proposal reveals a lot about what Trump and EPA Administrator Lee Zeldin want to accomplish.
According to the administration’s Budget in Brief document, total EPA funding for the fiscal year beginning October 2025 would drop from US$9.14 billion to $4.16 billion – a 54% decrease from the budget enacted by Congress for fiscal 2025 and less than half of EPA’s budget in any year of the first Trump administration.
Federal budgeting is complicated, and EPA’s budget is particularly so. Here are some basics:
Each year, the president and Congress determine how much money will be spent on what things, and by which agencies. The familiar aphorism that “the president proposes, Congress disposes” captures the Constitution’s process for the federal budget, with Congress firmly holding the “power of the purse.”
EPA’s budget can be difficult to understand because individual programs may be funded from different sources. It is useful to consider it as a pie sliced into five main pieces:
Environmental programs and management: the day-to-day work of protecting air, water and land.
Science and technology: research on pollution, health effects and new environmental tools.
Superfund and trust funds: cleaning up contaminated sites and responding to emergency releases of pollution.
State and Tribal operating grants: supporting local implementation of environmental laws.
State capitalization grants: revolving loans for water infrastructure.
The Trump administration’s budget proposals for EPA represent a striking retreat from the national goals of clean air and clean water enacted in federal laws over the past 55 years. In the budget document, the administration argues that the federal government has done enough and that the protection of gains already achieved, as well as any further progress, should not be paid for with federal money.
This budget would reduce EPA’s ability to protect public health and the environment to a bare minimum at best. Most dramatic and, in our view, most significant are the elimination of operating grants to state governments, drastic reductions in funding for science of all kinds, and elimination of EPA programs relating to climate change and environmental justice, which addresses situations of disproportionate environmental harm to vulnerable populations. It would cut regulatory and enforcement activities that the administration sees as inconsistent with fossil energy development. Other proposed changes, notably for Superfund and capitalization grants, are more nuanced.
These changes to EPA’s regular budget allocation are separate from changes to supplementary EPA funding that have also been in the news, including for projects specified in the Inflation Reduction Act and other specific laws.
Environmental programs and management
Funding for basic work to protect the environment and prevent pollution would be cut by 22%. The reductions are not spread equally, however. All activities related to climate change would be eliminated, including the Energy Star program and greenhouse gas reporting and tracking. Funding for civil and criminal enforcement of environmental laws and regulations would be cut by 69% and 50%, respectively.
Scientific support functions would be cut by 34%. The Office of Research and Development would shrink from about 1,500 staff to about 500, and its remaining staff would be redistributed throughout the agency. This would diminish science that supports not just EPA’s work but also the organizations, industries, health care professionals and public and private researchers who benefit from EPA’s research.
Superfund is by far the largest of EPA’s cleanup trust funds. It forces the parties responsible for contamination to either perform cleanups themselves or reimburse the government for EPA-led cleanup work, and when there is no viable responsible party, it gives EPA the funds and authority to clean up contaminated sites directly.
Prior to 2021, Superfund was funded through EPA’s annual budget. In 2021 and 2022, Congress restored taxes on selected chemicals and petroleum products to help pay for Superfund. During the Biden administration, EPA reduced the Superfund’s line in the general budget, with the expectation that the Superfund tax revenues would more than make up for the reduction. Administrator Zeldin, who has said that site cleanup is a priority, is proposing to shift virtually all funding for cleanups to these new tax revenues.
There is risk in this approach, however. The Superfund tax expires in 2031 and has raised less than Treasury Department predictions in both 2023 and 2024. In fiscal year 2024, available tax receipts were predicted to be $2.5 billion, but only $1.4 billion was collected. Future funding is uncertain because it depends on the amounts of various chemicals that companies actually use. Experts disagree on whether this is significant for the Superfund program. The petrochemical industry, on whom this tax largely falls, is lobbying for its repeal.
Funds to address leaks at gas station tanks would be cut nearly in half. Funds to clean up oil and petroleum spills would be cut by 24%.
EPA has long delegated some of its powers to state environmental agencies, including permitting, inspections and enforcement of regulations that govern air, water and soil pollution. Since the 1970s, EPA has helped fund those activities through basic operating grants that require minimum state contributions and reward larger state investments with additional federal dollars.
The proposed budget would eliminate all of those grants to states – totaling $1 billion. The document itself explains that federal funding over decades has totaled “hundreds of billions of dollars” and has resulted in programs that “are mature or have accomplished their purpose.”
States disagree. They note that EPA has delegated 90% of the nation’s environmental protection work to state authorities, and states have accepted that workload based on the expectation of federal funding. The states say reduced funding would greatly diminish the actual work of environmental protection – site inspections, air and water monitoring, and enforcement – across the country.
State capitalization grants
Since 1987, EPA has given states money for revolving loan programs that provide low-interest loans to state and local governments to clean up waterways and provide safe drinking water. The proposed budget would cut that funding by 89%, from $2.8 billion to $305 million.
These capitalization grants were originally envisioned as seed money, with future loans available as the initial and subsequent loans were repaid. But the need for water infrastructure continues to grow, and Congress has for many years allocated additional money to the program.
In protecting the environment, you get what you pay for. In past years, Congress has refused to accept proposed drastic cuts to EPA’s budget. It remains to be seen whether this Congress will go along with these proposed rollbacks.
Stan Meiburg is a volunteer with the Environmental Protection Network. He was an employee of the Environmental Protection Agency from 1977 to 2017.
I have worked at the US EPA twice. During the Obama Administration, I was first principal deputy to the Assistant Administrator of the Office of Air and Radiation and then Acting Assistant Administrator. During the Biden Administration, I was Deputy Administrator. I am also a volunteer with the Environmental Protection Network.
American democracy runs on trust, and that trust is cracking.
Nearly half of Americans, both Democrats and Republicans, question whether elections are conducted fairly. Some voters accept election results only when their side wins. The problem isn’t just political polarization – it’s a creeping erosion of trust in the machinery of democracy itself.
Commentators blame ideological tribalism, misinformation campaigns and partisan echo chambers for this crisis of trust. But these explanations miss a critical piece of the puzzle: a growing unease with the digital infrastructure that now underpins nearly every aspect of how Americans vote.
The digital transformation of American elections has been swift and sweeping. Just two decades ago, most people voted using mechanical levers or punch cards. Today, over 95% of ballots are counted electronically. Digital systems have replaced poll books, taken over voter identity verification processes and are integrated into registration, counting, auditing and voting systems.
This technological leap has made voting more accessible and efficient, and sometimes more secure. But these new systems are also more complex. And that complexity plays into the hands of those looking to undermine democracy.
In recent years, authoritarian regimes have refined a chillingly effective strategy to chip away at Americans’ faith in democracy by relentlessly sowing doubt about the tools U.S. states use to conduct elections. It’s a sustained campaign to fracture civic faith and make Americans believe that democracy is rigged, especially when their side loses.
This is not cyberwar in the traditional sense. There’s no evidence that anyone has managed to break into voting machines and alter votes. But cyberattacks on election systems don’t need to succeed to have an effect. Even a single failed intrusion, magnified by sensational headlines and political echo chambers, is enough to shake public trust. By feeding into existing anxiety about the complexity and opacity of digital systems, adversaries create fertile ground for disinformation and conspiracy theories.
Just before the 2024 presidential election, Director of the Cybersecurity and Infrastructure Security Agency Jen Easterly explains how foreign influence campaigns erode trust in U.S. elections.
Testing cyber fears
To test this dynamic, we launched a study to uncover precisely how cyberattacks corroded trust in the vote during the 2024 U.S. presidential race. We surveyed more than 3,000 voters before and after election day, testing them using a series of fictional but highly realistic breaking news reports depicting cyberattacks against critical infrastructure. We randomly assigned participants to watch different types of news reports: some depicting cyberattacks on election systems, others on unrelated infrastructure such as the power grid, and a third, neutral control group.
The results, which are under peer review, were both striking and sobering. Mere exposure to reports of cyberattacks undermined trust in the electoral process – regardless of partisanship. Voters who supported the losing candidate experienced the greatest drop in trust, with two-thirds of Democratic voters showing heightened skepticism toward the election results.
But winners too showed diminished confidence. Even though most Republican voters, buoyed by their victory, accepted the overall security of the election, the majority of those who viewed news reports about cyberattacks remained suspicious.
The attacks didn’t even have to be related to the election. Even cyberattacks against critical infrastructure such as utilities had spillover effects. Voters seemed to extrapolate: “If the power grid can be hacked, why should I believe that voting machines are secure?”
Strikingly, voters who used digital machines to cast their ballots were the most rattled. For this group of people, belief in the accuracy of the vote count fell by nearly twice as much as that of voters who cast their ballots by mail and who didn’t use any technology. Their firsthand experience with the sorts of systems being portrayed as vulnerable personalized the threat.
It’s not hard to see why. When you’ve just used a touchscreen to vote, and then you see a news report about a digital system being breached, the leap in logic isn’t far.
Our data suggests that in a digital society, perceptions of trust – and distrust – are fluid, contagious and easily activated. The cyber domain isn’t just about networks and code. It’s also about emotions: fear, vulnerability and uncertainty.
Firewall of trust
Does this mean we should scrap electronic voting machines? Not necessarily.
Every election system, digital or analog, has flaws. And in many respects, today’s high-tech systems have solved the problems of the past: paired with voter-verifiable paper ballots, modern voting machines reduce human error, increase accessibility and speed up the vote count. No one misses the hanging chads of 2000.
But technology, no matter how advanced, cannot instill legitimacy on its own. It must be paired with something harder to code: public trust. In an environment where foreign adversaries amplify every flaw, cyberattacks can trigger spirals of suspicion. It is no longer enough for elections to be secure – voters must also perceive them to be secure.
That’s why public education surrounding elections is now as vital to election security as firewalls and encrypted networks. It’s vital that voters understand how elections are run, how they’re protected and how failures are caught and corrected. Election officials, civil society groups and researchers can teach how audits work, host open-source verification demonstrations and ensure that high-tech electoral processes are comprehensible to voters.
We believe this is an essential investment in democratic resilience. But it needs to be proactive, not reactive. By the time the doubt takes hold, it’s already too late.
Just as crucially, we are convinced that it’s time to rethink the very nature of cyber threats. People often imagine them in military terms. But that framework misses the true power of these threats. The danger of cyberattacks is not only that they can destroy infrastructure or steal classified secrets, but that they chip away at societal cohesion, sow anxiety and fray citizens’ confidence in democratic institutions. These attacks erode the very idea of truth itself by making people doubt that anything can be trusted.
If trust is the target, then we believe that elected officials should start to treat trust as a national asset: something to be built, renewed and defended. Because in the end, elections aren’t just about votes being counted – they’re about people believing that those votes count.
And in that belief lies the true firewall of democracy.
Anthony DeMattee receives funding from National Science Foundation and various academic institutions. He is the Data Scientist in the Democracy Program at The Carter Center.
Bruce Schneier and Ryan Shandler do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
Top Republicans and Democrats alike are talking about the sudden rise of 33-year-old Zohran Mamdani, a New York state assembly member who won the Democratic mayoral primary in New York on June 24, 2025, in a surprising victory over more established politicians.
While President Donald Trump quickly came out swinging with personal attacks against Mamdani, some establishment Democratic politicians say they are concerned about how the democratic socialist’s progressive politics could harm the broader Democratic Party and cause it to lose more centrist voters.
New York is a unique American city, with a diverse population and historically liberal politics. So, does a primary mayoral election in New York serve as any kind of harbinger of what could come in the rest of the country?
Amy Lieberman, a politics and society editor at The Conversation U.S., spoke with Lincoln Mitchell, a political strategy and campaign specialist who lectures at Columbia University, to understand what Mamdani’s primary win might indicate about the direction of national politics.
New York mayoral candidate Zohran Mamdani, center, greets voters with New York Comptroller Brad Lander, right, on the Upper West Side on June 24, 2025. Michael M. Santiago/Getty Images
Does Mamdani’s primary win offer any indication of how the Democratic Party might be transforming on a national level?
Mamdani’s win is clearly a rebuke of the more corporate wing of the Democratic Party. I know there are people who say that New York is different from the rest of the country. But from a political perspective, Democrats in New York are less different from Democrats in the rest of country than they used to be.
That’s because the rest of America is so much more diverse than it used to be. But if you look at progressive politicians now in the House of Representatives and state legislatures, they are being elected from all over – not just in big cities like New York anymore.
Andrew Cuomo, the former governor of New York, ran an absolutely terrible mayoral campaign. He tried to build a political coalition that is no longer a winning one, made up of majorities of African Americans, outer-borough white New Yorkers, and Orthodox and Conservative Jews. Thirty or 40 years ago, that was a powerful coalition. Today, it could not make up a majority.
Mamdani visualized and created what a 2025 progressive coalition looks like in New York and recognized that it is going to look different than the past. Mamdani’s coalition was based around young, white people – many of them with college degrees who are worried about affordability – ideological lefties and immigrants from parts of the Global South, including the Caribbean and parts of Africa, South Asia and South America.
When you say a new kind of political coalition, what policy priorities bring Mamdani’s supporters together?
Mamdani reframed what I would call redistributive economic policies that have long been central to the progressive agenda. A pillar of his campaign is affordability – a brilliant piece of political marketing because who is against affordability? He came up with some affordability-related policies that got enough buzz, like promising free buses. Free buses are great, but it won’t help most working and poor New Yorkers get to work – they take the subway.
He has been very critical of Israel and has weathered charges of antisemitism.
In the older New York, progressive politicians such as the late Congressman Charlie Rangel were very hawkish on Israel.
What Mamdani understood is that in today’s America, the progressive wing of the Democratic Party does not care if somebody is, sounds like or comes close to being antisemitic. For those people, calling someone antisemitic sounds Trumpy, and they understand it as a right-wing hit rather than the legitimate expression of concerns from Jewish people. Some liberals think claims of antisemitism are simply a tactic used by those on the right to damage or discredit progressive politicians – but antisemitism is real.
Therefore, Mamdani’s record on the Jewish issue did not hurt him in the campaign, but he needs to build bridges to Jewish voters, or he will not be able to govern New York City.
How else did Mamdani appeal to a base of supporters?
He got the support of “limousine liberals” – including rich, high-profile, progressive people. His supporters include Ella Emhoff, a model and the stepdaughter of Kamala Harris, and the actress Cynthia Nixon, but there were many others. Supporting Mamdani became stylish – almost de rigueur – among certain segments of affluent New York.
Mamdani is also a true New Yorker and the voice of a new kind of immigrant. His parents are from Uganda and India. But he is also the child of extreme privilege – his mother, Mira Nair, is a well-known filmmaker, and his father is an accomplished professor. Mamdani went to top schools in New York and knows how to play in elite circles, including with white people. He is a Muslim man whose roots are in the Global South, but he is not threatening to those circles because he knows how to speak their language.
But to people of color and immigrants, Mamdani is also one of them. Because of Mamdani’s interesting background, he brought the limousine liberals together with the aunties from Bangladesh.
Finally, on the charisma scale, Mamdani was so far ahead of other Democratic candidates. Who is going to make better TikTok videos – the good-looking, young man whose mother is a world-famous movie producer, or the older guy who is a loving father and husband but gives off dependable dad, rather than hip young guy, vibes?
People arrive to vote in the New York mayoral primary in Brooklyn on June 24, 2025. Spencer Platt/Getty Images
Is New York City so distinct that you cannot compare politics there to what happens nationwide?
I think that nationwide or at the state level there is a potential for something similar to a Mamdani coalition, but not a Mamdani coalition exactly. But in a place like Oklahoma, there are people who are in bad economic shape and who will also respond positively to an affordability-focused, Democratic political campaign. Mamdani remade a progressive New York coalition for this moment. Other progressive politicians should copy the spirit of that and reimagine a winning coalition in their city, state or district.
When Trump was campaigning, he focused, at least in part, on making groceries cheaper. Mamdani is one of the few Democrats who took the affordability issue back from Trump and addressed it head on and in a much more honest and relevant way. Trump has the phrase, “Make America Great Again!” That’s a popular slogan on baseball caps for Trump supporters.
If Mamdani wanted to make a baseball cap, he could just print “Affordability” on it. Boom.
Other Democratic politicians can take that approach of affordability and reframe it in a way that works in Kansas City or elsewhere.
Lincoln Mitchell supported Brad Lander in the primary election.
Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science and Professor of Surficial and Sedimentary Geology, Indiana University
The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed
Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.
Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.
These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.
Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered landscape near Pensacola, N.C., in the Blue Ridge Mountains. Google Earth, CC BY
I study these disasters as a geomorphologist. In a new paper in the journal Science, I and a team of scientists from 18 universities and the U.S. Geological Survey explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.
The science behind cascading hazards
Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, biosphere and the earth are constantly reshaping the conditions that cause natural disasters.
For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.
A strong aftershock after a 7.8 magnitude earthquake in Sichuan province, China, in May 2008 triggered more landslides in central China. AP Photo/Andy Wong
Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.
These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.
Cascading hazards are not new, but their impact is intensifying as climate change fuels more frequent and severe storms and wildfires.
Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.
Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.
Rethinking risk and building resilience
Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.
But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.
A March 2024 landslide in the Oregon Coast Range wiped out trees in its path. Brian Yanites, June 2025
A drone image of the same March 2024 landslide in the Oregon Coast Range shows where it temporarily dammed the river below. Brian Yanites, June 2025
Thanks to advances in Earth observation technology such as satellite imagery, drones and lidar (which is similar to radar but uses light), scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.
Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.
Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.
Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.
Brian J. Yanites receives funding from the National Science Foundation.
A satellite image from Aug. 13, 2024, shows an algal bloom covering approximately 320 square miles (830 square km) of Lake Erie. By Aug. 22, it had nearly doubled in size. NASA Earth Observatory
Federal scientists released their annual forecast for Lake Erie’s harmful algal blooms on June 26, 2025, and they expect a mild to moderate season. However, anyone who comes in contact with the blooms can face health risks, and it’s worth remembering that 2014, when toxins from algae blooms contaminated the water supply in Toledo, Ohio, was considered a moderate year, too.
The National Oceanic and Atmospheric Administration’s prediction for harmful algal bloom severity in Lake Erie compared with past years. NOAA
1. What causes harmful algal blooms?
Harmful algal blooms are dense patches of excessive algae growth that can occur in any type of water body, including ponds, reservoirs, rivers, lakes and oceans. When you see them in freshwater, you’re typically seeing cyanobacteria, also known as blue-green algae.
These photosynthetic bacteria have inhabited our planet for billions of years. In fact, they were responsible for oxygenating Earth’s atmosphere, which enabled plant and animal life as we know it.
The leading source of harmful algal blooms today is nutrient runoff from fertilized farm fields. Michigan Sea Grant
Algae are natural components of ecosystems, but they cause trouble when they proliferate to high densities, creating what we call blooms.
The main sources of harmful algal blooms are excess nutrients in the water, typically phosphorus and nitrogen.
Historically, these excess nutrients mainly came from sewage and phosphorus-based detergents used in laundry machines and dishwashers that ended up in waterways. U.S. environmental laws in the early 1970s addressed this by requiring sewage treatment and banning phosphorus detergents, with spectacular success.
How pollution affected Lake Erie in the 1960s, before clean water regulations.
Today, agriculture is the main source of excess nutrients from chemical fertilizer or manure applied to farm fields to grow crops. Rainstorms wash these nutrients into streams and rivers that deliver them to lakes and coastal areas, where they fertilize algal blooms. In the U.S., most of these nutrients come from industrial-scale corn production, which is largely used as animal feed or to produce ethanol for gasoline.
2. What does your team’s DNA testing tell us about Lake Erie’s harmful algal blooms?
Harmful algal blooms contain a mixture of cyanobacterial species that can produce an array of different toxins, many of which are still being discovered.
These novel molecules cannot be detected with traditional methods and show some signs of causing toxicity, though further studies are needed to confirm their human health effects.
Blue-green algae blooms in freshwater, like this one near Toledo in 2014, can be harmful to humans, causing gastrointestinal symptoms, headache, fever and skin irritation. They can be lethal for pets. Ty Wright for The Washington Post via Getty Images
We also found organisms responsible for producing saxitoxin, a potent neurotoxin that is well known for causing paralytic shellfish poisoning on the Pacific Coast of North America and elsewhere.
Our research suggests warmer water temperatures could boost its production, which raises concerns that saxitoxin will become more prevalent with climate change. However, the controls on toxin production are complex, and more research is needed to test this hypothesis. Federal monitoring programs are essential for tracking and understanding emerging threats.
3. What are the health risks?
The toxins can cause acute health problems such as gastrointestinal symptoms, headache, fever and skin irritation. Dogs can die from ingesting lake water with harmful algal blooms. Emerging science suggests that long-term exposure to harmful algal blooms, for example over months or years, can cause or exacerbate chronic respiratory, cardiovascular and gastrointestinal problems and may be linked to liver cancers, kidney disease and neurological issues.
The water intake system for the city of Toledo, Ohio, is surrounded by an algae bloom in 2014. Toxic algae got into the water system, resulting in residents being warned not to touch or drink their tap water for three days. AP Photo/Haraz N. Ghanbari
In addition to exposure through direct ingestion or skin contact, recent research also indicates that inhaling toxins that get into the air may harm health, raising concerns for coastal residents and boaters, but more research is needed to understand the risks.
The Toledo drinking water crisis of 2014 illustrated the vast potential for algal blooms to cause harm in the Great Lakes. Toxins infiltrated the drinking water system and were detected in processed municipal water, resulting in a three-day “do not drink” advisory. The episode affected residents, hospitals and businesses, and it ultimately cost the city an estimated US$65 million.
4. Blooms seem to be starting earlier in the year and lasting longer – why is that happening?
Warmer waters are extending the duration of the blooms.
Scientific studies of western Lake Erie show that the potential cyanobacterial growth rate has increased by up to 30% and the length of the bloom season has expanded by up to a month from 1995 to 2022, especially in warmer, shallow waters. These results are consistent with our understanding of cyanobacterial physiology: Blooms like it hot – cyanobacteria grow faster at higher temperatures.
5. What can be done to reduce the likelihood of algal blooms in the future?
The best and perhaps only hope of reducing the size and occurrence of harmful algal blooms is to reduce the amount of nutrients reaching the Great Lakes.
In Lake Erie, where nutrients come primarily from agriculture, that means improving agricultural practices and restoring wetlands to reduce the amount of nutrients flowing off of farm fields and into the lake. Early indications suggest that Ohio’s H2Ohio program, which works with farmers to reduce runoff, is making some gains in this regard, but future funding for H2Ohio is uncertain.
In places like Lake Superior, where harmful algal blooms appear to be driven by climate change, the solution likely requires halting and reversing the rapid human-driven increase in greenhouse gases in the atmosphere.
Gregory J. Dick receives funding for harmful algal bloom research from the National Oceanic and Atmospheric Administration, the National Science Foundation, the United States Geological Survey, and the National Institutes of Health. He serves on the Science Advisory Council for the Environmental Law and Policy Center.
The former head of Human Rights Watch — and son of a Holocaust survivor — says Israel’s military campaign in Gaza will likely meet the legal definition of genocide, citing large-scale killings, the targeting of civilians, and the words of senior Israeli officials.
Speaking on 30′ with Guyon Espiner, Ken Roth agreed Hamas committed “blatant war crimes” in its attack on Israel on October 7, 2023, which included the abduction and murder of civilians.
But he said it was a “basic rule” that war crimes by one side do not justify war crimes by the other.
There was indisputable evidence Israel had committed war crimes in Gaza and might also be pursuing tactics that fit the international legal standard for genocide, Roth said.
Kenneth Roth speaks on 30′ with Guyon Espiner. Video: RNZ
“The acts are there — mass killing, destruction of life-sustaining conditions. And there are statements from senior officials that point clearly to intent,” Roth said.
He cited comments from Israel’s former defence minister Yoav Gallant, made immediately after the October 7 attack by Hamas, in which Gallant referred to Gazans as “human animals”.
Israeli President Isaac Herzog also said “an entire nation” was responsible for the attack and the notion of “unaware, uninvolved civilians is not true,” referring to the Palestinian people. Herzog subsequently said his words were taken out of context during a case at the International Court of Justice.
The accusation of genocide is hotly contested. Israel says it is fighting a war of self-defence against Hamas after the group killed about 1,200 people, mostly civilians. It claims it adheres to international law and does its best to protect civilians.
It blames Hamas for embedding itself in civilian areas.
But Roth believes a ruling may ultimately come from the International Court of Justice, especially if a forthcoming judgment on Myanmar sets a precedent.
“It’s very similar to what Myanmar did with the Rohingya,” he said. “Kill about 30,000 to send 730,000 fleeing. It’s not just about mass death. It’s about creating conditions where life becomes impossible.”
‘Apartheid’ alleged in Israel’s West Bank
Roth has been described as the ‘Godfather of Human Rights’, and is credited with vastly expanding the influence of the Human Rights Watch group during a 29-year tenure in charge of the organisation.
In the full interview with Guyon Espiner, Roth defended the group’s 2021 report that accused Israel of enforcing a system of apartheid in the occupied West Bank.
“This was not a historical analogy,” he said, meaning the report was not simply comparing Israel with South Africa’s former apartheid regime.
“It was a legal analysis. We used the UN Convention against Apartheid and the Rome Statute, and laid out over 200 pages of evidence.”
Kenneth Roth appears via remote link in studio for an interview on season 3 of 30′ with Guyon Espiner. Image: RNZ
He said the Israeli government was unable to offer a factual rebuttal.
“They called us biased, antisemitic — the usual. But they didn’t contest the facts.”
The ‘cheapening’ of antisemitism charges
Roth, who is Jewish and the son of a Holocaust refugee, said it was disturbing to be accused of antisemitism for criticising a government.
“There is a real rise in antisemitism around the world. But when the term is used to suppress legitimate criticism of Israel, it cheapens the concept, and that ultimately harms Jews everywhere.”
Roth said Israeli Prime Minister Benjamin Netanyahu had long opposed a two-state solution and was now pursuing a status quo that amounted to permanent subjugation of Palestinians, a situation human rights groups say is illegal.
“The only acceptable outcome is two states, living side by side. Anything else is apartheid, or worse,” Roth said.
While the international legal process around charges of genocide may take years, Roth is convinced the current actions in Gaza will not be forgotten.
“This is not just about war,” he said. “It’s about the deliberate use of starvation, displacement and mass killing to achieve political goals. And the law is very clear — that’s a crime.”
Roth’s criticism of Israel saw him initially denied a fellowship at Harvard University in 2023. The decision was widely seen as politically motivated, and was later reversed after public and academic backlash.
This article is republished under a community partnership agreement with RNZ.
Source: The Conversation – France – By Jacques Rupnik, Directeur de recherche émérite, Centre de recherches internationales (CERI), Sciences Po
Karol Nawrocki in the Oval Office with Donald Trump on May 25, 2025, ten days before the first round of the Polish presidential election. It is very rare for a sitting US president to receive a candidate in a foreign election. White House X account
Nawrocki’s narrow victory (50.89%) over Trzaskowski, the mayor of Warsaw and candidate of the government coalition, illustrates and reinforces the political polarisation of Poland and the rise of the populist “Trumpist” right in Central and Eastern Europe. Since the start of the war in Ukraine, there has been much speculation about whether Europe’s geopolitical centre of gravity is shifting eastwards. The Polish election seems to confirm that the political centre of gravity is shifting to the right.
A narrow victory
We are witnessing a relative erosion of the duopoly of the two major parties, Civic Platform (PO) and Law and Justice (PiS), whose leaders – the current Prime Minister, Donald Tusk, and Jarosław Kaczyński respectively – have dominated the political landscape for over twenty years.
Kaczyński’s skill lay in propelling a candidate with no responsibilities in his party, who was little known to the general public a few months ago, and, above all, who is from a different generation, to the presidency (a position held since 2015 by a PiS man, Andrzej Duda). Nawrocki, a historian by training and director of the Polish Institute of National Remembrance, has helped shape PiS’s memory policy. He won the second round, despite his troubled past as a hooligan, by appealing to voters on the right.
In the first round, he won 29.5% of the vote, compared to Trzaskowski’s 31.36%, but the two far-right candidates, Sławomir Mentzen (an ultra-nationalist and economic libertarian) and Grzegorz Braun (a monarchist, avowed reactionary, and anti-Semite), won a total of 21% of the vote. They attracted a young electorate (60% of 18–29-year-olds), who overwhelmingly transferred their votes to Nawrocki in the second round.
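A rough arithmetic check, using only the first-round percentages quoted above, shows how decisive that transfer was (the assumption of a full transfer is illustrative, not exit-poll data):

```python
# First-round vote shares (percent), as reported in the article.
nawrocki_round1 = 29.5
far_right_round1 = 21.0   # Mentzen and Braun combined

# Upper bound on Nawrocki's runoff share if every far-right voter
# transferred to him (an illustrative assumption).
potential_base = nawrocki_round1 + far_right_round1
print(f"Potential second-round base: {potential_base:.1f}%")  # → 50.5%
```

The actual runoff result, 50.89%, is consistent with heavy far-right transfers supplemented by voters who abstained or backed other candidates in the first round.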
Despite a high turnout of 71% and favourable votes from the Polish diaspora (63%), Trzaskowski was unable to secure enough votes from the first-round candidates linked to the governing coalition, including those on the left (who won 10% between them) and the centre-right (Szymon Hołownia’s Third Way movement, which won 5% in the first round).
A Tusk government struggling to implement its programme
There are two Polands facing each other: the big cities, where incomes and levels of education are higher, and the more rural small towns, which are more conservative on social issues and more closely linked to the Catholic Church.
The themes of nationhood – Nawrocki’s campaign slogan was “Poland first, Poles first” – family, and traditional values continue to resonate strongly with an electorate that has been loyal to PiS for more than twenty years. The electoral map, which shows a clear north-west/south-east divide, is similar to those of previous presidential elections and even echoes the partition of Poland at the end of the eighteenth century. The PiS vote is strongest in the part of the country that was under Russian rule until 1918. A more traditional Catholicism in these less developed regions, coupled with a strong sense of national identity, partly explains this historical pattern.
The economic explanation for the vote is unconvincing. Over the past 25 years, Poland has undergone tremendous transformation, driven by steady economic growth. GDP per capita has risen from 25% to 80% of the EU average, although this growth has been unevenly distributed. Nevertheless, a relatively generous welfare state has been preserved.
Clearly, however, this growth, driven by investment from Western Europe (primarily Germany) and European structural funds (3% of GDP), does not provide a sufficient electoral base for a liberal, centrist, pro-European government.
It is precisely the government’s performance that may hold the key to Trzaskowski’s failure. Having come to power at the end of 2023 with a reformist agenda, Donald Tusk’s government has only been able to implement part of its programme, and it is difficult to be the candidate of an unpopular government. Conversely, the governing coalition has been weakened by the failure of its candidate.
The main reason for the stalling of reforms is the presidential deadlock. Although the president has limited powers, he countersigns laws, and overriding his veto requires a three-fifths majority in parliament, which the governing coalition lacks.
The president also plays a role in foreign policy by representing the country, and above all by appointing judges, particularly to the Supreme Court. This has hindered the judicial reforms expected after eight years of PiS rule. It is mainly in this area that Duda has obstructed progress. The election of Nawrocki, who is known for his combative nature, suggests that the period of cohabitation will be turbulent.
What are the main international implications of Nawrocki’s election?
Donald Tusk is now more popular in Europe than in Poland; in this respect, we can speak of a “Gorbachev syndrome”. In Central Europe, the Visegrad Group (comprising Hungary, Poland, the Czech Republic, and Slovakia) is deeply divided by the war in Ukraine, but it could find common ground around a populist sovereignty led by Hungary’s Viktor Orbán. Orbán was the first to congratulate Nawrocki on his victory, followed by his Slovak neighbour Robert Fico. The Czech Republic could also see a leader from this movement come to power if Andrej Babiš wins the parliamentary elections this autumn. Nawrocki would fit right into this picture.
Since Donald Tusk returned to power, particularly during Poland’s EU presidency, which ends on 30 June, the focus has been on Poland’s “return” to the heart of the European process. Against the backdrop of the war in Ukraine and Poland’s pivotal role in coordinating a European response, the Weimar Group (comprising Paris, Berlin, and Warsaw) has emerged as a key player. Three converging factors have made this possible: the French president’s firm stance toward Russia; the new German chancellor, Friedrich Merz, breaking a few taboos on defence and budgetary discipline; and Donald Tusk, the former president of the European Council, regaining a place at the heart of the EU that his predecessors had abandoned. A framework for a strategic Europe was taking shape.
However, President Nawrocki, and the PiS more generally, are taking a different approach to the EU: they are positioning themselves as Eurosceptic opponents defending sovereignty. They are playing on anti-German sentiment by demanding reparations 80 years after the end of the Second World War and asserting Poland’s sovereignty in the face of a “Germany-dominated Europe”. The Weimar Triangle, recently strengthened by the bilateral treaty between France and Poland signed on 9 May 2025, could be weakened on the Polish–German flank.
As a historian and former director of the Second World War Museum in Gdansk and the Institute of National Remembrance, Nawrocki is well placed to exploit this historical resentment. He has formulated a nationalist memory policy centred on a discourse of victimhood, portraying Poland as perpetually under attack from its historic enemies, Russia and Germany.
While there is a broad consensus in Poland regarding the Russian threat, opinions differ regarding the government’s desire to separate the traumas of the past, particularly those of the last war, from the challenges of European integration today.
Memory issues also play a prominent role in relations with Ukraine. There is total consensus on the need to provide military support to Ukraine, under attack: this is obvious in Poland, given its history and geography – defending Ukraine is inseparable from Polish security. However, both Nawrocki and Trzaskowski have touched upon the idea that Ukraine should apologise for the crimes committed by Ukrainian nationalists during the last war, starting with the massacre of more than 100,000 Poles in Volyn (Volhynia), in north-western Ukraine, by Stepan Bandera’s troops.
Alongside memory policy, Nawrocki and the PiS are calling for the abolition of the 800 zloty (190 euros) monthly allowance paid to Ukrainian refugees. Poland had more than one million Ukrainian workers prior to the war, and more than two million additional workers have arrived since it started, although around one million have since relocated to other countries, primarily Germany and the Czech Republic.
Prior to the second round of the presidential election, Nawrocki readily signed the eight demands of the far-right candidate Sławomir Mentzen, which included ruling out Ukraine’s future NATO membership. Playing on anti-Ukrainian (and anti-German) sentiment, alongside Euroscepticism and sovereignty, is one of the essential elements of the new president’s nationalist discourse.
A Central and Eastern European Trumpism?
Certain themes of the Polish election converge with a trend present throughout Central and Eastern Europe. We saw this at work in the Romanian presidential election, where the unsuccessful far-right nationalist candidate, George Simion, came to Warsaw to support Nawrocki, just as the winner, the pro-European centrist Nicușor Dan, lent his support to Trzaskowski. Nawrocki’s success reinforces an emerging “Trumpist” movement in Eastern Europe, with Viktor Orbán in Budapest seeing himself as its self-proclaimed leader. A year ago, Orbán coined the slogan “Over there (in the United States), it’s MAGA; here, it will be MEGA: Make Europe Great Again”. The “Patriots for Europe” group, launched by Orbán last year, is intended to unify this movement within the European Parliament.
American conservative networks, through the Conservative Political Action Conference (CPAC), a gathering of international hard-right figures, and the Trump administration are directly involved in this process. Shortly before the presidential election, Nawrocki travelled to Washington to arrange a photo opportunity with Trump in the Oval Office.
“If you (elect) a leader that will work with President Donald J. Trump, the Polish people will have a strong ally that will ensure that you will be able to fight off enemies that do not share your values. […] You will have strong borders and protect your communities and keep them safe, and ensure that your citizens are respected every single day. […] You will continue to have a U.S. presence here, a military presence. And you will have equipment that is American-made, that is high quality.”
“Fort Trump” is the name outgoing president Andrzej Duda gave to the US military base financed by Poland under a bilateral agreement signed with Donald Trump during his first term, in 2018. Similarly, the US House Committee on Foreign Affairs sent a letter to the President of the European Commission accusing her of applying “double standards”, pointing out that EU funds had been blocked when the PiS was in power, and claiming that European money had been used to influence the outcome of the Polish presidential election in favour of Trzaskowski. The letter was posted online on the State Department website. Prioritising the transatlantic link at the expense of strengthening Europe was one of the issues at stake in the Warsaw presidential election.
CPAC is playing a significant role in building a Trumpist national-populist network based on rejecting the “liberal hegemony” established in the post-1989 era, regaining sovereignty from the EU, and defending conservative values against a “decadent” Europe. Beyond the Polish presidential election, the goal seems clear: to divide Europeans and weaken them at a time when the transatlantic relationship is being redefined.
Jacques Rupnik does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.
Aboriginal and Torres Strait Islander readers are advised this article contains names and images of deceased people.
The brutal homicide of 15-year-old Noongar Yamatji boy, Cassius Turvey, by a group of white men revealed the racial schisms in Western Australian society. Turvey was walking home from school in October 2022 when he was attacked and beaten to death.
On Friday, the Western Australian Supreme Court sentenced the three perpetrators. Twenty-nine-year-old Brodie Palmer and 24-year-old Jack Brearley were found guilty of murder and sentenced to life imprisonment.
A third man, 27-year-old Mitchell Forth, was convicted of manslaughter and sentenced to 12 years behind bars.
This was an opportunity for the Supreme Court to send a strong message against racial violence. While the punishment of the men involved is clear, the role of race, and what legally qualifies as racially motivated crime, is muddier.
Wrong place, wrong time?
Racism has been front and centre of the public discussion of this tragedy from the outset.
Rallies in solidarity with Turvey’s family were held across the country, with Gumbaynggirr, Bundjalung, and Dunghutti activist Lizzie Jarrett declaring:
no black child is ever, ever, ever in the wrong place at the wrong time on their own land.
Racism at trial
Over the course of the trial, the court heard Turvey and his peers, a group of Aboriginal high school students, were approached by an angry group.
This comprised the three men convicted, along with 23-year-old Aleesha Gilmore, who was acquitted of homicide, and 21-year-old Ethan McKenzie, who, with Gilmore, was convicted of other offences relating to the attack.
Turvey was chased and Brearley fatally beat him with a metal pole.
Earlier this year, the trial of the three perpetrators heard arguments by the defendants that the actions were not racially motivated.
Rather, the defence argued they were acting in self-defence on the basis that Brearley had had his car window smashed a few days prior.
In contrast, the prosecution brought evidence of a phone call that revealed Brearley was bragging about beating Turvey, stating that “he learnt his lesson”.
The prosecution argued the homicide was not a personal gripe, but a collective response.
The prosecution didn’t allege the attack was racially motivated, but it was open to the judge to consider this basis for the homicide.
At trial, 91 witnesses came forward, some of whom gave evidence that the accused were using racial slurs.
The killing of Turvey comes after 14-year-old Elijah Doughty was targeted and killed in Kalgoorlie in 2016.
Both cases show white male motorists seeking to exact revenge on Aboriginal children over alleged vehicle offences.
This is reinforced by a penal system in which Aboriginal children are 53 times more likely to be detained than non-Aboriginal children.
What did the judge say?
On the morning of the sentence hearings, Cassius Turvey’s mother, who described her son as respected, bright, loving and compassionate, said the killing was “racially motivated” and based on “discriminatory targeting”.
This sentiment has been echoed across the country, including by June Oscar, the Aboriginal and Torres Strait Islander social justice commissioner at the Australian Human Rights Commission, in 2022.
Chief Justice Peter Quinlan strongly condemned the attacks.
However, he stated the attack was not racially motivated, despite recognising that the perpetrators were “calling them n-words and black c—ts — you in particular Mr Brearley used language like that”.
He acknowledged, however, that the attack created a justifiable “fear” of racial targeting:
it’s no surprise […] that the kids would think they were being targeted because they were Aboriginal, and the attack would create justifiable fear for them and for the broader community that this was a racially motivated attack.
This amounts to a message of general deterrence about violence and vigilante behaviour.
But messages to deter racial targeting and racial violence specifically were omitted from the public safety concerns expressed by the court.
Making racial violence invisible
Munanjahli and South Sea Islander professor Chelsea Watego, and colleagues, have remarked that the Australian psyche is more comfortable with an “abstract concern with racism; racism without actors, or rather perpetrators”.
This, they argue, sanitises racial violence and holds no one responsible.
The court demonstrated this abstract concern for racism.
The Supreme Court’s reasoning has set an impossibly high bar for racial vilification, and specifically racial violence, to be identified, denounced and redressed.
The judgement seems to relegate racism to being an unfortunate and unintended incident of co-existence, rather than willed harm.
The failure to regard the racial slurs, the targeting of a group of Aboriginal children, and the killing of one of these children, as “racially motivated”, upholds the idea that white people’s racist treatment and crimes against Aboriginal people exist in a vacuum free of a long history of colonial violence, massacres and occupation.
Thalia Anthony receives funding from the Australian Research Council.
Matthew Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.