In 2018, the Australian philosopher Kate Manne coined the word “himpathy” to describe what she called “the inappropriate and disproportionate sympathy powerful men often enjoy in cases of sexual assault, intimate partner violence, homicide and other misogynistic behavior”.
What makes somebody more likely to feel himpathy, whether toward a public figure facing accusations or toward somebody in their own workplace?
In this episode of The Conversation Weekly podcast, we speak to a human behaviour expert whose research seeks to understand what makes some people more inclined to support perpetrators of sexual misconduct than the victims.
Samantha Dodson is an assistant professor of organisational behaviour and human resources at the University of Calgary in Canada. She first started researching the ways people react to accusations of sexual misconduct around the time of the #MeToo movement, as women came forward with accusations of sexual harassment in the wake of the Harvey Weinstein case.
Dodson and her colleagues wanted to understand why some people are predisposed to express sympathy towards male perpetrators of sexual misconduct, or himpathy. Over a series of five studies, combining analysis of public comments on X related to the #MeToo movement with lab-based psychology experiments, her team used moral foundations theory to build a profile of the kinds of people more likely to be himpathetic.
Moral foundations theory argues that there are innate moral concerns that everybody holds to different levels. These concerns include respect for authority, loyalty, staying pure, being fair and being caring toward other people.
Don’t rock the boat
What we found is that when people strongly value things like loyalty, respect for authority and purity, they’re more likely to feel sympathy toward the man accused of sexual misconduct and feel anger toward the women who made that allegation.
Dodson says people who hold these moral values very strongly are more likely to see allegations as a threat to the stability of a company, or institution. And, as a result, they’re also less likely to believe a victim.
It also leads to people being more likely to seek punishment for the women who made the accusations and less likely to seek punishment for the men who have been accused.
Overall, Dodson found the vast majority of people in her studies were “not himpathetic”; only a small subset of people react this way.
The challenge is if those people are in positions of authority, or … if you have one person that you work with who’s himpathetic and you’re a victim you might experience some iciness from them or ostracism.
Drawing on these findings, their work also examines how managers can better handle accusations of sexual harassment in the workplace.
Listen to Samantha Dodson talk about her research and the recommendations from it on The Conversation Weekly podcast, which also features an introduction from Eleni Vlahiotis, business and economy editor at The Conversation in Canada.
A transcript of this episode is available on Apple Podcasts.
This episode of The Conversation Weekly was written and produced by Katie Flood with assistance from Mend Mariwany. Sound design was by Michelle Macklem, and our theme music is by Neeta Sarl. Gemma Ware is the executive producer.
Last month, a delegation led by Brendan Crabb, head of the Burnet Institute, a prestigious medical research body, met Anthony Albanese in the prime minister’s Parliament House office.
Its members, who included Lidia Morawska from Queensland University of Technology, a world-leading expert on air quality and health, also blitzed ministers and staffers. They were pitching for the federal government to spearhead a comprehensive policy on clean indoor air and for the issue to be put on the national cabinet’s agenda.
They pointed out to Albanese that indoor air is an outlier in our otherwise comprehensive public health framework. Despite people spending the majority of their time inside, indoor air quality is mostly unregulated, in contrast to the standards that apply to, for example, food and water.
There are multiple health and economic reasons to be concerned about this air quality but a major one is to limit the transmission of airborne diseases, such as COVID.
For many of us, COVID has become just a bad memory, despite its lasting and mixed legacies. For instance, without the pandemic, fewer people would now be working from home. More small businesses would be flourishing in our CBDs. Arguably, fewer children would be trying to catch up from inadequate schooling.
While the media have largely lost interest in COVID, and people are now rather blase about it, the disease is still taking a toll.
In 2023 there were about 4,600 deaths attributed to COVID, and almost certainly more in reality, given Australia that year recorded 8,400 “excess deaths” (defined as actual deaths above expected deaths).
Up to July this year there were 2,503 COVID deaths.
In nursing homes, while survival rates from COVID are much improved with vaccination and antivirals, as of September 19 there were 117 active outbreaks, with 59 new outbreaks in the past week. There had been 900 deaths for the year so far.
Long COVID has become a serious issue, with varying respiratory, cardiac, cognitive and immunological symptoms. It is estimated between 200,000 and 900,000 people in Australia currently have long COVID.
The Albanese government is presently awaiting the report it commissioned into how the COVID pandemic was handled.
The inquiry has looked at the performance of the Morrison government, but its terms of reference didn’t include the states. That limits its usefulness, but there were politics involved, given the high-profile state Labor governments of the time.
Not that the state and territory leaders of that time are around anymore (apart from the ACT’s Andrew Barr). Those faces that became so familiar from their daily news conferences have disappeared into the never-never: Victoria’s Dan Andrews, Western Australia’s Mark McGowan, New South Wales’ Gladys Berejiklian, Queensland’s Annastacia Palaszczuk.
COVID variously made or tarnished leaders’ reputations. McGowan, in particular, reached stratospheric heights of popularity. Andrews deeply divided people.
In general, however, COVID boosted support for leaders and increased public trust in them and in government. In times of uncertainty, the public looked to known institutions and to authority figures. Since then, trust has eroded again.
Experts came into their own during the pandemic but then found themselves in the middle of the political bickering. In retrospect, some of them were wrong.
In the broad, especially in terms of the death rate and the economy, Australia navigated the crisis well. But drill down, and the story is more complex, as documented by two leading economists, Steven Hamilton (based in Washington and connected to the Australian National University) and Richard Holden (from UNSW).
In their just-published book, Australia’s Pandemic Exceptionalism, their bottom-line conclusion is that Australia was very impressive in its (vastly expensive) economic response but it was a mixed picture on the health side.
While Australia was quick out of the blocks in closing the national border and bringing in other measures, it fell down dramatically on two fronts. The Morrison government failed to order a wide variety of vaccines and it failed to buy enough Rapid Antigen Tests (RATs).
The “vaccine procurement strategy was an unmitigated disaster,” Hamilton and Holden write. This was not just “the greatest failure of the pandemic – it was arguably the greatest single public policy failure in Australian history”.
“We put all our vaccine eggs in just two baskets”, both of which failed to differing degrees. This was “a terrible risk to take. Pandemics are times for insurance, not gambling,” they write.
“And while our tax and statistical authorities marshalled their forces to operate much faster and more nimbly to serve the desperate needs of a government facing a once-in-a-century crisis, our medical regulatory complex repeatedly ignored international evidence and experience, and our political leaders capitulated to their advice. And then the prime minister told us that when it came to getting Australians vaccinated: ‘it’s not a race’”.
The failure to order every vaccine on the horizon meant that when production or supply problems arose with those on order, the rollout was delayed.
After this bungle, “stunningly, we turned around and repeated these same mistakes all over again” by failing to obtain and freely distribute massive numbers of RATs. In this failure, “our federal government showed the same lack of foresight, the same penny-wise but pound-foolish mindset that it had displayed in the vaccine rollout”.
The authors blame Scott Morrison, then-health minister Greg Hunt, then-chief medical officer Brendan Murphy, the Therapeutic Goods Administration (TGA), and the Australian Technical Advisory Group on Immunisation (ATAGI) for the health failures, which prolonged the lockdowns, cost lives and delayed reopening.
Urging better preparation for the next pandemic, Hamilton and Holden have a list of suggestions. They stress we need to ensure we have mRNA vaccine manufacturing capability (on which there is fairly good progress). We must get vaccine procurement “right from the start” regardless of cost. Huge quantities of RATs should be procured as soon as they become available, ready to be used immediately.
A complete overhaul of the medical-regulatory complex should be undertaken. As well, Australia should continue to invest in “economic infrastructure”. In the pandemic, the economic effort was facilitated by having a single touch payroll system. “The first obvious candidate for improvement is a real-time GST turnover reporting capability.”
Perhaps a comprehensive indoor clean air policy could be added to the infrastructure list.
The government’s review will have its own recommendations. Crabb and his colleagues hope they include attention to indoor air quality, following advice from the Chief Scientist and the National Science and Technology Council.
Members of the delegation say they received an attentive hearing from the PM.
Anna-Maria Arabia, chief executive of the Australian Academy of Science, and a member of the delegation, says Albanese “understood that improving indoor air quality is a cornerstone requirement to preparing for future pandemics and [he] was attuned to the practical implications of having good indoor air quality systems, including schools and workplaces being able to stay open and functional, reduce absenteeism and boost productivity”.
What’s needed beyond awareness, however, is timely policy action. Pandemics don’t give much notice of their arrival.
Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
In the past months, the planet has experienced the hottest June, the hottest August, the hottest boreal summer and the hottest day on record, with a global average temperature of 17.16°C on 22 July. While many have been getting on with their lives as best they can, there are many more who are feeling the heat, as levels of climate anxiety continue to rise. At risk are people experiencing climate impacts in the Global South, but also professionals in the Earth sciences documenting and modelling them.
So, how can we channel our alarm in a way that doesn’t paralyse us, but propel us into action? To answer this question, The Conversation Europe spoke to one of the world’s most public-facing climate scientists, the Vice-Chair of the Intergovernmental Panel on Climate Change (IPCC), Diána Ürge-Vorsatz.
Could you start off by describing your work? According to you, what have been the highlights of your career as a climate scientist?
So I mostly work in the area of energy efficiency. I have done a lot of modelling, including to demonstrate how higher-efficiency buildings could reduce carbon emissions. Among other things, I have alerted the world to what we call the carbon lock-in risks of inefficient building retrofits — when fossil fuel-intensive systems perpetuate, delay, or prevent the transition to low-carbon alternatives.
I’ve always tried to concentrate on solutions which not only allow us to solve environmental issues, but also to increase human well-being and meet other societal goals. That’s because I come from a country [Hungary] where I see that while the environment and climate change are important, they typically play second fiddle to other priorities. Hence, I believe we have to solve these things in a way that makes it worthwhile.
Diána Ürge-Vorsatz, 2024. Provided by the author
My work prompted lawmakers to revise the EU’s legislation to boost building energy efficiency – the Energy Performance of Buildings Directive – in 2010. On the first day the Fidesz government was re-elected that year, I showed them how many jobs could be created through high-efficiency building retrofits. Based on our research, they committed to refurbishing the entire building stock to slash energy consumption by 60% – a really very ambitious goal, and the first such commitment in the world. Unfortunately, a few months later, they changed direction and pursued other energy policy priorities instead.
That’s one of my concerns, yes, because the collapse of the AMOC [the Atlantic Meridional Overturning Circulation] is amongst the tipping points that would exert their impact the earliest.
If we look at other Earth system tipping points, most of them require a century, several centuries, if not several millennia to exert their full impact. If AMOC collapses, it could exert its full impact within two to three decades. Very strong impacts are clearly predicted for Europe as well as other regions. More and more papers have shown evidence that its collapse could already be underway. That’s definitely been alarming.
When you started on this career path, would you describe yourself as prey to eco-anxiety? And if not, was there a turning point when it appeared?
No, when I started I don’t think we had any knowledge that would have amounted to any existential threat, and it was still not so tangible that so many things could go wrong.
I was studying for my PhD at UCLA and UC Berkeley from 1992 to 1996. In the LA Times, there was a two-page advertisement calling for artists to design artwork that would scare anyone away, to be placed above the Yucca Mountain deep geological repository for high-level nuclear waste, so that even if people no longer spoke English or no longer understood our script, they could still understand that there was something really dangerous underneath.
At that point, I remember thinking: “Oh my God, if you just can’t dig or walk wherever you want anymore, that’s just wrong. We cannot do that to future generations.”
Then there’s the never-ending news cycle, making it hard to pinpoint specific moments that alarm you. One that comes to mind is the discovery over time that forever chemicals – per- and polyfluoroalkyl substances (PFAS) – are everywhere, even in the most remote parts of the earth; even in Antarctica, rain is no longer of drinking-water quality. This isn’t going to go away — precisely because PFAS are what we call forever chemicals. We will never be able to vacuum the planet clean of PFAS. Likewise with microplastics. When you start looking ahead with your eyes open, it can be really scary.
And how do you experience the intimate knowledge of that alarming data on the one hand, and the public’s, and above all the elites’, climate inaction on the other?
Well, I wouldn’t quite call it “climate inaction”. It’s easy to dwell on the idea that the glass is half empty. But in fact, the glass is half full. Lots has been done since the 2015 Paris Agreement, which was itself a miracle.
You were there when the deal was struck, weren’t you? Could you tell us what it was like?
Well, it was truly euphoric, because before that, if a scientist dared mention [the threshold of] 1.5°C [of warming above pre-industrial levels], you were a tree-hugger and an advocate, not a scientist. You did not get funding.
And suddenly that became a political reality, or at least a political goal. I think that was really amazing for me, because at that time we didn’t have science clearly backing that 1.5°C could actually be achieved. So in the run-up to the Paris Agreement, the United Nations Framework Convention on Climate Change (UNFCCC) asked the IPCC to produce a report on 1.5°C. I remember talking about it with colleagues at the time, who told me: “That’s crazy, this train is gone, let’s not do it”.
Then the months went by and those voices faded. By the time we got to the plenary meeting in January, there was not a single voice saying “We shouldn’t do this report”. Scientists changed course and put so much effort into saying: “Okay, can this be done well? Let’s actually see”. Then they ran their models and figured out that not only can it be done — there are so many ways we can get there. Yes, I know it’s now increasingly unlikely that we will still meet it, but it created a lot of momentum.
One fact that we don’t emphasize enough: we have prevented the world from warming by five to six degrees by the end of the century, and we are now at worst saying perhaps four degrees, but more likely 2.5°C to 3.5°C.
How do you communicate with your children about the climate crisis? For example, are there things that you choose not to tell them in order to protect them?
I don’t hide anything from them. We quite frequently talk about the gravity of the situation, because I cannot help unloading on them in the evening all the negative experiences and facts I learned during the day – at dinners and so on.
One of my daughters did experience quite severe environmental anxiety for almost two years when she was about nine years old. She had come with me to a TV recording and they allowed her into the studio. Before my interview, they played this intense clip about storms and fires – typical climate impacts. After that, she was really very afraid for a long time.
How did that fear translate itself?
She couldn’t sleep very well. She was constantly afraid physically. She would tell me: “My god, is this going to burn around us? Are we going to have floods?”
A nine-year-old cannot, of course, fully comprehend yet how these risks will unfold in the future. I think she was put in this state of fear and anxiety. That’s why it was also hard to manage: it wasn’t anything concrete, or anything she could verbally express or phrase clearly.
And I couldn’t say, “Look darling, it’s not going to happen.”
And how did she manage to surface from that state of paralysis?
After a while, I think she understood that it wasn’t yet threatening her life. But all of my children are still concerned and many of them want to contribute to fighting climate change in some way.
For example, my eldest daughter was studying medicine, but after her second year, she spent the entire summer in tears. She was deeply passionate about climate action and believed there were only two paths forward. Either she could still save the planet by becoming an architect to design zero-energy buildings, or, if it was too late, she should focus on mitigating the damage by remaining in medicine. After two months of struggling with this dilemma, she abandoned her dream of architecture and decided to continue with medical school. It was heartbreaking for me to see how little hope they had of solving the climate crisis.
What would your advice be for parents whose children are suffering from eco-anxiety?
I think the best way is to turn anxiety into action — to explain to them that they, and all of us, still have agency. Even though we are small, we have a very important impact. We can vote. We can choose a profession where we can change the world. We can be role models and we can influence our peers through social media and many other ways.
So we should tell them about the five roles the IPCC presents in the 6th Assessment Report (investor, consumer, citizen, role model, professional) that individuals can play to curb climate change – it’s not only about whether we choose to take a plastic bag or not. The future isn’t something that happens to us; it’s in our hands. We are all part of systems where each of us can influence more than we think.
If your children were to start striking for the climate, would you support them?
Yes, I think protests are one of the most important ways we can have an impact. Besides, children often don’t have any other tools. That’s also why they feel anxiety: they don’t yet have influence. They don’t have any money to spend, or any voting rights yet. They don’t yet have a profession through which they can influence the world. They feel powerless.
And often children’s only power is to protest. If we give them other means by which they can influence the processes, that’d be even better.
Diána Ürge-Vorsatz does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research organisation.
Source: The Conversation – Africa – By Danny Bradlow, Professor/Senior Research Fellow, Centre for Advancement of Scholarship, University of Pretoria
The statistics are stark: 54 governments, of which 25 are African, are spending at least 10% of their revenues on servicing their debts; 48 countries, home to 3.3 billion people, are spending more on debt service than on health or education.
Among them, 23 African countries are spending more on debt service than on health or education.
While the international community stands by, these countries are servicing their debts and defaulting on their development goals.
The current G20 process requires the debtor to first discuss its problems with the International Monetary Fund (IMF) and obtain its assessment of how much debt relief it needs. Then it must negotiate with its official creditors – international organisations, governments and government agencies – over how much debt relief they will provide. Only then can the debtor reach an agreement – on terms comparable to those of the official creditors – with its commercial creditors.
Unfortunately, this process has been sub-optimal.
One reason is that it works too slowly to meet the urgent needs of distressed borrowers. As a result, it condemns debtor countries to financial limbo. The resulting uncertainty is not in anyone’s interest. For example, Zambia has been working through the G20’s cumbersome process for more than three and a half years and has not yet finalised agreements with all its creditors.
We propose a two-part approach that would improve the situation of sovereign debtors and their creditors. This proposal is based on the lessons we have learned from our work on the legal and economic aspects of developing country debt, particularly African debt.
First, we suggest that official creditors and the IMF create a strategic buyer of “last resort” that can purchase the bonds of debt distressed countries and refinance them on better terms.
Second, we recommend that all parties involved in sovereign debt restructurings adopt a set of principles that they can use to guide the debtor and its creditors in reaching an optimal agreement and monitoring its implementation.
The current approach fails to deal effectively and fairly with both the concerns of the creditors and all the debtor’s legal obligations and responsibilities. Our proposed solution would offer debtors debt relief that does not undermine their ability to meet their other legal obligations and responsibilities, while also accommodating private creditors’ preference for cash payments.
Our proposal is not risk-free, and buybacks are not appropriate for all debtors. Nevertheless, it offers a principled and feasible approach to dealing with a silent debt crisis that threatens to undermine international efforts to address global challenges such as climate change, poverty and inequality.
It uses the IMF’s existing resources to meet both the bondholders’ preferences for immediate cash and the developing countries’ need to reduce their debt burdens in a transparent and principled way.
It also helps the international community avoid a widespread default on debt and development.
Bondholders are a major problem
Foreign bondholders, who are the major creditors of many developing countries, have proven particularly reluctant to provide substantive debt relief in a timely manner. In theory, they should be more flexible than official creditors.
Developing countries have been paying bondholders a premium to compensate them for providing financing to borrowers that are perceived to be risky. As a result, bondholders have already received larger payouts than official creditors. Therefore, they should be better placed than official creditors to assist the debtor in the restructuring processes.
However, despite having received large returns from defaulted bonds, bondholders have remained obstinate in debt restructurings.
Our proposal seeks to overcome this hurdle in a way that is fair to debtors, creditors and their respective stakeholders.
How it would work
First, the official creditors and the IMF should create and fund a strategic buyer “of last resort” who can purchase distressed (and expensive) debt at a discount from bondholders. The buyer, now the creditor of the country in distress, can repackage the debt and sell it to the debtor country on more manageable terms. The net result is that the bondholders receive cash for their bonds, while the debtor country benefits from substantial debt relief. In addition, the debtor and its remaining official creditors benefit from a simplified debt restructuring process.
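The arithmetic of such a buyback can be sketched with a toy example. The numbers below (the discount, the buyer's margin) are purely illustrative assumptions, not figures from the proposal itself:

```python
# Illustrative buyback arithmetic -- all figures are hypothetical assumptions.
face_value = 100.0                             # bond face value owed by the debtor
discount = 0.40                                # buyer purchases at a 40% discount
purchase_price = face_value * (1 - discount)   # cash the bondholders receive

# The buyer, now the creditor, repackages the claim and sells it back to the
# debtor at a small margin over its purchase price, on more manageable terms
# (longer maturity, lower rate).
margin = 0.05
new_claim = purchase_price * (1 + margin)      # what the debtor now owes

debt_relief = face_value - new_claim           # roughly 37% of face value forgiven
print(f"Debt relief: {debt_relief:.0f}% of face value")
```

The bondholders get cash immediately, the debtor's stock of debt shrinks, and the buyer's margin covers its costs; the real terms would of course be negotiated case by case.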
This concept has precedent. In 1989, as part of the Heavily Indebted Poor Countries Initiative, the international community's effort to deal with the then-existing debt burdens of poor countries, the World Bank Group established the Debt Reduction Facility, which helped eligible governments repurchase their external commercial debts at deep discounts. It completed 25 transactions which helped erase approximately US$10.3 billion in debt principal and over US$3.5 billion in interest arrears.
Some individual countries have also bought back their own debt. In 2009, Ecuador repurchased 93% of its defaulted debt at a deep discount. This enabled the government to reduce its debt stock by 27% and promote economic growth in subsequent years.
Unfortunately, the countries currently in debt distress lack sufficient foreign reserves to pursue such a strategy. Hence, they need to find a “friendly” buyer of last resort.
The IMF is well positioned to play this role. It has the mandate to support countries during financial crises. It also has the resources to fund such a facility. It can use a mix of its own resources, including its gold reserves, and donor funding, such as a portion of the US$100 billion in Special Drawing Rights (SDR), the IMF’s own reserve currency, which rich economies committed to reallocate for development purposes.
Such a facility, for example, would have enabled Kenya to refinance its debts at the SDR interest rate, currently at 3.75% per year, rather than at the 10.375% rate it paid in the financial markets.
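A back-of-the-envelope calculation shows what that rate gap means in dollar terms. The US$2 billion principal below is a hypothetical figure chosen purely for illustration; only the two interest rates come from the comparison above:

```python
# Annual interest cost at the two rates cited above.
# The principal is a hypothetical, illustrative amount.
principal = 2.0e9          # assumed bond principal, USD
market_rate = 0.10375      # rate paid in the financial markets
sdr_rate = 0.0375          # current SDR interest rate

market_interest = principal * market_rate   # roughly US$207.5 million a year
sdr_interest = principal * sdr_rate         # roughly US$75 million a year
annual_saving = market_interest - sdr_interest

print(f"Annual interest saving: US${annual_saving / 1e6:.1f} million")
```

On these assumed figures, refinancing at the SDR rate would free up well over US$100 million a year that could go to health or education instead of debt service.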
It is noteworthy that the 47 low-income countries identified as in need of debt relief have just US$60 billion in outstanding debts owed to bondholders. Our proposed buyer of last resort would help reduce the burden of these countries to manageable levels.
Second, we propose that both debtors and creditors should commit to the following set of shared principles, based on internationally accepted norms and standards for debt restructurings.
Guiding principles
1. Guiding norms: Sovereign debt restructurings should be guided by six norms: credibility, responsibility, good faith, optimality, inclusiveness and effectiveness.
Optimality means that the negotiating parties should aim to achieve an outcome that, considering the circumstances in which the parties are negotiating and their respective rights, obligations and responsibilities, offers each of them the best possible mix of economic, financial, environmental, social, human rights and governance benefits.
2. Transparency: All parties should have access to the information that they need to make informed decisions.
3. Due diligence: The sovereign debtor and its creditors should each undertake appropriate due diligence before concluding a sovereign debt restructuring process.
4. Optimal outcome assessment: The parties should publicly disclose why they expect their restructuring agreement to result in an optimal outcome.
5. Monitoring: There should be credible mechanisms for monitoring the implementation of the restructuring agreement.
6. Inter-creditor comparability: All creditors should make a comparable contribution to the restructuring of debt.
7. Fair burden sharing: The burden of the restructuring should be fairly allocated between the negotiating parties.
8. Maintaining market access: The process should be designed to facilitate future market access for the borrower at affordable rates.
The G20's current efforts to address the silent debt crisis are failing. They are contributing to the likely failure of low-income countries in Africa and the rest of the global south to offer all their residents the possibility of leading lives of dignity and opportunity.
Danny Bradlow, in addition to his university position, is Co-Chair of the T20 task force on sovereign debt, and Co-Chair of the Academic Circle on the Right to Development.
Marina Zucker-Marques is a co-chair for the Brazil T20 Task Force 3 on reforming the International Financial Architecture.
Kevin P. Gallagher does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This chart essentially shows the stresses that New Zealand’s public health system can expect to face. I have analysed the death data by age, covering all deaths from July 1998 to June 2024. For those years (using June years) I have looked at every age of death from 16 to 99 and every birth year from 1900 to 2022, and counted deaths by birth-year.
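The counting exercise described above (tally deaths by birth year within each age of death, then take the most frequent birth year) can be sketched in a few lines of Python. The data frame below is a tiny made-up stand-in for the real 1998–2024 death registrations:

```python
import pandas as pd

# Hypothetical, illustrative data: one row per death, with age at death and
# birth year. The real analysis uses NZ death registrations, July 1998-June 2024.
deaths = pd.DataFrame({
    "age_at_death": [95, 95, 95, 95, 80, 80, 80],
    "birth_year":   [1928, 1928, 1928, 1927, 1943, 1944, 1944],
})

# For each age of death, find the most frequent (modal) birth year.
modal_birth_year = (
    deaths.groupby("age_at_death")["birth_year"]
          .agg(lambda s: s.mode().iloc[0])
)

print(modal_birth_year)
```

Plotting the frequency with which each birth year appears as the modal year across all ages of death would reproduce the shape of the chart discussed here.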
For death-age 95, the most frequent birth year was 1928. As we would expect, most deaths at these high ages occurred in 2022 or 2023, due to COVID-19. Thus, birth years in the 1920s feature in the chart.
Birth years in the early 1930s don't feature so much because of the low birth numbers in those years. With fewer people born in, say, 1933, that year will rarely be the most frequent birth year for any given age of death.
Birth years around 1950 do not feature. This is both because the classic baby boomer generation is a healthy generation, and also because there were not as many births in the decade after World War Two as there were in the following two decades. So, while classic baby boomers will place an increasing burden on the public health system, the biggest burdens will come from those born between 1955 and 1975. (Also, classic baby boomers have high levels of private health insurance; this will be less affordable for subsequent generations as they age.)
Birth years from 1955 to 1964 feature most strongly, mainly because births in New Zealand peaked in those years. This birth cohort will place massive pressure on New Zealand’s public health system. People dying since 1998 between age 37 and age 67 are most likely to have been born in the years either side of 1960.
The cohort born 1966 to 1974 will also place huge pressures on Te Whatu Ora (Health New Zealand), in part because there are likely to be very many new Aotearoans in this birth cohort. By and large, immigrants are healthier than the New Zealand born population, because their health is considered before New Zealand residency can be granted.
The late 1970s represents a ‘baby-bust’ generation, like the early 1930s. Hence these ‘Gen-Y’ people don’t feature in this chart. The frequencies for the late 1980s’ and early 1990s’ birth years reflect the ‘baby blip’ which began in 1987. Also, these birth years relate to deaths of young people, which, being less frequent, can be a bit more random.
People born in 1939 turn 85 this year. From 1938, birth numbers generally increased each year until the early 1960s. The impact of an aging population on New Zealand’s public healthcare system is certainly beginning. This impact will escalate each year for at least the next 25 years. People born in 1961 will turn 85 in 2046.
By contrast, we have been lulled into complacency because the unusually small early-1930s’ birth cohort placed a substantially below-average pressure on public healthcare.
We note that death numbers are a proxy for the demand for high-intensity healthcare. People born after 1955 are already making considerable demands on Aotearoa New Zealand’s health care.
*******
Keith Rankin (keith at rankin dot nz), trained as an economic historian, is a retired lecturer in Economics and Statistics. He lives in Auckland, New Zealand.
Even by these standards, the latest proclamation from OpenAI chief executive Sam Altman, published on his personal website this week, seems remarkably hyperbolic. We are on the verge of “The Intelligence Age”, he declares, powered by a “superintelligence” that may just be a “few thousand days” away. The new era will bring “astounding triumphs”, including “fixing the climate, establishing a space colony, and the discovery of all of physics”.
However, even setting aside these motivations, it’s worth taking a look at some of the assumptions behind Altman’s predictions. On closer inspection, they reveal a lot about the worldview of AI’s biggest cheerleaders – and the blind spots in their thinking.
Steam engines for thought?
Altman grounds his marvellous predictions in a two-paragraph history of humanity:
People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed impossible.
This is a story of unmitigated progress heading in a single direction, driven by human intelligence. The cumulative discoveries and inventions of science and technology – Altman reveals – have led us to the computer chip and, inexorably, to artificial intelligence which will take us the rest of the way to the future. This view owes much to the futuristic visions of the singularitarian movement.
Such a story is seductively simple. If human intelligence has driven us to ever-greater heights, it is hard not to conclude that better, faster, artificial intelligence will drive progress even farther and higher.
This is an old dream. In the 1820s, when Charles Babbage saw steam engines revolutionising human physical labour in England’s industrial revolution, he began to imagine constructing similar machines for automating mental labour. Babbage’s “analytical engine” was never built, but the notion that humanity’s ultimate achievement would entail mechanising thought itself has persisted.
According to Altman, we’re now (almost) at that mountaintop.
Deep learning worked – but for what?
The reason we are so close to the glorious future is simple, Altman says: “deep learning worked”.
Deep learning is a particular kind of machine learning that involves artificial neural networks, loosely inspired by biological nervous systems. It has certainly been surprisingly successful in a few domains: deep learning is behind models that have proven adept at stringing words together in more or less coherent ways, at generating pretty pictures and videos, and even contributing to the solutions of some scientific problems.
So the contributions of deep learning are not trivial. They are likely to have significant social and economic impacts (both positive and negative).
But deep learning “works” only for a limited set of problems. Altman knows this:
humanity discovered an algorithm that could really, truly learn any distribution of data (or really the underlying “rules” that produce any distribution of data).
That’s what deep learning does – that’s how it “works”. That’s important, and it’s a technique that can be applied to various domains, but learning the rules that produce a distribution of data is far from the only kind of problem that exists.
What is interesting here is that Altman thinks “rules from data” will go so far towards solving all humanity’s problems.
There is an adage that a person holding a hammer is likely to see everything as a nail. Altman is now holding a big and very expensive hammer.
Deep learning may be “working” but only because Altman and others are starting to reimagine (and build) a world composed of distributions of data. There’s a danger here that AI is starting to limit, rather than expand, the kinds of problem-solving we are doing.
What is barely visible in Altman’s celebration of AI is the expanding set of resources needed for deep learning to “work”. We can acknowledge the great gains and remarkable achievements of modern medicine, transportation and communication (to name a few) without pretending these have not come at a significant cost.
They have come at a cost both to some humans – for whom the gains of the global north have meant diminishing returns – and to animals, plants and ecosystems, ruthlessly exploited and destroyed by the extractive might of capitalism plus technology.
Although Altman and his booster friends might dismiss such views as nitpicking, the question of costs goes right to the heart of predictions and concerns about the future of AI.
Altman is certainly aware that AI is facing limits, noting “there are still a lot of details we have to figure out”. One of these is the rapidly expanding energy costs of training AI models.
Microsoft recently announced a US$30 billion fund to build AI data centres and generators to power them. The veteran tech giant, which has invested more than US$10 billion in OpenAI, has also signed a deal with owners of the Three Mile Island nuclear power plant (infamous for its 1979 meltdown) to supply power for AI. The frantic spending suggests there may be a hint of desperation in the air.
Magic or just magical thinking?
Given the magnitude of such challenges, even if we accept Altman’s rosy view of human progress up to now, we might have to acknowledge that the past may not be a reliable guide to the future. Resources are finite. Limits are reached. Exponential growth can end.
What’s most revealing about Altman’s post is not his rash predictions. Rather, what emerges is his sense of untrammelled optimism in science and progress.
This makes it hard to imagine that Altman or OpenAI takes seriously the “downsides” of technology. With so much to gain, why worry about a few niggling problems? When AI seems so close to triumph, why pause to think?
What is emerging around AI is less an “age of intelligence” and more an “age of inflation” – inflating resource consumption, inflating company valuations and, most of all, inflating the promises of AI.
It’s certainly true that some of us do things now that would have seemed magic a century and a half ago. That doesn’t mean all the changes between then and now have been for the better.
AI has remarkable potential in many domains, but imagining it holds the key to solving all of humanity’s problems – that’s magical thinking too.
Hallam Stevens has previously received funding from the Ministry of Education (Singapore), the National Heritage Board (Singapore), the National Science Foundation (USA) and the Wenner-Gren Foundation.
Source: The Conversation (Au and NZ) – By Anthony Scott, Professor of Health Economics and Director, Centre for Health Economics, Monash Business School, Monash University
A battle between private hospitals and private health insurers is playing out in public.
At its heart is how much health insurers pay hospitals for their services, and whether that’s enough for private hospitals to remain viable.
Concerns over the viability of the private health system have caught the attention of the federal government, which has launched a review into private hospitals whose findings have yet to be made public.
But are private hospitals really in trouble? And if so, is more public funding the answer?
Private hospitals vs private health insurers
Many private hospital operators have reported significant pressures since the start of the COVID pandemic, including staff shortages.
Inflationary pressures have increased the costs of supplies and equipment, pushing up the costs of providing hospital care.
Now, private hospitals have publicised their difficult contract negotiations with private health insurers in an attempt to gain support and help their case.
Healthscope, which runs 38 for-profit private hospitals in Australia, has been threatening to end agreements with private health insurers.
St Vincent’s, which operates ten not-for-profit private hospitals, announced it would end its contract with nib (one of Australia’s largest for-profit health insurers) but then reached an agreement.
UnitingCare Queensland, which operates four private hospitals, announced it would end its contract with the Australian Health Service Alliance, which represents more than 20 small and medium non-profit private health insurers. Since then, the two parties have also kissed and made up.
Why should we care?
There are three reasons why viability of the private health sector affects us all, regardless of whether we have private health insurance or use private hospitals.
1. Taxpayers subsidise the private health system
Australian taxpayers subsidised private health insurance premiums by A$6.3 billion (in premium rebates) in 2021–22. Much of this makes its way to private hospitals. Medicare also subsidised fees for medical services delivered for private patients in private and public hospitals to the tune of $3.81 billion in 2023–24.
But when the going gets tough, the private health sector (both hospitals and health insurers) turns to the government for more handouts.
So we should be concerned about the value we currently get from our public investment into the private health system, and if more public investment is warranted.
2. Public hospitals may be affected if private hospitals close
Calls for greater government support for private health have long argued that a larger private hospital sector would help reduce pressures on the public system.
Indeed, this was the justification for a series of incentives introduced from the late 1990s to support private health insurance in Australia.
However, the extent of this is hotly debated. Recent evidence shows higher private health insurance coverage leads to only very small falls in waiting times in public hospitals.
While it is possible the closure of a few private hospitals might lead some patients to seek care in public hospitals, this shift is unlikely to be large or to increase waiting times much.
3. Fewer private beds, but is that a bad thing?
If unviable private hospitals close or merge, we’d expect to see fewer private hospital beds overall.
Fewer private hospital beds is not necessarily bad news. Mergers of small private day hospitals, in particular, might make them more efficient and lead to lower costs, which in turn lowers health insurance premiums.
We might also need fewer private beds. This is due to policies that try to shift health care out of hospitals into the community or the use of hospital-in-the-home schemes (where patients receive hospital-type care at home with the support of visiting health staff and/or telehealth). The private health insurers are supporting both.
If a few small private hospitals close, this reflects the market adjusting to less demand for hospital care. Some of the closures have been for maternity wards but with falling birth rates, this also seems like an appropriate market adjustment.
Any objective data about what is happening in the private hospital sector is scarce. This is mainly because the Australian Bureau of Statistics has stopped a compulsory survey of all private hospitals. The latest data we have is from 2016–17.
Health insurers are the largest payer of private hospitals and hence wield a considerable amount of negotiating power. In 2016–17, almost 80% of private hospitals’ income came from private health insurers. Health insurers have also increasingly become “active” purchasers of health care – not just passively paying insurance claims, but wanting to strike a good deal with private hospitals for their members to keep premiums (and costs) down, and profits high.
Reports of hospitals closing ignore hospitals that are opening at the same time. But since 2016–17 there are no publicly reported data on the total number of private hospitals in Australia or changes over time.
The latest figures we have show about half of all hospitals in Australia are private, and of these 62% are for-profit with the rest run by not-for-profit organisations (such as St Vincent’s).
The main for-profit providers are Ramsay Health Care and Healthscope. Both have operations overseas and were in trouble before the COVID pandemic.
Fast-forward to 2024 and the recent issues with contract negotiations suggest the financial situation of for-profit private hospitals might not have improved. So this could reflect a long-term issue with the sustainability of the private hospital sector.
What are the options?
The private health system already receives large public subsidies. So the crux of the current debate is whether the government should intervene again to prop up the private sector. Here are some options:
Do nothing and let this stoush play out. Closures and mergers of private hospitals might be good if smaller hospitals and wards are no longer needed and patients have other alternatives.
Introduce more regulation. Negotiations between small groups of private hospitals and very large dominant private health insurers may not be efficient. If the insurers have significant market power they can force small groups of private hospitals into submission. Some private hospital groups may be negotiating with many different health insurers at the same time, which can be costly. Regulation of exactly how these negotiations happen could make the process more efficient and create a more level playing field.
Change how private hospitals are paid. Public hospitals are essentially paid the same national price for each procedure they provide. This provides incentives for efficiency: the price is fixed, so if their costs are below the price, they can make a surplus. Private hospitals could also be funded this way, which could remove much of the costs of contract negotiations with private hospitals. Instead, private hospitals would be free to focus on other issues such as the number and quality of procedures, and providing high-value health care.
How do we help private hospitals become more efficient? Regulating prices and contract negotiations are a start.
What next?
Revisiting the regulation of prices and contract negotiations between private hospitals and private health insurers could potentially help the private hospital sector to be more efficient.
Private health insurers are rightly trying to encourage such efficiencies but the tools they have to do this through contract negotiations are quite blunt.
As we wait for the results of the review into the private hospital sector, value for money for taxpayers is paramount. We are all subsidising the private hospital sector.
Anthony Scott has previously received funding from the Medibank Better Health Foundation.
Terence C. Cheng does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Surviving lung cancer in Aotearoa New Zealand could depend on whether you can access a GP – raising questions about equity in the country’s health system.
Our new research examines the outcomes for patients who are diagnosed with lung cancer through their GP versus those who are diagnosed at the emergency department (ED).
Examining 2,400 lung cancer diagnoses in Waikato between 2011 and 2021, we found those who are diagnosed with lung cancer after ED visits tended to have later-stage disease and poorer outcomes compared to those diagnosed after a GP referral.
We also found diagnosis after ED attendance was 27% higher for Māori than non-Māori and 22% higher for men than women.
These results raise important questions about health inequity in New Zealand and highlight the need to ensure everyone is able to access an early cancer diagnosis.
For those who are enrolled in a practice, the wait times for appointments are often such that the only option is to go to the ED for help.
This is especially true in rural areas where the hospital can become the default route to diagnosis.
Lung cancer is New Zealand’s single biggest cause of cancer deaths, with over 1,800 per year. Some 80% of those who are diagnosed with lung cancer present with advanced disease and very poor prospects of survival.
It’s also the cancer with the largest equity gap. The mortality rate for Māori with lung cancer is three to four times that of people of European descent.
While much of this disparity is due to differences in smoking rates among ethnic groups, there is also evidence that delays in diagnosis and poorer access to surgery are major influences on survival rates.
Identifying lung cancer
Lung cancer usually starts in the tissue lining the airways and symptoms can initially be relatively minor – some shortness of breath during exercise, a niggly cough or sharp pains while breathing.
Patients with these sorts of symptoms usually go to a GP to check whether this is something that needs further investigation.
But if someone cannot get an appointment, or does not recognise the symptoms as serious, then they are likely to delay taking action.
Advanced symptoms of lung cancer include coughing up blood or having lumps in the neck due to lymphatic spread of the cancer. People with these alarming symptoms tend to go to the hospital for treatment.
Our study confirms earlier findings that those diagnosed through the emergency department are:
more likely to have advanced disease
more likely to have a more aggressive type of cancer (called small cell cancer), and
substantially less likely to survive.
The median survival for those who never went to the ED was 13.6 months, while the median survival for those with one ED visit was just three months.
That said, attending an emergency department has some advantages. These include being seen by a doctor within a few hours, immediate access to x-rays and, in our major hospitals, access to the definitive diagnostic tool for lung cancer – a computed tomography (CT) scanner.
Our study found 25% of cases went to the ED two or more times in the two weeks before their diagnosis. This was especially true for those going to one of the Waikato rural hospitals, where a second or third visit was more likely before being diagnosed.
Barriers to care
It is clear New Zealand still has several barriers to primary care. This has led to an over-reliance on emergency departments for diagnosing cancer, despite the long-running faster cancer treatment targets.
The situation is unlikely to improve. Access to GPs is getting worse, in part due to increasing fees.
Māori and Pacific patients with lung cancer were less likely than other ethnic groups to have been enrolled with a primary health organisation when they were diagnosed. They were also less likely to have visited a GP in the three months prior to diagnosis.
Making it easier to see a GP
Making general practice care more accessible is the most effective way of addressing the inequities in our lung cancer statistics.
Currently, New Zealand has only 74 GPs per 100,000 people, compared to 110 in Australia.
It is clear we need to substantially increase the number of GPs. This is a long-term project but needs to be a strategic goal for the health sector.
In the meantime, we need to make primary care more accessible by increasing patient subsidies and reducing the direct patient costs to see a doctor. At the same time, we need to better equip GPs with access to diagnostic facilities, including in our rural hospitals.
Ross Lawrenson receives funding from NZ Health Research Council. He is an Honorary Fellow of the Royal New Zealand College of General Practitioners.
Chunhuan Lao receives funding from NZ Health Research Council.
Have you ever thought of an ankle sprain as a brain injury? Most people probably wouldn’t.
However, we are starting to understand how the brain is constantly adapting, a process known as plasticity.
Even though the damage of an ankle sprain happens at the ankle, there may also be changes going on in the brain to how well it senses pain or movement.
One of our doctoral students, Ashley Marchant, has shown something similar happens when we change how much weight (or load) we put on the muscles of the lower limb. The closer the load is to normal earth gravity, the more accurate our movement sense is; the lower the muscle load, the less accurate we get.
This work means we need to rethink how the brain controls and responds to movement.
One of the big issues in the treatment and prevention of sport injuries is that even when the sports medicine team feels an athlete is ready to return, the risk of a future injury remains two to eight times higher than if they’d never had an injury.
This means sports medicos have been missing something.
Our work at the University of Canberra and the Australian Institute of Sport has targeted sensory input in an attempt to solve this puzzle. The goal has been to assess the ability of the sensory reception, or perception, aspect of movement control.
Over 20 years, scientists have developed tools to allow us to determine the quality of the sensory input to the brain, which forms the basis of how well we can perceive movement. Gauging this input could be useful for everyone from astronauts to athletes and older people at risk of falls.
We can now measure how well a person gets information from three critical input systems:
the vestibular system (inner ear balance organs)
the visual system (pupil responses to changes in light intensity)
the position sense system in the lower limbs (predominantly from sensors in the muscles and skin of the ankle and foot).
This information allows us to build a picture of how well a person’s brain is gathering movement information. It also indicates which of the three systems might benefit from additional rehabilitation or training.
Lessons from space
You may have seen videos of astronauts, such as on the International Space Station, moving around using only their arms, with their legs hanging behind them.
The crew of the International Space Station have some fun with ‘synchronised space swimming’ in 2021.
The brain rapidly deactivates the connections it normally uses for controlling movement. This is OK while the astronaut is in space, but as soon as they need to stand or walk on the surface of the Earth or Moon, they are at greater risk of falls and injury.
Similar brain changes might be occurring for athletes due to changes in movement patterns after injury.
For example, developing a limp after a leg injury means the brain is receiving very different movement information from that leg’s movement patterns. With plasticity, this may mean the movement control pattern doesn’t return to an optimum pre-injury status.
As mentioned previously, a history of injury is the best predictor of future injury.
This suggests something changes in the athlete’s movement control processes after injury – most likely in the brain – which extends beyond the time when the injured tissue has healed.
Measures of how well an athlete perceives movement are associated with how well they go on to perform in a range of sports. So sensory awareness could also be a way to identify athletic talent early.
In older people and in the context of preventing falls, poor scores on the same sensory input perception measures can predict later falls.
This might be due to reduced physical activity in some older people. This “use it or lose it” effect might explain how brain connections for movement perception and control can degrade over time.
Precise health care
New technologies to track sensory ability are part of a new direction in health care described as precision health.
Precision health uses technologies and artificial intelligence to consider the range of factors (such as genetic make-up) that affect a person’s health, and to provide treatments designed specifically for them.
Applying a precision health approach in the area of movement control could allow much more targeted rehabilitation for athletes, training for astronauts and earlier falls prevention for older people.
Gordon Waddington owns shares in Prism Neuro Pty Ltd, a perceptual neuroscience ability measurement company. He receives funding from the Medical Research Futures Fund, Australian Research Council, NSW Institute of Sport, Queensland Academy of Sport and the Australian Institute of Sport.
Jeremy Witchalls receives funding from the NSW Institute of Sport and the Australian Institute of Sport.
Our new research details the history of raupō (bulrush) from the time before people arrived in Aotearoa. It shows this resilient, opportunistic plant – and taonga species – can play an important role restoring wetlands and freshwater quality.
An unexpected finding was that the decline of freshwater quality in many lakes did not really kick in until the mid-20th century with intensification of agriculture. Until then, lake water quality indicators generally showed these ecosystems remained healthy. The prolific expansion of raupō after Aotearoa was first settled may have helped.
Thriving on material washed from disturbed catchments, raupō acted as an ecological buffer, intercepting nutrients and sediments, and reducing potentially harmful effects on freshwater ecosystems.
From the mid-20th century, as water quality began to deteriorate, raupō populations – and any buffering effects – were generally in decline as wetlands and lake shallows were drained for grazing land and better access to water supply.
Lessons from this plant’s past can be put to good use today as we strive to bring back the mauri (life force) of our freshwater systems.
Survival strategies for hard times
Before settlement, when dense forest covered most of the country, raupō was surviving on the fringes. As a wetland plant, it likes its roots submerged, but needs light to grow.
Its preferred niche is the shallow margins of lakes, ponds and streams or nutrient-rich swamps. Before people, these places were much less common. Forests typically grew right up to the water’s edge and extended across some swamps.
Under these conditions, raupō evolved strategies for survival: aerated roots to cope with waterlogging; tiny, abundant seeds that spread far and wide on the wind; rhizomes (underground stems) that extend from the mother plant and store carbohydrates to keep the plant alive in lean times.
Raupō has several attributes that allow it to grow on disturbed land. 1. large, resilient structures; 2. small, wind-dispersed seeds; 3. long-lived seed bank; 4. flowers produce abundant pollen; 5. aerated roots; 6. rhizomes store energy over winter; 7. rhizomes anchor in substrate, trapping sediment; 8. aggressive clonal propagation; 9. floating rhizome mats. Author provided, CC BY-SA
Raupō can even build floating root mats, from sediment trapped by its rhizomes, that extend out across open water and even detach from the shoreline to become mobile raupō islands.
With these survival strategies, raupō could wait for better times which, in Aotearoa’s dynamic environment, duly arrived.
Episodic agents of disruption – storms, floods, earthquakes, landslides, volcanic ashfall – created opportunities. Local forest damage allowed light to penetrate to ground level, and slips and floods brought nutrient-rich sediment from soils.
Raupō would seize these opportunities to expand. But they were typically short-lived as the inevitable process of forest succession returned the environment to stability – and raupō back to a state of patient hibernation.
Hitting the jackpot
Then people arrived, with fire and hungry mouths to feed. This time, the disturbances persisted. Forest clearances endured, sediments rich in nutrients flooded wetlands and lakes, and raupō, supremely equipped for just this scenario, spread across swamps and lake shores as wildfires spread on land.
Our tūpuna (ancestors) observed this behaviour, as well as what was happening around raupō. Insects and birds were feeding and nesting. Freshwater fish, crays, shellfish and eel spawned among its fertile beds.
This new-found abundance also offered a range of resource opportunities. Raupō’s flax-like leaves were woven into mats, rope and string. Leaves and stems were used like thatch to cloak the roofs and walls of whare.
This graphic shows how raupō responded to environmental changes during the past millennium (upper panels), informed by pollen analysis of lake sediments (lower panels). Author provided, CC BY-SA
Traditional poi were often made from raupō leaves. Some iwi, particularly in the south, used the stems to build lightweight boats for navigating rivers and lakes. Flower stalks, shoots and young leaves were eaten, and the rhizomes and roots, when cooked, provided edible carbohydrates. The most cherished raupō kai, however, were cakes baked using the copious raupō pollen.
Unsurprisingly, for many iwi raupō remains a taonga species today, treasured for this array of resources and for its ecological and even spiritual roles in maintaining the mauri of freshwater habitats, upon which so much depends.
For some iwi, raupō are seen as kaitiaki (guardians) watching over a lake or wetland, and signalling its health. In these ways, raupō also connects us with other Indigenous communities. Although raupō is native to this country, the same species is found in Australia and parts of East Asia, while relatives in the genus Typha (Greek for marsh) occur naturally on all continents, except Antarctica.
Similar practices occurred wherever raupō and its relatives are found.
This connection between cultural and ecological roles is one of the fascinating findings from our research. We describe raupō as a “human-associated species”, not just because of its taonga status, but because its fate seems so closely linked to people.
More work needs to be done, but history tells us raupō has an important role in restoring the health of our freshwater ecosystems. Not only can it soak up nutrients and contaminants, but as both a native and taonga species it can assist remediation solutions that are ecologically and culturally supportive and sustainable.
This research was funded by the New Zealand Ministry of Business, Innovation and Employment research programmes – Our lakes’ health; past, present, future (C05X1707) and Our lakes, Our future (CAWX2305).
Last month, Republican presidential candidate Donald Trump delivered a one-hour address on the danger of illegal immigration to the United States. His stage was the US-Mexico border in Arizona and the set piece of his performance was the border wall.
The message was simple: with their border policy, Democrats have “unleashed a deadly plague of migrant crime”. Trump has ratcheted up the tensions on immigration further since then, repeating wild conspiracy theories about Haitian immigrants eating pets and, more recently, claiming migrants are “attacking villages and cities all throughout the Midwest”.
What the US needs, Trump has repeatedly stressed, is a closed border, a walled border.
A long history of wall-building advocacy
The US-Mexico border wall, which is currently around 700 miles in length in various stretches, has loomed large in American politics in recent decades, especially since the 2016 US presidential campaign. Yet, current stories about the wall mostly overlook its history.
Most importantly, the media ignore the long-standing appeal of the wall as a tool of spatial and cultural division in the making of the US-Mexico border.
In my forthcoming book, I trace the origin of the border wall to the early 1900s, when the US Immigration Service and other federal agencies called for the construction of barriers at the border.
Congress answered their appeal by adopting an act in 1935 that authorised the secretary of state to construct and maintain fences between the US and Mexico. For decades following its adoption, US officials stood before Congress almost yearly, asking for funding for the construction of border fences.
This trend culminated in the 1940s with two parallel projects: the Western Land Boundary Fence Project (576 miles or 926 kilometres of fencing from El Paso, Texas, to the west) and the Rio Grande Border Fence Project (415 miles or 668 kilometres of fencing along the Mexico-Texas border).
Neither one of these projects was ever fully realised. But if they had been built, they would have surpassed the length of the current border wall.
Immigration, disease and crime
What is telling when looking at the history is how similar the arguments supporting such fences in the early 1900s were to those deployed today. Immigration, disease and crime have been recurring justifications for the wall, both then and now.
Indeed, there is an uncanny likeness between Trump’s rhetoric surrounding the US-Mexico border — including during his August speech in Arizona — and the narratives justifying a border wall in the mid-20th century.
High on the list of justifications was the need to deter “juvenile delinquents”, “thieves”, “beggars”, undocumented workers, narcotic smugglers, “wetbacks” (a derogatory term for Mexicans), and Mexican nationals seeking medical care in the US at public expense.
These arguments appeared regularly in government reports and during congressional hearings from the 1930s to the late 1950s.
A 1934 report by the Immigration Service on the feasibility of a short border fence between El Paso and Ciudad Juárez, for example, said it would stifle illegal immigration that took employment opportunities from American workers, while lowering wages in the borderland area.
Reminiscent of recent analogies between the borderland and a “war zone”, the report noted that sending agents to patrol the border without proper equipment was pointless. It was akin to:
put[ting] a body of troops in the field in an enemy’s theatre of operation without artillery, observation planes, trucks, ammunition and other weapons.
The fence was “the correct solution to the problem.”
At times, the fear of the undocumented merged with the fear of contagion. A foot and mouth disease outbreak in Mexico in 1946, for example, provided additional rhetorical support for the wall. As Texas Senator Tom Connally said when the Committee on Foreign Relations considered the issue:
It has been a dream of the Department of State for many years to have this fence, not because of the hoof and mouth disease, but for immigration and customs and smuggling and all of that sort of thing.
Senator Tom Connally in 1938. Harris & Ewing photographs, via Wikimedia Commons
Persistent racial faultlines
The 1935 act has long been forgotten. In fact, by the end of the 1950s, only a few hundred miles of fencing had actually been built.
These earlier walling plans failed for a range of reasons, including opposition by Texan landowners and industries relying on illegal Mexican labour. Perhaps most importantly, there were serious reservations back then about the efficiency of fences in curbing immigration.
Yet these doubts have not weighed as heavily in contemporary debates about the border wall. This underscores the performative role of the wall in today’s politics.
In fact, close to 700 miles (1,126 kilometres) of fencing has been built under the Secure Fence Act of 2006. This includes large portions of the wall built under the presidency of Barack Obama and, to a lesser extent, Trump’s.
What has filtered through, however, is the racialised narrative that paints Mexican nationals in a disparaging way.
This rhetoric relied on generalisations and stereotypes on themes such as criminality, licentiousness and disease. It transformed Mexico into a threat to be curtailed and became a frame of reference that has permeated politics for decades – and is now a defining issue in the upcoming presidential election.
Marie-Eve Loiselle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation (Au and NZ) – By Jeff Bleich, Professorial fellow, Jeff Bleich Centre for Democracy and Disruptive Technologies, Flinders University
In countries with compulsory voting, such as Australia and many in Latin America, the system usually ensures an overwhelming majority of voters cast their ballots election after election.
In the United States, it’s a very different story. Two-thirds of eligible voters turned out to vote in the 2020 presidential election – the highest rate since 1900. Turnout in presidential elections before 2020 tended to hover between 50% and 65%.
Often, it’s the voters choosing to stay home on the couch who effectively decide an election’s outcome.
Under the United States’ unusual Electoral College presidential voting system, the candidate who wins the most votes nationally does not necessarily win the election. Twice in the past 25 years, Democrats have won the popular vote in the presidential race and still lost the election. That includes Donald Trump’s win over Hillary Clinton in 2016.
As such, victory depends on getting more voters “off the couch” in key battleground states where the decisive Electoral College votes are up for grabs. In those states, it doesn’t matter what percentage of people show up to vote, or how much a candidate wins by, it is winner take all.
A voter who doesn’t vote, therefore, actually makes an active choice — they remove a vote from the candidate they would have likely chosen, and so give an important advantage to the person they would not have voted for.
The “couch” is effectively where Americans go to vote against their self-interest.
Who is more incentivised to vote?
As this year’s presidential election between Trump and Kamala Harris approaches, we ask a simple question: whose “couch” will decide one of the most consequential elections in living memory?
Recent research demonstrates that partisanship is an important driver of voter choice in presidential elections.
The fact that the US is deeply divided is not news to most, but current survey data show how evenly split along partisan lines it actually is. With about 30% of Americans identifying as a Republican and 30% identifying as a Democrat, there is virtually no difference in the total number of voters who support each major party.
The remaining 40% of Americans identify as “independent” – that is, not loyal to either major political party. Almost seven decades of research on the American voter shows, however, that independents heavily “lean” towards one party or the other, with about half leaning Republican and the other half leaning Democrat.
One possible insight into which group has greater incentive to vote is polling on people’s dissatisfaction with their party’s candidate.
According to the most recent Gallup Poll data, 9% of Republicans currently have an unfavourable opinion of Trump. In contrast, only 5% of Democrats have an unfavourable opinion of Harris.
Partisan voters who are dissatisfied with their party candidate have a massive incentive to “stay on the couch” and refrain from voting. They don’t really want to vote for “the other team”, but they can’t stand their own team anymore either.
For example, Republican women in the suburbs, veterans and traditional Republicans have started to abandon Trump over his stances on reproductive rights and national security, and his temperament. The Trump campaign clearly knows this. At a rally in New York a few days ago, he told attendees to “get your fat ass out of the couch” to go vote for him.
Should these disaffected Republican and Republican-leaning voters stay home on November 5, Harris may well have a decisive edge over Trump.
When the couch wins, America loses
In 2016, Trump defied the polls and traditional voter turnout trends by convincing some disaffected, working-class Democrats to stay on the couch, vote for an unelectable third-party candidate or, in some cases, vote for him.
Could this happen again? Or will Democrats be able to reverse this phenomenon by getting exhausted Republicans suffering Trump fatigue to stay home, while motivating everyone from Taylor Swift fans to “never Trumpers” to veterans of foreign wars to get out to vote?
Recent trends suggest overall turnout will be comparatively high, in line with the past three federal US elections.
Democrats have traditionally benefited from higher voter turnout, but it is not as clear this is still the case in 2024. Recent research shows higher turnout rates seem to have favoured the Republican Party since 2016.
Yet both parties still have significant numbers of people who don’t vote. According to the Pew Research Center, 46% of Republicans and Republican-leaning independents didn’t vote in the past three elections (2018, 2020 and 2022), compared to the 41% of Democrats and Democratic-leaning independents.
So again, who sits on the couch matters. Inevitably, many of those who stay home will get precisely what they don’t want. When the couch wins, America loses.
Jeff Bleich is a former US ambassador to Australia and a member of the National Security Leaders for America, a group of 700 former generals, admirals, service secretaries, ambassadors and other national security professionals that has endorsed Kamala Harris in the presidential election. He was also special counsel to President Barack Obama and served as chair of the Fulbright Foreign Scholarship Board under President Donald Trump and as a member of President Joe Biden’s (non-partisan) National Security Education Board.
Rodrigo Praino receives funding from the Australian Research Council, the Australian Government Department of Defence, and SmartSat CRC.
Source: The Conversation – USA – By Andrew J. Hoffman, Professor of Management & Organizations, Environment & Sustainability, and Sustainable Enterprise, University of Michigan
The U.S. has seen a large number of billion-dollar disasters in recent years. AP Photo/Mark Zaleski
Millions of Americans have been watching with growing alarm as their homeowners insurance premiums rise and their coverage shrinks. Nationwide, premiums rose 34% between 2017 and 2023, and they continued to rise in 2024 across much of the country.
There are a few reasons, but a common thread: Climate change is fueling more severe weather, and insurers are responding to rising damage claims. The losses are exacerbated by more frequent extreme weather disasters striking densely populated areas, rising construction costs and homeowners experiencing damage that was once more rare.
Hurricane Ian, supercharged by warm water in the Gulf of Mexico, hit Florida as a Category 4 hurricane in September 2022 and caused an estimated $112.9 billion in damage. Ricardo Arduengo/AFP via Getty Images
Just a decade ago, few insurance companies had a comprehensive strategy for addressing climate risk as a core business issue. Today, insurance companies have no choice but to factor climate change into their policy models.
Rising damage costs, higher premiums
There’s a saying that to get someone to pay attention to climate change, put a price on it. Rising insurance costs are doing just that.
Increasing global temperatures lead to more extreme weather, and that means insurance companies have had to make higher payouts. In turn, they have been raising their prices and changing their coverage in order to remain solvent. That raises the costs for homeowners and for everyone else.
The importance of insurance to the economy cannot be overstated. You generally cannot get a mortgage or even drive a car, build an office building or enter into contracts without insurance to protect against the inherent risks. Because insurance is so tightly woven into economies, state agencies review insurance companies’ proposals to increase premiums or reduce coverage.
The insurance companies are not making political statements with the increases. They are looking at the numbers, calculating risk and pricing it accordingly. And the numbers are concerning.
The arithmetic of climate risk
Insurance companies use data from past disasters and complex models to calculate expected future payouts. Then they price their policies to cover those expected costs. In doing so, they have to balance three concerns: keeping rates low enough to remain competitive, setting rates high enough to cover payouts and not running afoul of insurance regulators.
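The pricing logic described above can be sketched as a simple expected-loss calculation. The numbers and the loading factor below are purely illustrative assumptions, not any insurer's actual model:

```python
# Illustrative expected-loss pricing sketch (hypothetical figures, not a
# real actuarial model). An insurer estimates the expected annual payout
# per policy from loss data, then adds a loading to cover expenses,
# profit and uncertainty.

def expected_annual_loss(claim_prob: float, avg_claim: float) -> float:
    """Expected payout per policy per year."""
    return claim_prob * avg_claim

def premium(claim_prob: float, avg_claim: float, loading: float = 0.3) -> float:
    """Price the policy to cover expected losses plus the loading factor."""
    return expected_annual_loss(claim_prob, avg_claim) * (1 + loading)

# Hypothetical baseline: a 1-in-50 chance of a $50,000 claim each year.
base = premium(claim_prob=0.02, avg_claim=50_000)     # 1,300.0

# If more extreme weather doubles the claim probability, the premium
# doubles with it.
riskier = premium(claim_prob=0.04, avg_claim=50_000)  # 2,600.0
print(base, riskier)
```

The sketch makes the article's point concrete: the premium scales directly with expected payouts, so rising claim frequency or severity feeds straight into what homeowners pay.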
But climate change is disrupting those risk models. As global temperatures rise, driven by greenhouse gases from fossil fuel use and other human activities, past is no longer prologue: What happened over the past 10 to 20 years is less predictive of what will happen in the next 10 to 20 years.
The number of billion-dollar disasters in the U.S. each year offers a clear example. The average rose from 3.3 per year in the 1980s to 18.3 per year in the 10-year period ending in 2024, with all years adjusted for inflation.
With that more than fivefold increase in billion-dollar disasters came rising insurance costs in the Southeast because of hurricanes and extreme rainfall, in the West because of wildfires, and in the Midwest because of wind, hail and flood damage.
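The "more than fivefold" figure follows directly from the averages cited above:

```python
# Average number of billion-dollar U.S. disasters per year
# (inflation-adjusted), from the figures cited in the text.
disasters_1980s = 3.3    # per year in the 1980s
disasters_recent = 18.3  # per year, 10-year period ending in 2024

increase = disasters_recent / disasters_1980s
print(round(increase, 1))  # → 5.5, i.e. more than fivefold
```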
Hurricanes tend to be the most damaging single events. They caused more than US$692 billion in property damage in the U.S. between 2014 and 2023. But severe hail and windstorms, including tornadoes, are also costly; together, those on the billion-dollar disaster list did more than $246 billion in property damage over the same period.
As insurance companies adjust to the uncertainty, they may run a loss in one segment, such as homeowners insurance, but recoup their losses in other segments, such as auto or commercial insurance. But that cannot be sustained over the long term, and companies can be caught by unexpected events. California’s unprecedented wildfires in 2017 and 2018 wiped out nearly 25 years’ worth of profits for insurance companies in that state.
To balance their risk, insurance companies often turn to reinsurance companies; in effect, insurance companies that insure insurance companies. But reinsurers have also been raising their prices to cover their costs. Property reinsurance alone increased by 35% in 2023. Insurers are passing those costs to their policyholders.
What this means for your homeowners policy
Not only are homeowners insurance premiums going up, coverage is shrinking. In some cases, insurers are reducing or dropping coverage for items such as metal trim, doors and roof repair, increasing deductibles for risks such as hail and fire damage, or refusing to pay full replacement costs for things such as older roofs.
Some insurance companies are simply withdrawing from markets altogether, canceling existing policies or refusing to write new ones when risks become too uncertain or regulators do not approve their rate increases to cover costs. In recent years, State Farm and Allstate pulled back from California’s homeowner market, and Farmers, Progressive and AAA pulled back from the Florida market, which is seeing some of the highest insurance rates in the country.
In some cases, insurers are restricting coverage. Roof repairs, like these in Fort Myers Beach, Fla., after Hurricane Ian, can be expensive and widespread after windstorms. Joe Raedle/Getty Images
State-run “insurers of last resort,” which can provide coverage for people who can’t get coverage from private companies, are struggling too. Taxpayers in states such as California and Florida have been forced to bail out their state insurers. And the National Flood Insurance Program has raised its premiums, leading 10 states to sue to stop them.
According to NOAA data, 2023 was the hottest year on record “by far.” And 2024 could be even hotter. This general warming trend and the rise in extreme weather is expected to continue until greenhouse gas concentrations in the atmosphere are abated.
In the face of such worrying analyses, U.S. homeowners insurance will continue to get more expensive and cover less. And yet, Jacques de Vaucleroy, chairman of the board of reinsurance giant Swiss Re, believes U.S. insurance is still priced too low to fully cover the risk from climate change.
Andrew J. Hoffman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Just like people confronted with a sea of options at the grocery store, bees foraging in meadows encounter many different flowers at once. They must decide which ones to visit for food, but it isn’t always a straightforward choice.
Flowers offer two types of food: nectar and pollen, which can vary in important ways. Nectar, for instance, can fluctuate in concentration, volume, refill rate and accessibility. It also contains secondary metabolites, such as caffeine and nicotine, which can be either disagreeable or appealing, depending on how much is present. Similarly, pollen contains proteins and lipids, which affect nutritional quality.
When confronted with these choices, you’d think bees would always pick the flowers with the most accessible, highest-quality nectar and pollen. But they don’t. Instead, just like human grocery shoppers, their decisions about which flowers to visit depend on their recent experience with similar flowers and what other flowers are available.
I find these behaviors fascinating. My research looks at how animals make daily choices – especially when looking for food. It turns out that bees and other pollinators make the same kinds of irrational “shopping” decisions humans make.
Predictably irrational
Humans are sometimes illogical. For instance, someone who wins $5 on a scratch ticket immediately after winning $1 on one will be thrilled – whereas that same person winning $5 on a ticket might be disappointed if they’re coming off a $10 win. Even though the outcome is the same, perception changes depending on what came before.
Perceptions are also at play when people assess product labels. For instance, a person may expect an expensive bottle of wine with a fancy French label to be better than a cheap, generic-looking one. But if there’s a mismatch between how good something is and how good someone expects it to be, they may feel disproportionately disappointed or delighted.
Research shows bumblebees and humans share many of these behaviors. A 2005 study found bees evaluate the quality of nectar relative to their most recent feeding experience: Bees trained to visit a feeder with medium-quality nectar accepted it readily, whereas bees trained to visit a feeder with high-quality nectar often rejected medium-quality nectar.
My team and I wanted to explore whether floral traits such as scents, colors and patterns might serve as product labels for bees. In the lab, we trained groups of bees to associate certain artificial flower colors with high-quality “nectar” – actually a sugar solution we could manipulate.
The bumblebee colony, right, is attached by tunnel to the foraging arena, left, where colored discs serve as artificial flowers. Claire Hemingway, CC BY-SA
For example, we trained one group to associate blue flowers with high-quality nectar. We then offered that group medium-quality nectar in either blue or yellow flowers.
We found the bees were more willing to accept the medium-quality nectar from yellow flowers than they were from blue. Their expectations mattered.
In another recent experiment, we gave bumblebees a choice between two equally attractive flowers – one high in sugar concentration but slower to refill, and one quick to refill but containing less sugar. We measured the bees’ preference between the two and found it was roughly equal.
At the center of each artificial flower is a tube the bee enters to access the sugar solution. Claire Hemingway, CC BY-SA
We then expanded the choice by including a third flower that was even lower in sugar concentration or even slower to refill. We found that the presence of the new low-reward flower made the intermediate one appear relatively better.
These results are intriguing and suggest, for both bees and other animals, available choices may guide foraging decisions.
Potential uses
Understanding these behaviors in bumblebees and other pollinators may have important consequences for people. Honeybees and bumblebees are used commercially to support billions of dollars of crop production annually.
If bees visit certain flowers more in the presence of other flowers, farmers could use this tendency strategically. Just as stores stock shelves to present unattractive options alongside attractive ones, farmers could plant certain flower species in or near crop plants to increase visitation to the target crops.
Claire Therese Hemingway is affiliated with The Smithsonian Tropical Research Institute.
Four years ago, in an attempt to overturn his loss in the 2020 presidential election, then-President Donald Trump and his surrogates furiously challenged its results. Lodging 63 lawsuits, they tried to discredit or override vote counting, election processes and certification standards in nine states.
None of these attempts was successful. Many were dismissed as baseless – often by Trump-appointed judges – before they even saw trial. Simply put, there is no evidence of widespread fraud. Even a voter data expert hired by Trump concluded that the 2020 election was not stolen.
The U.S. legal system agreed, demonstrating that courts remain an important bulwark protecting American democracy. Yet the legal system cannot prevent political violence wrought by election denialism, as the country soon learned.
The mob that stormed the U.S. Capitol on Jan. 6, 2021, was spurred, at least in part, by Trump’s rousing speech at a rally in Washington, D.C., earlier that day. There, he reiterated his claims that the 2020 election had been “stolen by emboldened radical-left Democrats” and warned the crowd of approximately 53,000 that “if you don’t fight like hell, you’re not going to have a country anymore.”
Many legal scholars considered this to be incitement.
“He clearly knew there were people in that crowd who were ready to and intended to be violent,” legal scholar Garrett Epps told the BBC. “He not only did nothing to discourage it, he strongly hinted it should happen.”
Trump: A sore loser … and winner
Trump has a long history of denying the results of any contest whose outcome he does not like.
Before entering the political arena, Trump called the 2012 Emmys “dishonest” because his show, “The Apprentice,” did not win. In 2012, he dismissed then-President Barack Obama’s reelection as a “total sham” and questioned the accuracy of vote tallies and voting machines. Unleashing a barrage of tweets, Trump urged citizens to “fight like hell” against a “disgusting injustice.”
As a presidential candidate in 2016, Trump called the Republican primaries fraudulent after his competitor Sen. Ted Cruz won in Iowa, tweeting that the Texan “stole it.”
Trump has doubled down on his election denial this election cycle. By May 2024, The New York Times had documented 550 such statements, up from roughly 100 in the entire 2020 campaign.
This narrative of pervasive victimization has been bolstered by a flurry of lawsuits and criminal investigations brought against the former president. Since 2020, state and federal prosecutors have charged Trump with 94 crimes, including business fraud, mishandling classified documents and interfering with the federal election.
More than 350 times, Trump has cast these legal challenges as a deliberate attempt by President Joe Biden to interfere with the 2024 election.
“My legal issues, every one of them, civil and the criminal ones, are all set up by Joe Biden,” Trump told a New York City crowd in January 2024. “They’re doing it for election interference.”
His surrogates amplify this message. For instance, Mike Howell, director of the right-leaning Heritage Foundation’s Oversight Project, proclaimed on June 6, 2024, at a public Washington event that there is a “0% chance of a free and fair election.”
From denialism to violence: Warning signs
Lying about election results is no mere tantrum. It is a cornerstone of Trump’s strategy to paint himself as the victim of an elitist deep state – an image that appeals to his base, particularly among white working-class voters, some of whom feel that they are victims themselves of globalization and shadowy elites.
I fear little can be done to prevent such violence.
In 2022, Congress, acting in rare bipartisan fashion, approved the Electoral Count Reform and Transition Improvement Act of 2022, which closed many doors that President Trump attempted to use to thwart the 2020 election. Yet, as history shows, rule of law is not a certain brace against violence.
Given the perceived stakes of the election for most Americans, along with Trump’s ever-sharpening incendiary rhetoric, it is hard to imagine that Jan. 6, 2021, was an isolated chapter in American history.
Indeed, it may have been just a prelude.
Alexander Cohen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Democratic vice presidential nominee Tim Walz thanks supporters after serving ice cream at the Minnesota State Fair on Sept. 1, 2024. Stephen Maturen/Getty Images
Since Democratic presidential nominee Kamala Harris selected Tim Walz as her running mate in August 2024, political commentators have offered various takes on Walz – is he pragmatic or progressive, centrist or radical, a grassroots lefty or a mainstream Democrat?
Walz will have a chance to speak directly to voters and possibly explain who he is and what he stands for when he debates Republican contender JD Vance on Oct. 1, 2024.
I am a scholar of populist politics in North America, and I understand why it is difficult to define how Walz fits within the Democratic Party.
On the one hand, Walz is a shock to the Democratic Party, which often endorses elite-educated, moderate politicians from the country’s two coasts. Walz is a former public school teacher who graduated from a state college in Nebraska – and he is not afraid to embrace the moniker of a “progressive,” which some Democrats reject in order to avoid false comparisons to socialists.
As Walz said in an August 2024 donor call for Harris: “Don’t ever shy away from our progressive values. One person’s socialism is another person’s neighborliness.”
Yet, Walz is unlike many other progressives in the Democratic Party. He is a gun owner and a hunter – and was one of the “best shots in Congress” when he represented Minnesota in Washington, as he will remind people. He uses sports metaphors to convey his messages, rallying Democrats behind a “fourth quarter” comeback in the election, for example.
Yet these apparent contradictions make sense when considering that Walz follows a rich lineage of Midwestern progressive politics that starts with the Minnesota Democratic-Farmer-Labor Party, a state affiliate of the Democratic Party that maintains the traditions and values of populist farmer politics in the American Midwest.
The Minnesota Democratic-Farmer-Labor Party is one of the first major recognized political parties in the state. It began more than 100 years ago as a form of populist protest to the harm industrialization and urbanization brought to rural farmers at the turn of the 20th century.
In the late 1800s, political movements like the Grangers and the Farmers’ Alliances organized to bring attention to falling crop prices, increases in railroad fees for transporting crops and the monopolization of agribusiness.
In Minnesota, these farmer protest groups joined forces with American labor unions to build a third-party alternative to the Democrats and Republicans. This new group, known as the Farmer-Labor Party, formed in 1918 as a way to represent rural people’s interests. The Farmer-Labor Party challenged state officials to legalize union protections and offer farmer subsidies, and unsuccessfully tried to place private utilities and natural resource industries under state control.
The Farmer-Labor Party was ideologically diverse – sometimes to a fault – and brought together a range of activists, even socialists, under the common goal of protecting working people. In 1936, the Farmer-Labor Party’s momentum captured President Franklin D. Roosevelt’s attention, and it became a key member of his New Deal coalition.
For most of the 1920s and 1930s, Farmer-Labor challenged the Democratic Party with its more progressive ideas. However, under the guidance of future vice president Hubert Humphrey, the party merged in 1944 with the more moderate Minnesota Democratic Party to form the Minnesota Democratic-Farmer-Labor Party.
Over the next several decades, the Democratic-Farmer-Labor Party pushed for pragmatic and progressive politics within the state’s Democratic Party. The movement’s grassroots message has centered around protecting the country’s rural backbone.
Influential Minnesotan politicians – including U.S. Sen. Paul Wellstone, who championed environmentalism and walked the picket lines with Midwestern laborers before he died in 2002 – have been members of the party.
The ideas behind Farmer-Laborism
Today, the Minnesota Democratic-Farmer-Labor Party shares many of its platforms and policy positions with the national Democratic Party.
But Farmer-Labor politics are distinct in how the party has embraced a Midwestern working-class identity and rallied against monopolies, business elites and corrupt government.
Among Midwestern state-level political parties – a group that includes the likes of the Libertarian Party of Minnesota – Farmer-Labor is one of the most progressive and successful. The party has helped pass recent progressive legislation, like a public option health plan and a universal free school lunch policy.
Walz’s predecessors in the Farmer-Labor movement have also successfully spoken out against economic and political injustices from a position within working-class and agrarian communities. Like Walz, this movement took a populist stance against political and economic elites.
This Farmer-Labor tradition, in many ways, is a foil to the conservative populism that is popular today. Unlike Trump’s appeal to middle America, this Minnesota brand of populism was not an attempt to save white Christian manhood. Instead, it was a genuine recognition that working people – especially those in middle America – needed to actively push back against economic inequality and forces that threatened the middle class.
For some people, Walz and the Democratic-Farmer-Labor Party are still hard to situate within the national Democratic Party.
This is in part because the Democratic Party has sidelined rural and working-class voters over the past few decades. In 2016, the Democratic Party made the strategic mistake of not focusing enough on the Midwest – and Democratic presidential nominee Hillary Clinton lost the Electoral College in important Midwestern states, including Wisconsin and Michigan.
In the 2024 election, the Democratic Party is presenting voters with Walz, who can speak to the American dream from a familiar perspective. Walz embraces unions beyond lip service, chastises corporate greed and does not shy away from rural voters even if they have cultural differences.
American voters said in September that they view Walz slightly more favorably than Republican contender JD Vance, though they say that they don’t know either candidate well. The debate should offer voters a chance to learn more about the popular Minnesota governor.
Conservatives, meanwhile, have tried to paint Walz as someone whose progressive politics challenge the culture of rural American life. I’d argue that the truth is far from that. Instead, like the Democratic-Farmer-Labor Party and some of the rural activists it produced, Walz is trying to uncouple small-town politics from the politics of fear and cultural isolation.
Gabriel Paxton does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Just when the summer uproar over Donald Trump calling his potential rival “Laffin’ Kamala” and “Cackling Copilot Kamala Harris” was beginning to subside, an apparent new round of attacks by Trump and other Republicans has emerged after their initial U.S. presidential debate.
The target – again – was Kamala Harris’ laugh.
Three days after the debate, for instance, Bruce Zuchowski, an Ohio sheriff, posted on his Facebook account that Harris was a “laughing hyena.” Zuchowski was subsequently barred from providing election security during in-person voting.
This is not surprising, given that Harris’ laughter was on full display during much of the nationally televised debate – and, worse, Trump was clearly the object of her unrelenting derision.
Much has been written already about the sexism and racism behind Trump’s contempt for Harris’ laugh.
Ralph Ellison’s essay “An Extravagance of Laughter,” published in his 1986 collection “Going to the Territory,” still offers useful historical racial context for explaining Trump’s animus toward Harris. Among the stories Ellison tells: Black people once had to put their heads in a barrel to laugh because their laughter unnerved white Southerners.
The dangers of Black laughter
Best known for his 1952 novel “Invisible Man,” Ellison was one of America’s foremost social critics who confronted racism and white supremacy by telling the stories of alienation among everyday Black people searching for identity in a nation that deemed them inferior.
In “An Extravagance of Laughter,” Ellison began with an anecdote about attending a theater adaptation of Erskine Caldwell’s novel “Tobacco Road” in New York City in 1936. The popular play detailed the lives of destitute white sharecroppers during the Great Depression. The sharecroppers feared, among other things, losing their social status by dropping below the lower rung reserved for Black people in America.
While laughing uncontrollably at a comical scene in the play involving the antics of poor white Georgia farmers, Ellison became aware of the stir he was causing among the predominantly white audience.
For many white Americans, Black laughter was “a peculiar form of insanity suffered exclusively by Negroes, who in light of their social status and past condition of servitude were regarded as having absolutely nothing in their daily experience which could possibly inspire rational laughter,” Ellison explained.
As Ellison saw it, his laugh during the play was being construed as an affirmation of the Black buffoon stereotype.
As he described it, the white spectators were “catching fire and beginning to howl and cheer the disgraceful loss of control being exhibited” by a Black man.
Later in the essay, Ellison lampoons the use of “laughing barrels” in Southern towns, which he described as “huge whitewashed barrels labeled FOR COLORED, and into which any Negro who felt a laugh coming on was forced … to thrust his boisterous head.”
The intent of suppressing Black laughter, Ellison explained, was pro bono publico, or for the public good.
While the idea of the barrels may seem utterly ridiculous, Ellison understood them as an absurd strategy of containment for a not-so-absurd fear in post-Reconstruction and Jim Crow white America, when racial segregation was legal.
Black folks who laugh “turned the world upside down and inside out,” he explained.
And in so doing, Ellison wrote, Black laughter “in-verted (and thus sub-verted) tradition and thus the preordained and cherished scheme of Southern racial relationships was blasted asunder.”
In a 1983 letter celebrating Caldwell’s birthday, Ellison thanked the writer – “by giving artistic sanction to a source of comedy which in the interest of self-protection I had been forced to deny myself you had released me from three turbulent years of self-restraint.”
Flipping the script on who gets to laugh
The first time Trump found himself the object of Black laughter was during the 2011 White House correspondents’ dinner, where he was publicly and mercilessly roasted by a gleeful Barack Obama. The experience appeared to humiliate and infuriate Trump and is widely seen by political pundits as the catalyst for Trump’s entrance into the 2016 presidential race.
It is not surprising, then, to see his campaign resurrect the rhetoric that many deem to be racist to erode public confidence in Harris’ fitness for the office.
In this Harper’s Weekly cartoon published in 1874, two Black legislators are arguing in front of their white colleagues. Fotosearch/Getty Images
These attacks hearken back to the long and shameful history of racist characterizations of Black Americans as menaces to society, ranging from the depictions of unruly, newly emancipated Black men holding public office in D.W. Griffith’s 1915 “The Birth of a Nation” to Trump’s public call for the death penalty for the Black and Hispanic teens known as the Central Park Five in a full-page New York Times ad in 1989.
In that case, the teen boys were falsely accused of the brutal assault of a white New York jogger. They served years in prison before being exonerated by DNA and the confession of a convicted rapist and murderer.
Harris is widely regarded by political commentators as the winner of the debate, and the lasting impression is that of a glowering Trump repeatedly failing to put a stop to Harris’ mirthful expressions of incredulity.
Almost a century has passed since Ellison’s disruptive laugh occurred in a New York theater in 1936. In that time, both Obama and Harris have reordered traditional gender and racial norms by using Black laughter in the very public theater of U.S. presidential politics.
Betsy Huang does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Rabies is a deadly disease. Without vaccination, a rabies infection is nearly 100% fatal once someone develops symptoms. Texas has experienced two rabies epidemics in animals since 1988: one involving coyotes and dogs in south Texas, and the other involving gray foxes in west central Texas. Affecting 74 counties, these outbreaks led to thousands of people who could have been exposed, two human deaths and countless animal lives lost.
In 1994, Gov. Ann Richards declared rabies a state health emergency. The Texas Department of State Health Services responded by launching the Oral Rabies Vaccination Program to control the spread of these wildlife rabies outbreaks.
Since 1995, the program has distributed over 53 million doses of rabies vaccine over 758,100 square miles (nearly 2 million square kilometers) in Texas by hand or aircraft. Rabies cases in dogs and coyotes went from 141 to 0 by 2005, and rabies cases in foxes went from 101 to 0 by 2014. By 2004, one canine rabies variant was effectively eliminated from Texas, and another variant was substantially controlled.
We are researchers who began studying wildlife rabies and oral vaccination in the 1980s. From providing a proof of concept in using oral vaccines in raccoons to being among the first to use new rabies vaccines in the 1990s, we were on the ground floor of efforts to contain this deadly virus.
Decades of vaccine research led to one of the most successful public health projects in Texas. And we’re hopeful it could provide a road map for the use of mass wildlife vaccination to prevent future outbreaks.
Developing the oral rabies vaccine
The Texas Oral Rabies Vaccination Program benefited greatly from the work of multiple researchers over prior decades.
The mid-20th century saw several major developments in rabies control. With the failure of efforts to poison or trap infected animals, virologist and veterinarian George Baer at the U.S. Centers for Disease Control and Prevention recognized the need for a different strategy to prevent and control wildlife rabies. His and his colleagues’ work in the 1960s led to the concept of oral rabies vaccination. While orally vaccinating wildlife would help combat infection at its source, it was previously thought to be logistically unfeasible given the large range of target animals.
By the late 1970s, European researchers began the first field trials to orally vaccinate foxes against rabies. Small plastic containers were filled with vaccines and placed into baits, such as chicken heads. Over 50,000 of these vaccine-laden baits were distributed over four years in fox habitats in forests and fields.
Researchers in Canada also began similar field trials in Ontario. During the 1980s, an average of 235 rabid foxes per year were reported in the area. Baits containing oral rabies vaccine were dropped annually from 1989 to 1995 and successfully eliminated the fox variant of rabies from the whole area.
Recombinant oral rabies vaccine
The first generation of these vaccines used live viruses modified in an attempt to not cause severe disease. Although effective and generally safe, the original rabies vaccines had to be kept in cool temperatures and had the rare risk of causing rabies in animals.
In the early 1980s, scientists developed recombinant rabies vaccines, which use a separate virus to express the genes of the rabies virus. A collaboration between a nonprofit institute, the U.S. government, and the pharmaceutical industry led to the development of a recombinant viral vaccine that produced a rapid immune response against rabies without the possibility of causing rabies.
In 1984, preliminary work in laboratory animals showed the promise of using an oral form of the recombinant vaccine to vaccinate animals. However, the concept of using genetically modified organisms was in its infancy among both scientists and the general public. While the vaccine was safe and effective in captive raccoons and foxes, major questions loomed over how it might affect other species once released into the environment.
After years of work improving the vaccine’s design and testing its safety in several nonhuman species, the first European trial was held on a military base in Belgium. With data showing it could safely and effectively control wildlife rabies in Luxembourg and France, the vaccine was licensed to control fox rabies in 1995.
In the U.S., similar studies of the oral recombinant rabies vaccine were conducted. The first trial began in 1990 at Parramore Island off the Virginia coast, and a year of intensive monitoring found no significant adverse effects on the environment or any wildlife species. A second yearlong study on the mainland near Williamsport, Pennsylvania, had similarly positive results.
After the vaccine was successfully used to control raccoon rabies in tests in several other East Coast states, it was approved for use on raccoons in 1997.
In 1998, the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service and the U.S. Fish and Wildlife Service received funding to expand existing oral wildlife vaccination projects to states of strategic importance, to prevent the spread of specific rabies viruses, and to coordinate interstate projects.
Results in Texas
In Texas, the oral recombinant vaccine is now primarily distributed by hand and by approximately 75 separate helicopter flights annually.
The Texas Department of State Health Services rabies laboratory worked alongside the CDC to create the Regional Rabies Virus Reference Typing Laboratory. One of us was recruited to both distribute the vaccine in the field and to develop molecular typing tools to discriminate between different types of rabies virus variants in the lab. These techniques allowed us to identify where different rabies virus variants were emerging at any given moment.
The Texas Oral Rabies Vaccination Program continues to monitor and control rabies cases in the state.
Our lab was also the first in the nation outside of the CDC to assist other U.S. states and countries in testing their specimens for rabies virus variants. These techniques helped researchers monitor where the rabies epizootic was ongoing or retreating due to wildlife vaccination and new forms of spread.
With the constant threat of emerging and reemerging infectious diseases like COVID-19 and influenza, the prospect of mass vaccination of wild animals may be one way to address future pandemics. Though there is much work ahead of us, we have hope that we may one day have the option of using mass wildlife vaccination to reduce or eliminate infectious diseases like rabies.
Rodney E. Rohde has received funding from the American Society of Clinical Pathologists, American Society for Clinical Laboratory Science, U.S. Department of Labor (OSHA), and other public and private entities/foundations. Rohde is affiliated with ASCP, ASCLS, ASM, and serves on several scientific advisory boards.
Charles E. Rupprecht consults for global academic, governmental, industrial and NGO organizations. He receives funding from academic, governmental, industrial, and NGO sources.
As the climate warms, the southwestern U.S. is increasingly experiencing weather whiplash as the region swings from drought to flooding and back again. As a result, the public is hearing more about little-known infectious diseases, such as valley fever.
In May 2024, about 20,000 people attended a music festival in Buena Vista Lake, California. In the months that followed, at least 19 developed valley fever, and eight were hospitalized from their infection. This outbreak follows a dramatic increase of more than 800% in valley fever infections in California between 2000 and 2018.
In 2023, California reported the second-highest number of valley fever cases on record, with more than 9,000 cases reported statewide. And between April 2023 and March 2024, California provisionally reported 10,593 cases – 40% more than during the same period the prior year.
The Conversation U.S. asked Jennifer Head, Simon Camponuri and Alexandra Heaney – researchers specializing in the epidemiology of valley fever – to explain what valley fever is, and what might explain its rise in recent years.
What is valley fever, and how do you get infected?
Valley fever is an infection caused by Coccidioides, a fungus found in the soils of the southwestern U.S. When the fungus has access to moisture and nutrients, it grows long, branching fungal chains throughout the soil. When the soil dries out, these chains fragment to form fungal spores, which can be stirred up into the air when the soil is disturbed, such as by wind or digging. Airborne spores can then be inhaled and cause a respiratory infection.
Cases of valley fever are typically highest in California’s southern San Joaquin Valley and southern Arizona, but they have been increasing outside of these regions. Between 2000 and 2018, the incidence of valley fever cases increased fifteenfold in the northern San Joaquin Valley and eightfold along the Southern California coast. And between 2014 and 2018, incidence increased by more than eightfold along the central coast.
Because of these trends and the virulence of the pathogen that causes valley fever, it is listed as a priority pathogen by the World Health Organization. Historically, fungal infections have received very little attention and resources. By creating this list, the WHO is hoping to galvanize action surrounding listed pathogens, including getting more resources for research as well as the development of new treatments.
What are the symptoms, and what should people be looking for?
After inhaling fungal spores from the environment, Coccidioides initially infects the lungs, causing symptoms like mild to severe cough, fever, difficulty breathing, chest pain and tiredness. Valley fever symptoms can resemble other common respiratory infections, so it’s important for people to get checked by a doctor if they’ve experienced prolonged symptoms, particularly if they have been given antibiotics that they are not responding to.
In California and Arizona, an estimated one-third of community-acquired pneumonia cases – or pneumonia acquired outside of the hospital – are caused by valley fever. However, only a fraction of community-acquired pneumonia cases get tested for it, so it’s likely the number of valley fever cases is significantly higher. Among diagnosed cases, half experienced symptoms for two months or more before being diagnosed.
In 5% to 10% of cases, the fungus can spread from the lungs to other parts of the body, such as the central nervous system, liver and bones, causing meningitis or arthritis-like symptoms. These cases can be severe and possibly fatal.
Jose Epifanio Sanchez Trujeque of Lebec, Calif., spent four months in the hospital after contracting valley fever in 2023. The Washington Post/Getty Images
What time of year should you be most concerned?
Valley fever cases can occur year-round, but in California, cases reported via surveillance systems tend to increase starting in August and September, peak in November and return to background levels in January and February.
Researchers believe that patients are likely exposed to the fungus in the summer and early fall months, typically one to three months prior to their diagnosis. This delay accounts for time between when patients are exposed, develop symptoms and are diagnosed with the disease. While cases peak in the fall on average, seasonal strength and timing varies regionally.
California’s swing from severe drought to a wet 2022-2023 winter was followed by a near-record spike in cases in 2023. The state experienced another wet winter during the 2023-2024 wet season, furthering concern about continued high risk for valley fever in 2024.
Our research team recently developed a model to forecast valley fever cases that will occur between April 2024 and March 2025 in California. We forecast that the state is likely to see another spike in cases during the fall and winter of 2024, on par with the spike in 2023.
During high-risk periods, clinicians should consider valley fever as a potential diagnosis. This is especially true when evaluating a patient presenting with valley fever symptoms or a respiratory illness who lives in, works in or traveled to an endemic or emerging region.
We are currently working to characterize seasonal disease patterns in Arizona as well, which are different from California’s. This is likely because Arizona has two rainy seasons.
Are some people at greater risk than others?
Those who spend time or work outdoors in areas where valley fever is common, especially where they may be exposed to dirt and dust, are more likely to get it.
While healthy people are still at risk of infection, certain factors can increase the likelihood of developing severe disease from valley fever. These include being an adult 60 years or older, having diabetes, HIV or another condition that weakens the immune system, or being pregnant. People who are Black or Filipino also have been noted to have a higher risk of severe disease, which may relate to more exposure to the fungal spores, underlying health conditions, inequities in accessing care or other possible predispositions.
How can you protect yourself from getting valley fever?
People who live and work in the regions where the fungus is found should avoid exposure to dust as much as possible. When it is windy outside and the air is dusty, stay indoors and keep windows and doors closed.
When driving through a dusty area, limit vehicle speed, keep car windows closed and recirculate the air, if possible. When working outdoors, use dust suppression techniques, including wetting soil before digging to prevent stirring up dust, and installing fencing, windbreaks and vegetation where possible.
For those who must directly stir up soil or be in dusty conditions, such as while doing construction or gardening work, consider using an N95 mask to limit dust inhalation.
Jennifer Head receives funding from the National Institute of Allergy and Infectious Diseases (NIAID) of the National Institutes of Health.
Alexandra K. Heaney receives funding from the National Institute of Allergy and Infectious Diseases (NIAID) of the National Institutes of Health.
Simon Camponuri receives funding from the National Institute of Allergy and Infectious Diseases (NIAID) of the National Institutes of Health and from the National Institute for Occupational Safety and Health (NIOSH) of the Centers for Disease Control and Prevention.
Delving into the presidential candidates’ successes on a number of drug-pricing policies, you’ll see a continuation of progress across the administrations. Neither the Trump administration nor the Biden-Harris administration, however, has done anything to truly lower drug prices for the majority of Americans.
$35 insulin
Insulin is a necessity for patients with diabetes. But from January 2014 to April 2019, the average price per unit went from US$0.22 to $0.34 before dropping back slightly by July 2023 to $0.29 per unit. Since dosing is weight-based, insulin costs for someone weighing 154 pounds would have risen from $231 to $357 a month from 2014 to 2019 and dropped to $305 a month by 2023. Price increases have led some patients to space out their medications by taking less than the dose they need for good blood sugar control. One study estimated that over 25% of patients in an urban diabetes center were underusing their insulin.
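The monthly figures above can be roughly reproduced from the per-unit prices. The sketch below assumes about 1,050 units per month for a 154-pound (70 kg) adult, which corresponds to a typical weight-based dose of 0.5 units per kilogram per day; that dose is an illustrative assumption, not a dosing recommendation or a figure from the article.

```python
# Rough check of the monthly insulin cost figures above.
# Assumption: ~1,050 units/month for a 154-lb (70 kg) adult,
# i.e. 0.5 units/kg/day -- illustrative only.
units_per_month = 70 * 0.5 * 30  # = 1,050 units

for year, price_per_unit in [(2014, 0.22), (2019, 0.34), (2023, 0.29)]:
    monthly_cost = units_per_month * price_per_unit
    print(f"{year}: about ${monthly_cost:.0f} per month")
```

Under that assumption, the computed costs land within a dollar of the article's $231, $357 and $305 monthly figures.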
In July 2020, the Trump administration enacted a $35 cap on insulin copayments via executive order. In effect, it made participating Medicare Part D programs limit the price of just one of each type of insulin product to $35. For instance, if there were six short-acting insulin products on an insurance plan’s approved drug list, the insurer had to offer one vial form and one pen form at $35.
In August 2022, the Biden-Harris administration signed the Inflation Reduction Act into law. This maintained the $35 insulin cap with the same stipulations but made the program mandatory for all Medicare Part D and Medicare Part B members. This expanded the number of people who could benefit from cheaper insulin to 3.3 million.
This still doesn’t help a majority of diabetics. If you don’t have Medicare, the $35 reduction does not apply to you. Furthermore, pharmaceutical companies are not responsible for lowering insulin costs under these policies, but health plans are on the hook for lowering copayments. Costs could be passed along to beneficiaries in future Medicare premiums.
Importing Canadian drugs
Americans pay nearly 2.6 times more for prescription drugs than people in other high-income countries. One way regulators have tried to reduce prices is to simply import drugs at the prices pharmaceutical companies charge those countries rather than those charged to U.S. consumers.
In July 2019, the Trump administration proposed importing drugs from Canada as a way to share Canadians’ lower drug costs with American consumers. President Donald Trump signed an executive order allowing the Food and Drug Administration to create the rules under which states could import the drugs. When President Joe Biden came into office, he left the executive order in place and the rulemaking process continued.
No state under the Trump or Biden-Harris administrations has yet been able to successfully import a Canadian drug product. In January 2024, however, the Food and Drug Administration approved Florida’s plan to import Canadian drugs, the first state to receive the green light. Colorado, New Hampshire, New Mexico and Texas have applications pending as of September 2024.
Unfortunately, it is unlikely that Canada would allow its prescription drugs to be shipped in large quantities to American consumers without imposing high tariffs as a disincentive. That is because drug manufacturers could limit supplies to Canada and cause shortages if drugs are diverted to the U.S. Manufacturers could also be less willing to negotiate lower prices for Canadians if doing so would hurt U.S. profits.
Negotiating with the pharmaceutical industry
Be it prescription drugs or cars, both buyer and seller must agree on a price for a successful sale to occur. A potential buyer who is unwilling to walk away from negotiations will not get the seller’s best price. One reason U.S. drug prices are higher than other countries’ is that the government has not been a shrewd negotiator.
Negotiations that produce major reductions in drug prices frequently turn on the threat that the drug manufacturer will lose access to patients on a certain health plan or end up in a higher drug tier that substantially raises a patient’s copay. However, if the buyer refuses the seller’s final offer, its members or citizens lose access to those drugs. While major private health plans and pharmacy benefit managers are able to negotiate drug prices directly with pharmaceutical manufacturers, often with substantial savings, Medicare was prevented from doing so by federal law until recently.
In May 2018, the Trump administration released a so-called blueprint for reducing prescription drug prices that included negotiating Medicare prescription drug prices with the pharmaceutical industry. This plan wasn’t enacted during his term.
In August 2022, under the Biden-Harris administration, the Inflation Reduction Act enabled price negotiation and specified the number of drugs that negotiations could include in a year.
The Inflation Reduction Act allowed Medicare to negotiate drug prices for the first time.
The first negotiation between Medicare and the pharmaceutical industry took place over the summer of 2024, lowering costs for 10 Medicare Part D drugs, which include the blood thinner Xarelto and the drugs Farxiga and Jardiance, which treat Type 2 diabetes, heart failure and kidney disease. The resulting $1.5 billion in savings will be extended in 2026 to the approximately 8.8 million Medicare Part D patients who are taking these drugs. The prices for these drugs are still twice what they are in four other developed countries.
Prices will be negotiated for another 15 Medicare Part D drugs in 2027. Thereafter, drug negotiations could include Medicare Part D drugs, which you pick up from your pharmacy, and Medicare Part B drugs, which are administered or received from your doctor’s office.
Another aspect of the Inflation Reduction Act is a $2,000 annual cap on Medicare Part D patients’ out-of-pocket drug expenses. This won’t go into effect until 2025, however, and it simply shifts costs above the cap onto taxpayers.
Continuation of progress
It is often challenging to attribute policy successes to one administration versus another when assessing complex issues such as drug pricing. There were ideas initiated during the Trump administration that did not come to fruition until the Biden-Harris administration implemented and expanded on them.
For example, Medicare price negotiation, proposed in a Trump administration “blueprint,” was codified in law by President Biden, but the fruits of this policy will not be seen until the next administration. And regardless of who you attribute this success to, only a portion of people on Medicare will see any relief from high drug prices as a result.
Truly lowering the costs of prescription drugs would require identifying the maximum price the nation is willing to pay for benefits, such as cost per quality adjusted life year at the federal, state and private payer levels, and being willing to walk away from negotiations if the price exceeds that level. This would not be a panacea, though, especially for patients with rare and ultrarare diseases, and would need to be eased in over time to avoid bankrupting the industry.
C. Michael White does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Manuel Pastor, Distinguished Professor of Sociology and American Studies & Ethnicity, USC Dornsife College of Letters, Arts and Sciences
The edge of the Salton Sea, a heavily polluted lake with large geothermal and lithium resources beneath it. Manuel Pastor
The county also happens to be sitting on enough lithium to produce nearly 400 million batteries, sufficient to completely revamp the American auto fleet to electric propulsion. Even better, that lithium could be extracted in a way consistent with broader goals to reduce pollution.
The traditional ways to extract lithium involve either hard rock mining, which generates lots of waste, or large evaporation ponds, which waste a lot of water. In Imperial Valley, companies are pioneering a third method. They are extracting the mineral from the underground briny water brought up during geothermal energy production and then injecting that briny water back into the ground in a closed loop. It promises to yield the cleanest, greenest lithium on the planet.
The hope of a clean energy future has excited investors and public officials so much that the area is being rechristened as “Lithium Valley.”
In a region desperate for jobs and income, the prospect of a “white gold rush” is appealing. Public officials have been working to roll out the red carpet for big investors, including trying to create a clear plan for infrastructure and a quicker permitting process. To get community groups’ support, they are playing up the potential for jobs, including company commitments to hire local workers.
But Imperial Valley residents who have been on the losing end of get-rich schemes around water and real estate in the past are worried that their political leaders may be giving away the store. As we explore in our new book, “Charging Forward: Lithium Valley, Electric Vehicles and a Just Future,” the U.S. has an opportunity to ensure that these residents directly benefit from the lithium extraction boom, which is an important part of the global shift to clean energy.
Possibilities and perils in ‘Lithium Valley’
Imperial Valley is emblematic of the potential and the risks that have long faced impoverished communities in resource-rich regions.
To understand the possibilities and perils in Imperial Valley, it’s useful to remember that the world is not just moving away from fossil fuel extraction but toward more mineral extraction. Today’s battery technology – necessary for electric vehicles and energy storage – relies on minerals including cobalt, magnesium, nickel and graphite. And mineral extraction is often accompanied by obscured environmental risks.
A prototype for CTR’s lithium-producing geothermal facility, in the Hell’s Kitchen area of Imperial Valley. Manuel Pastor
In Imperial Valley, environmental and community organizations are worried about lithium extraction’s water use, waste and air pollution as production steps up and truck traffic increases. When your region’s childhood asthma rate is already more than twice the national average, and dust from the drying lake is toxic, kicking up a “little extra dust” is a big deal.
Comite Civico del Valle, a long-established environmental justice organization in Imperial Valley, has sued to slow down a streamlined permitting process for Controlled Thermal Resources, a company planning lithium extraction there. The group’s concern is that inadequate environmental reviews could result in harm to residents’ health. Both the company and public officials are warning that the lawsuit could stop the lithium boom before it begins.
Behind these policies and financial incentives have been public will and taxpayer money.
Young advocates with the Imperial Valley Equity & Justice Coalition have been spreading their concerns through the community. Chris Benner
We believe that local residents, not just companies, deserve a return. Rather than promising to just pay for community “benefits,” such as environmental mitigation, contributions to municipal coffers or jobs, the companies could pay “dividends” directly to local residents and communities.
There are models of this dividend approach. For example, the Alaska Permanent Fund gives an annual amount to all residents of that state from revenues obtained from the oil beneath the ground.
In Imperial Valley, the actual ownership of the lithium is complex, involving a mix of privately owned subsurface rights, public lease rights obtained by companies and public rights held by the regional water district to whom companies will pay royalties.
Given the ownership complexities and the desire to benefit as development takes place, local authorities and community organizations persuaded the state in 2022 to pass a per-metric-ton lithium tax to address local needs.
Controlled Thermal Resources CEO Rod Colwell, right, walks near the Salton Sea with a colleague. AP Photo/Marcio Jose Sanchez
That “flat tax” was bitterly resisted by some in the emerging industry on the grounds that it could make Imperial Valley’s less-polluting extraction method too costly to compete with environmentally damaging imports; after the vote, CTR’s CEO called the legislators “clowns.” Meanwhile, CTR has also agreed to hire union workers in the construction phase. Everyone – companies, communities and government officials – is struggling to balance economic viability with accountability.
Lessons for a just transition
The hesitance of low-income Imperial Valley residents to immediately buy into the lithium vision is deeply rooted in history.
Decades of racial exclusion, patronizing practices and broken promises have led to deep distrust of outsiders who assert that things will be better this time.
You can still find old billboards promising a resort life on the Salton Sea, which today is one of the state’s most polluted lakes. Wind kicks up toxic dust when the water is low. Manuel Pastor
Building the supply chain here, too
In recent years, some people have pinned their hopes on lithium. The main site so far in Imperial Valley has been CTR’s Hell’s Kitchen. It’s a fitting moniker on summer days when temperatures regularly exceed 110 degrees.
Ensuring that the surrounding communities benefit from this new lithium boom will require thinking about how to attract not just companies extracting the lithium but also those that will use it. So far, Imperial County has had limited success in attracting related industries. In 2023, a company named Statevolt said it would build a “gigafactory” there to assemble batteries. However, the company’s previous efforts – Britishvolt in the United Kingdom and Italvolt in Italy – have stalled without any volts being produced. Imperial County will need serious suitors to make a go of it.
A potentially promising future for modern transportation and energy storage may be brewing in Imperial Valley. But getting to a brighter future for everyone will require remembering a lesson from the past: that community investments tend to be hard-won. We believe that ensuring everyone benefits long term is essential for achieving a more inclusive and sustainable future.
Research for the book from which this article draws was supported by the James Irvine Foundation, New Energy Nexus, the California Wellness Foundation, and Open Society Foundations. Manuel Pastor was also supported by a Residency at the Rockefeller Foundation’s Bellagio Center.
Research for the book from which this article draws was supported by the James Irvine Foundation, New Energy Nexus, the California Wellness Foundation, and Open Society Foundations. Chris Benner was also supported by a Residency at the Rockefeller Foundation’s Bellagio Center.
Uniformed members of Trail Life USA present the colors at the Family Research Council’s 2018 Values Voter Summit. Chip Somodevilla/Getty Images
The Family Research Council is a conservative advocacy group with a “biblical worldview.” While it has a church ministries department that works with churches from several evangelical Christian denominations that share its perspectives, it does not represent a single denomination. Although its activities are primarily focused on policy, advocacy, government lobbying and public communication, the Internal Revenue Service granted the council’s application to be treated as “an association of churches” in 2020.
Concerned that the IRS had erred in allowing the council and similar groups to be designated churches or associations of churches, Democratic members of the House of Representatives sent the Treasury secretary and the IRS commissioner letters in 2022 and 2024 expressing alarm. The House Democrats pointed to what appeared to be “abuse” of the tax code and asked the IRS to “determine whether existing guidance is sufficient to prevent abuse and what resources or Congressional actions are needed.”
As a professor of nonprofit law, I believe some groups that aren’t churches or associations of churches want to be designated that way to avoid the scrutiny being a charitable organization otherwise requires. At the same time, some other groups that should qualify as churches may have difficulty doing so because of the IRS’ outdated test for that status.
All charitable nonprofits, including churches, get the same basic benefits under federal tax law. This means they don’t have to pay taxes on their revenue and that donors can deduct the value of their gifts from their taxable income – as long as they itemize deductions on their tax return.
Unlike other tax-exempt charities, churches don’t have to file 990 forms. That means the public does not have access to churches’ staff pay, board membership and funding details, which are in this publicly available tax form that all other charities must complete every year. The availability of 990 forms enhances the transparency and accountability of the nonprofit sector.
And churches and associations of churches are unlikely to get audited by the IRS. Federal law requires that a senior IRS official “reasonably believes” the church or association has violated federal tax rules before beginning an investigation. This means that an official must have reason to believe the organization has violated federal tax law before obtaining any information from the organization.
This standard is higher than what’s needed before an audit can begin for all other tax-exempt organizations and indeed all taxpayers. For everyone else, the IRS is free to begin an examination based only on a suspicion of a violation or even based on random selection.
Also, unlike other tax-exempt charities, churches and church associations are automatically eligible for their tax-exempt status. They don’t have to apply for it.
Why churches get special treatment
Congress has passed laws granting churches and what it calls “integrated auxiliaries” and “conventions or associations of churches” special protections because the First Amendment to the U.S. Constitution protects religious freedom.
Churches include houses of worship ranging in size from a handful of parishioners to megachurches with 10,000 or more people attending weekly services. Houses of worship of all faiths, including synagogues, mosques and temples, count as churches, according to the IRS.
Integrated auxiliaries are church schools and other organizations affiliated with churches or conventions and primarily supported by internal church sources, as opposed to by the public or government.
Conventions or associations of churches are organizations that have houses of worship from either a single denomination or from multiple denominations as their members. Most denominational bodies, such as the executive committee of the Southern Baptist Convention and the U.S. Conference of Catholic Bishops, are likely conventions or associations of churches, although the IRS does not publish a list of such entities.
Not every religious nonprofit belongs in one of these categories.
For example, the University of Notre Dame, where I teach law students and conduct legal research, and World Vision, a global humanitarian group, are both religious organizations that do not fall into any of these categories. This makes sense, because Notre Dame and World Vision are primarily engaged in activities other than fostering a religious congregation or coordinating the activities of churches within a single denomination.
The IRS has long relied on a 14-factor test to distinguish churches from the other religious nonprofits. Examples of those factors include having ordained ministers, a formal doctrine, a distinct membership and a regular congregation attending religious services.
It’s not necessary for all the factors to apply to pass this test.
Yet for almost as long, courts have been uncomfortable with this test because it draws heavily on the traditional characteristics of Protestant Christian churches, as the U.S. Court of Federal Claims explained in a 2009 ruling. This system therefore may be a poor fit for houses of worship of other faiths, especially given the increasing diversity of faith communities.
These courts have instead adopted an “associational test.” It focuses on whether the organization’s congregants hold religious services on a regular basis and gather in person on other occasions.
Aprill and I recommend that the IRS change its definition for churches to the associational one adopted by some courts in rulings as early as 1980. As the U.S. Court of Federal Claims explained in that 2009 ruling, this test focuses on whether a body of believers assembles regularly to worship. Given technological advances, the IRS should also make it clear that this test can be satisfied through remote participation in religious services using interactive, teleconferencing apps such as Zoom.
This definition would also be better suited for congregations of all faiths because some faiths do not prioritize many of the factors included in the IRS test, such as having a formal code of doctrine or requiring members to not be associated with other houses of worship or faiths. And it would better reflect how some Americans participate in religious services today.
We recommend that the IRS revisit its test for being a church and that Congress pass a law that would change the definition of church associations. The new law could limit associations of churches to organizations that represent a single denomination, as Congress likely initially intended.
This latter change would make it harder for religious organizations that are primarily involved in bringing churches from multiple faiths together to engage in advocacy or other activities to obtain this status and the lack of transparency and accountability that come with it. We believe Congress, not the IRS, should make this change because of the potential political tensions that narrowing the definition could create.
We don’t think the changes would impinge upon the special role that churches have in our society. Indeed, the revised test for qualifying as a church would better fit with both the increasing variety of faiths in our country and technological advancements.
Lloyd Hitoshi Mayer is affiliated with the University of Notre Dame, a tax-exempt religious nonprofit corporation. Lloyd Hitoshi Mayer is also affiliated with South Bend City Church, a tax-exempt religious nonprofit corporation that is classified as a church for federal tax purposes.
Retailers donate products that are typically packaged, palatable and safe for consumption, yet unsuitable for sale due to quality concerns, such as minor blemishes. Since these items can go a long way to feeding hungry people, donations represent one of the best uses of leftover or surplus food.
Donations are also technically acts of charity, and the companies responsible for them get tax breaks. This means that donations boost profits by lowering costs. There’s a second effect of donations on a store’s bottom line: They improve the quality of food on the store’s shelves and increase revenue from food sales.
As a supply chain scholar who studies food banks, I worked with a team of economists to estimate the effects of retail food donations. We used sales data for five perishable food categories sold by two competing retail chains, with stores located in a large, Midwestern metropolitan area. We found that stores that remove items on the brink of expiration, donate them to food banks and fill up the emptied shelf space with fresher inventory get more revenue from sales and earn higher profits.
Retailers donate 30% of what food banks give their clients
Food banks get about 30% of that food for free from supermarkets and big-box retailers that sell groceries. Prior to the start of the COVID-19 pandemic, retailers supplied more than twice as much food to food banks as the federal government did. The volume of food supplied by federal programs administered by the United States Department of Agriculture, such as the Emergency Food Assistance Program, has steadily increased since 2020, to the point where it now almost matches the volume of food donated by retailers.
The remaining 2.88 billion pounds of food were either purchased directly, provided by farmers, donated by food processing companies or donated by people and organizations in local communities.
Retail donation routines are established but inconsistent
When food on a store’s shelves is on the verge of expiration, store managers have three options. They can donate or discard it, or sell it at a discount.
Stores that regularly donate food have established routines for when they set aside about-to-expire food to give away. However, these routines are often inconsistent.
Many stores donate only on a seasonal basis or just give away certain kinds of food. For example, they might donate only meat, baked goods or fruits and vegetables. In many cases, donations take a backseat to more immediate priorities, such as customer service.
Those realities can increase the likelihood that food will land at the dump instead of on somebody’s table.
Although millions of Americans struggle to find their next meal, close to 40% of food gets thrown out along the supply chain, as food moves between agricultural producers, factories, retailers and consumers. This is largely due to logistical challenges: It’s hard to transport and distribute highly perishable food.
Discounted meat is displayed at a San Rafael, Calif., grocery store in September 2024. Justin Sullivan/Getty Images
Discounts on food can undercut sales
Stores often prefer to sell food on the brink of expiration at a discount rather than donate it or throw it out due to the money they recoup that way. This option, however, also keeps the discounted food on the shelf, where it takes up valuable space that could otherwise hold fresher inventory.
Shelf space dedicated to the sale and promotion of full-priced products competes with that for price-discounted food. Stocking perishable foods that are starting to look iffy – such as bananas with brown spots sold alongside unblemished yellow bananas – could harm a retailer’s image if shoppers start to question the store’s quality.
My research team calls this practice “preemptive removal.” Increasing the average quality level of food on display does more than improve a store’s appearance. We used panel data with over 20,000 observations covering 21 retail stores competing in the same geographic market. The five fresh food categories were bakery, dairy, deli, meat and produce.
Stores that donated food, instead of discounting it, may have made better use of the limited room to display fresher inventory. My research team found that food donations can increase average food prices by up to 1%, which corresponds to a 33% increase in profit margins. Profit margins for supermarkets and other food retailers are quite low and typically hover below 3%.
That means even a small increment in food prices, even a 1% bump up, can translate into significantly higher profits for retailers. At the same time, increasing the volume of retail food donations would get more food to people who need it, limit hunger and reduce food insecurity.
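The leap from a 1% price increase to a roughly 33% jump in profit margins follows from how thin grocery margins are. A minimal sketch, using illustrative numbers (the ~3% margin figure is from the article; the rest is an assumption for the sake of the arithmetic):

```python
# On $100 of sales at a ~3% margin, profit is about $3.
# A 1% price increase adds roughly $1 of pure profit, since
# the underlying costs don't change.
margin = 3.0       # dollars of profit per $100 of sales (~3% margin)
price_bump = 1.0   # a 1% price increase on $100 of sales
increase = (margin + price_bump) / margin - 1
print(f"profit margin rises by about {increase:.0%}")  # -> about 33%
```

In other words, when margins are only a few cents on the dollar, nearly all of a small price increase flows straight to profit.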
Prof Lowrey has consulted with several Feeding America member Food Banks on procurement and food-distribution-related supply chain projects. He has also served on an advisory board to the Academy of Nutrition & Dietetics, focused on supply chain responses to the COVID-19 pandemic in the emergency feeding network. His research has been funded by the Agriculture and Food Research Initiative (National Institute for Food and Agriculture, U.S. Department of Agriculture).
Source: The Conversation – Africa – By Martina van Heerden, Senior Lecturer in English for Educational Development, University of the Western Cape
It can be difficult to tell someone what you think of their work, even if you mean well and even if you think they’re doing a good job. Sometimes the person doesn’t understand what you mean, or doesn’t respond the way you’d hoped. Feedback should contribute to learning, but you might sometimes wonder if it’s any use at all. South African university lecturer Martina van Heerden studied the art of giving feedback to students in higher education. Her insights and three top tips are useful for effective communication in many areas of life.
Why did you decide to study feedback?
As a tutor, I initially did not get training on how to give feedback to students on their essays. After a while, I started thinking more about what exactly I was trying to say and do with my feedback. For example, if I told a student “your argument lacks depth”, was I just telling them to make a stronger argument in this essay, or was there a “deeper message”?
So, in my PhD, I explored “what lies beneath” our feedback. What I found is that often feedback has very specific messages for students, largely about what is valued in a particular context; what the student is expected to know in that discipline.
Feedback is a big concern in higher education globally. It is fairly well researched and most research identifies various problems with it. Students don’t seem to take up the feedback, or there are different understandings of its purpose, or it’s not as effective as it should be because of academic language and conventions. The blame tends to be put on students.
I wondered if the problem lay instead with how educators approach and give feedback.
Focusing on English literature studies, I analysed written comments given to first year students and worked with the tutors giving the feedback. English literature is a tricky discipline to give feedback in as it involves balancing language, literature and academic literacy aspects. Focusing too much on one aspect in feedback could mislead students.
What did you find?
There was a bit of misalignment between the purpose and the practice of feedback.
Ideally, the underlying message of feedback in literature studies should be to develop students’ ability to think critically and analytically about texts. It could do this, for example, by asking questions that stimulate thinking around the topics and themes of the text (rather than asking students to merely provide more information on it).
Most of the feedback in my study, however, focused on correcting surface-level errors like grammar and spelling. Although there is nothing wrong with this in itself, it could mislead students about what is valued in the discipline.
Feedback is often quite frustrating for both students and educators – both research and practice wisdom attest to this. Educators are frustrated because students don’t seem to learn from feedback, and students are frustrated because they are getting what they feel is unhelpful feedback. These are global concerns. There is a big discrepancy between how useful educators and students perceive feedback to be.
My work and other research highlights the importance of seeing feedback as a literacy – that is, as a skill – that needs to be developed deliberately.
Too often, it is assumed that educators will know how to give effective feedback, or it is assumed that students will know what to do with feedback. But a lot of the time, they don’t – we go by our instincts and what is perhaps easier to identify and correct. For feedback to actually “feed forward” – beyond a specific essay or task – the skill needs to be developed.
How can people give better feedback?
I recommend asking yourself three questions:
1. What do I want to achieve with my feedback? Ask yourself if you just want to help students pass this essay or do well in this task, or if you want them to learn something. If they need to learn something, what should they learn?
2. How understandable is my feedback language? The language of feedback may be steeped in academic, professional, or industry terms which you take for granted. Or you may have developed your own feedback shorthand. This might be easy for you to understand – you’re the one writing it – but that doesn’t mean a student will. So, ask yourself whether someone who is not you would understand your feedback.
3. What do I want my students to do with my feedback? Too often, comments don’t really give students guidance on what to do. Correcting errors and making statements about students’ work takes agency and action away from students. Using questions and suggestions means that students become more active in the feedback process.
Feedback is important for learning and development. Too often, though, it becomes another obstacle that has to be overcome. Useful, clear, actionable feedback can help students become better writers, researchers, thinkers and scholars.
Martina van Heerden is a member of the South African Association of Academic Literacy Practitioners.
Fiji Prime Minister Sitiveni Rabuka is cautioning New Caledonia’s local government to “be reasonable” in its requests of Paris ahead of a Pacific fact-finding mission.
A much-anticipated high-level visit by Pacific leaders to the French territory is confirmed, after it was postponed by New Caledonia’s local government in August due to allegations France was pushing its own agenda.
President Louis Mapou has confirmed the Pacific leaders’ mission will take place from October 27-29.
Rabuka is one of the four Pacific leaders taking part in the so-called “Troika Plus” mission and confirmed he will be in Nouméa on Sunday.
He told RNZ Pacific during his visit to Aotearoa last week that as “an old hand in Pacific leadership”, listening was key.
“I’m hoping that they will be very, very reasonable about what they’re asking for,” the prime minister said.
“When they started, the Kanaky movement started during my time as Prime Minister. I told them, ‘look, don’t slap the hand that has fed you’.
‘Good disassociation arrangement’
“So have a good disassociation arrangement when you become independent, make sure you part as friends.”
This week, Rabuka told RNZ Pacific in Apia that he would be taking a back seat during the mission.
Veteran Pacific journalist Nick Maclellan, who is in New Caledonia, said there was “significant concern” that political leaders in France did not understand the depth of the crisis.
“This crisis is unresolved, and I think as Pacific leaders arrive this week, they’ll have to look beyond the surface calm to realise that there are many issues that still have to play out in the months to come,” he said.
He said there appeared to be “a tension” between the local government of New Caledonia and the French authorities about the purpose of Pacific leaders’ mission.
“In the past, French diplomats have suggested that the Forum is welcome to come, to condemn violence, to address the question of reconstruction and so on,” he said.
“But I sense a reluctance to address issues around France’s responsibility for decolonisation.
‘Important moment’
“The very fact that four prime ministers are coming, not diplomats, not ministers, not just officials, but four prime ministers of Forum member countries, shows that this is an important moment for regional engagement,” he added.
In a statement on Friday, the Pacific Islands Forum Secretariat said that the prime ministers of Tonga and the Cook Islands, along with the Solomon Islands Foreign Affairs Minister, would join Rabuka to travel to New Caledonia.
Tongan PM Hu’akavameiliku will head the mission, which is expected to land in Nouméa after the Commonwealth Heads of Government Meeting (CHOGM) in Samoa this week.
This article is republished under a community partnership agreement with RNZ.
Have you noticed certain words and phrases popping up everywhere lately?
Phrases such as “delve into” and “navigate the landscape” seem to feature in everything from social media posts to news articles and academic publications. They may sound fancy, but their overuse can make a text feel monotonous and repetitive.
This trend may be linked to the increasing use of generative artificial intelligence (AI) tools such as ChatGPT and other large language models (LLMs). These tools are designed to make writing easier by offering suggestions based on patterns in the text they were trained on.
However, these patterns can lead to the overuse of certain stylistic words and phrases, resulting in works that don’t closely resemble genuine human writing.
The rise of stylistic language
Generative AI tools are trained on vast amounts of text from various sources. As such, they tend to favour the most common words and phrases in their outputs.
And although most of the research has looked specifically at academic writing, the stylistic language trend has appeared in various other forms of writing, including student essays and school applications. As one application editor told Forbes, “tapestry” is a particularly common offending term in cases where AI was used to write a draft:
I no longer believe there’s a way to innocently use the word ‘tapestry’ in an essay; if the word ‘tapestry’ appears, it was generated by ChatGPT.
Why it’s a problem
The overuse of certain words and phrases leads to writing losing its personal touch. It becomes harder to distinguish between individual voices and perspectives and everything takes on a robotic undertone.
Also, words such as “revolutionise” or “intriguing” – while they might seem like they’re giving you a more polished product – can actually make writing harder to understand.
Stylish and/or flowery language doesn’t communicate ideas as effectively as clear and straightforward language. Beyond this, one study found simple and precise words not only enhance comprehension, but also make the writer appear more intelligent.
Lastly, the overuse of stylistic words can make writing boring. Writing should be engaging and varied; relying on a few buzzwords will lead to readers tuning out.
There’s currently no research that can give us an exact list of the most common stylistic words used by ChatGPT; this would require an exhaustive analysis of every output ever generated. That said, here’s what ChatGPT itself presented when asked the question.
Possible solutions
So how can we fix this? Here are some ideas:
1. Be aware of repetition
If you’re using a tool such as ChatGPT, pay attention to how often certain words or phrases come up. If you notice the same terms appearing again and again, try switching them out for simpler and/or more original language. Instead of saying “delve into” you could just say “explore”, or “look at it closely”.
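The checking step above can even be partly automated. As a rough illustration, a short script can count how often phrases from a watch-list appear in a draft and suggest plainer alternatives. The phrase list and suggested swaps below are purely illustrative assumptions, not an official inventory of AI buzzwords:

```python
import re

# Hypothetical watch-list of phrases often flagged as overused in
# AI-assisted prose, mapped to plainer alternatives (illustrative only).
BUZZWORDS = {
    "delve into": "explore",
    "navigate the landscape": "deal with",
    "tapestry": "mix",
    "revolutionise": "change",
    "intriguing": "interesting",
}

def flag_buzzwords(text: str) -> dict:
    """Return {phrase: (count, suggested_replacement)} for each
    watch-listed phrase found in the text (case-insensitive)."""
    lower = text.lower()
    found = {}
    for phrase, suggestion in BUZZWORDS.items():
        n = len(re.findall(re.escape(phrase), lower))
        if n:
            found[phrase] = (n, suggestion)
    return found

sample = ("Let's delve into the rich tapestry of ideas and "
          "delve into how we navigate the landscape of writing.")
for phrase, (n, alt) in flag_buzzwords(sample).items():
    print(f"'{phrase}' appears {n}x - consider '{alt}'")
```

A check like this won’t judge whether a phrase is apt in context – that remains the writer’s call – but it can make repetition visible before a reader notices it.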
2. Ask for clear language
Much of what you get out of ChatGPT will come down to the specific prompt you give it. If you don’t want complex language, try asking it to “write clearly, without using complex words”.
3. Edit your work
ChatGPT can be a helpful starting point for writing many different types of text, but editing its outputs remains important. By reviewing and changing certain words and phrases, you can still add your own voice to the output.
Being creative with synonyms is one way to do this. You could use a thesaurus, or think more carefully about what you’re trying to communicate in your text – and how you might do this in a new way.
4. Customise AI settings
Many AI tools such as ChatGPT, Microsoft Copilot and Claude allow you to adjust the writing style through settings or tailored prompts. For example, you can prioritise clarity and simplicity, or create an exclusion list to avoid certain words.
By being more mindful of how we use generative AI and making an effort to write with clarity and originality, we can avoid falling into the AI style trap.
In the end, writing should be about expressing your ideas in your own way. While ChatGPT can help, it’s up to each of us to make sure we’re saying what we really want to – and not what an AI tool tells us to.
Ritesh Chugh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Richard Youngs, Professor of International and European Politics, University of Warwick
A new team of 26 leaders has been appointed to the European Commission, reflecting a carefully crafted balance of political ideologies and member states. Each will take on a different portfolio, from democracy to agriculture to innovation.
And for the first time, the EU will have a dedicated defence commissioner in the form of Lithuania’s Andrius Kubilius.
Commission president Ursula von der Leyen has made it clear that in her second term, the primary focus will be defence and security issues. She wants to convert the EU into a “security project” and has created the new post to build the bloc’s military capacities and cooperation.
The last EU Commission that ran from 2019 to this year declared itself “geopolitical”. Under this label, it moved the European Union towards a heightened concern with military capabilities and hard power.
Most observers see this as a positive aspect of the last commission. And there is a striking degree of supportive consensus that the military-power shift needs to be extended and deepened.
However, this increasingly unchallenged conventional wisdom has unhelpfully narrowed and distorted the EU’s foreign policy debates. The EU needs to move beyond its hazy geopolitical mantra, not lean on it even more heavily.
Much EU policy debate has become concerned principally with the question of whether the EU can defend itself more robustly and without help from the US. Analysis of European foreign policy has come overwhelmingly to take the form of calls for the EU to advance more ambitiously in its emerging ethos of militarised self-preservation and for laggardly member states to accelerate their rearmament.
While the focus on defence capabilities was overdue and remains necessary, it is becoming too dominant.
Defence players and experts get a far readier hearing in Brussels than anyone working on more liberal agendas involving human rights, development or peacebuilding. Funds flow aplenty into new programmes on defence and away from these old liberal concerns, many of which policymakers and analysts now belittle as passé.
As they ramp up their defence spending, most member states are cutting their development aid. The incoming commissioners’ mission statements are all about security and protecting European democracy from external threats. There is no mention of the work they would do to support global human rights.
If it previously tended to under-securitise its major challenges, the EU now risks over-securitising them. Well beyond the defence sphere, nearly all areas of EU policy are now infused with a more securitised ethos.
The new hard-power orthodoxy risks crowding out any critical questioning of the EU’s new enthusiasm for concepts – power politics and zero-sum geopolitical rivalry – that were until recently anathema to its very essence.
This deflects from the broader and more significant question of how the EU needs to mobilise different kinds of power to shape international trends. Contrary to what now predominates as received wisdom, governments’ increased defence budgets and EU efforts to coordinate defence investments do not in themselves provide such leverage.
Indeed, with its priority on military defence, the EU has in recent years shown less evidence of qualitatively updating and sharpening its understanding of international leverage. While European leaders ritually claim that the union has “learned the language of power”, the current policy trajectory has diverted the EU away from being more influentially geostrategic.
Outgoing high representative Josep Borrell has himself lamented that the EU risks being better at reacting to its last crisis than pre-empting wider and future trends.
The shift in EU strategic narrative rests on an unduly one-dimensional reading of global trends. Contrary to what is now a commonly accepted premise, not every international development points towards state-to-state, zero-sum, order-menacing illiberalism.
Much of it does, but the evolving order is also one of intensified societal mobilisation against autocracy and state power. It sees sub-state networks working across borders and citizens seeking problem-oriented cooperation on the ailing global commons.
Out of step
Articles, political speeches, and European policy documents routinely urge the EU to step back and accept that liberal political values are now contested. But global surveys show strong and even rising levels of citizen support for democracy and underlying social trends away from authoritarian values.
Once a self-styled power of liberal betterment, the EU increasingly seems reduced to a strategy of stemming ordinary people’s desire for change. It rarely meets citizens’ pleas for support in their efforts to spur political and social reform. It has become an ambiguous bystander more than a proactive promulgator.
By downplaying these complexities, the EU’s fixation on traditional geopolitical power looks increasingly at odds with the emerging order rather than skilfully aligned with it. The EU’s now commonly repeated leitmotif of “accepting the world as it is” actually does no such thing.
It collides with the underlying ways in which that world is shifting socially and politically. It’s one thing for the EU to get real about defending itself but another to become a regressive power that passively moulds itself to the power politics of illiberalism.
Far from going it alone, Europe instead needs to fashion more effective interdependencies and coalitions.
As its new leaders take office, the EU needs to move beyond the now omnipresent, yet ill-defined geopolitical narrative. It needs a more precise and forward-looking vision of what it wants power, sovereignty and autonomy for.
If, for many years, the EU dangerously neglected the need for hard, defensive power, it now risks moving to the other extreme – giving hard power such pride of place that it detracts from the more consequential trends that will redefine the world order.
Richard Youngs does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Iain Farquharson, Lecturer in Global Challenges – Security Pathway Lead, Brunel University London
In the conflicts raging in Ukraine and the Middle East, we have recently seen calls for the establishment of what are being referred to as “buffer zones”.
Russia has proposed setting one up around Ukraine’s second city, Kharkiv in the north-east of the country. This, the Kremlin claims, is to protect Russian towns from shelling and missile attacks from Ukrainian territory.
Israel, meanwhile, wants to establish a buffer zone in southern Lebanon. It says it needs to protect the nearly 70,000 civilians returning to their homes, which they abandoned over the past year after rocket attacks by Hezbollah.
But these suggestions should be viewed with scepticism. Both Russia and Israel want to set up these buffer zones within the borders of neighbouring autonomous nation states – in breach of their sovereignty – in the name of “security”. They should instead primarily be seen as a way of formalising control over contested territory to protect their home bases, which would give them a military advantage.
The situation is further complicated by the fact that neither nation is formally at war with its opponent. No formal declaration of war has been issued by Russia against Ukraine, while Israel claims legitimacy for establishing a buffer zone under Article 51 of the UN Charter, concerning self-defence.
Such arguments are hypocritical and one-sided. Russian and Israeli policymakers have shown no concern for the effect of the establishment of these zones on the Ukrainian and Lebanese populations of the areas.
The idea of buffer zones has a long history within international relations. Buffer zones have generally been defined as a nation state or neutral geographical area between two states not politically or militarily controlled by either of the rival states it separates.
The zones proposed by Russia and Israel don’t fit this definition. Both Kharkiv and southern Lebanon are militarily contested. And neither the Ukrainian nor the Lebanese government is in full control of the territory in question.
If the Russian and Israeli proposals were to conform to this definition, they would comprise territory on both sides of the border of the two states, established with the agreement of both rival states. But neither Russia nor Israel is planning to cede their own territory in the establishment of these buffer zones. In fact, both have consistently sought to delegitimise their rival’s status as a nation state.
These considerations, alongside Ukrainian and Hezbollah resistance, suggest that these new buffer zones will be fiercely contested. Indeed, the history of buffer states and zones suggests that the effectiveness of such zones is highly questionable.
History of failure
Lebanon itself serves as an example of this in acting as a buffer state (although not formally declared as such) for the Israeli-Syrian rivalry from the late 1960s. Both Syria (1976) and Israel (1978 and 1982) intervened militarily in Lebanon at one point or another.
In this context, Lebanon provided a way for Syria to protect itself from surprise attacks. It allowed the political and military confrontation to play out without escalation to their own national territories. But it was terrible for Lebanon itself and ironically, Israel’s invasion of Lebanon in 1982 paved the way for the foundation of Hezbollah as a political and military force.
Similarly, Anglo-Russian rivalry over influence in Afghanistan in the 19th century focused on political manoeuvring to exert influence over Afghan rulers to protect British India and southern Russia respectively. This saw much money and political capital expended on both sides. There were also three British military incursions (1839-40, 1878-80 and 1919) attempting to consolidate their influence. None went well.
In both these cases though, competing powers were using an intervening state to avoid an escalation of tensions into conflict.
External ‘security zones’
In these cases, the recent declarations in pursuit of “buffer zones” by both Russia and Israel have more in common with strategic occupations of territory to resolve a military problem – namely, attacks on their own territories. In the security studies literature, these are termed “external security zones”: militarily occupied zones within hostile territory deemed essential to the national security of the occupying power.
Historically, these zones have also been of questionable value. Following continued Palestinian attacks on Israeli border villages, in 1977 the Israel Defense Forces created a formal security buffer zone in south Lebanon through the proxy South Lebanon Army and supported by UN Interim Forces in Lebanon (Unifil) from March 1978.
The establishment of this zone did little to prevent shelling and rocket attacks on Israel, leading to significant exchanges of artillery fire in the summer of 1981. Then on June 6 1982, Israel invaded southern Lebanon.
Ultimately, neither buffer zones nor security zones have proved very effective at preventing conflict or at shielding populations from its effects, which have almost always been negative, to say the least.
Now, both Russia and Israel are likely to find themselves facing increasing resistance from the occupied nation. This will require the commitment of more troops and perhaps deeper military advances under cover of the political and strategic “necessity” to ensure the security of their own borders.
These commitments will undoubtedly lead to more casualties. They will either lead to a destabilisation of existing governance in their regions or serve as a pretext for the aggressors to push further forward. It will also require them to further reshape their economies to fill military needs and could lead to potential escalation with other regional powers.
Iain Farquharson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Nathan Critch, Research Associate, Alliance Manchester Business School, University of Manchester
In his first conference speech as prime minister, Keir Starmer vowed that a Hillsborough law will be introduced in April, before the next anniversary of the football stadium disaster. The law will force public bodies to cooperate with investigations into future disasters and scandals.
This announcement follows a long campaign by the families of the 97 people killed (and hundreds more injured) in a crush at Hillsborough stadium in 1989.
The disaster and the inquiry that followed highlighted how justice is so often impeded by the tendency of powerful people to cover up information or refuse to cooperate in investigations.
Initial media coverage of the Hillsborough disaster wrongly blamed football fans for the deaths. A public inquiry cited faults in police control, although its main recommendations related to crowd safety in sports venues.
Bereaved families “were sure that the true context, circumstances and aftermath of Hillsborough had not been adequately explored, established and made public”. Further efforts and campaigns for truth and justice ensued. Families attempted to bring private prosecutions against two of the police officers who had been in charge of operations at the match. Neither prosecution resulted in a conviction.
In 2009, the government made an exception to the normal 30-year restriction on the publication of official documents, to ensure all documents related to the disaster were available to investigators.
Shortly afterwards, the government established the Hillsborough Independent Panel to reexamine the causes of the disaster in light of full access to relevant evidence and in close consultation with Hillsborough families.
The panel’s report emphasised policing failures and found that crowd safety had been “compromised at every level” due to “well known” issues. The report found that police “sought to deflect responsibility” onto Liverpool fans.
New inquests concluded that the 97 had been unlawfully killed, highlighting police and emergency service failures and exonerating the supporters who were initially blamed.
In 2012, South Yorkshire Police apologised, and confirmed the independent panel’s findings that “senior officers sought to change the record of events” in the aftermath.
Decades of campaigning
The long struggle for truth and justice has focused on a lack of honesty and openness by those in power, a willingness to close ranks and blame others, and a failure to disclose relevant information. A Hillsborough law will enforce “a positive duty to tell the truth” and require public officials to “proactively assist investigations”.
Starmer confirmed in his speech that the law will include criminal sanctions for those who breach it. Proposals also include better legal support and representation for future victims of disasters and their families.
Proposals for a Hillsborough law were first put forward in 2017 as a private member’s bill by Andy Burnham, then shadow home secretary. Its passage was interrupted by the 2017 general election, but some aspects were reintroduced in 2022 in another private member’s bill. This, too, was interrupted when Boris Johnson prorogued parliament.
Since becoming Labour leader, Starmer has framed his project as one committed to returning his party, and the government, to the service of working people. Passing a law designed and advocated for by working-class people who experienced injustice when their family members died is a clear symbol of this agenda.
The law is also indicative of Starmer’s efforts to frame his government as one that seeks to be transparent, open and consistent. This puts him in contrast to the preceding 14 years of Conservative rule, which were marred by allegations of corruption and misconduct.
High-profile pandemic-era scandals – members of the government holding illegal parties in Downing Street, and PPE (personal protective equipment) contracts misallocated to companies owned by people closely connected to government – are just two examples.
The announcement comes as Starmer himself, and senior members of his government, have been accused of lack of transparency on donations and gifts.
Announcing the Hillsborough law goes some way to repairing his commitment to transparency and service in government, which has lost some of its shine in recent weeks.
Changing the culture
The reaction to the announcement from families and campaigners has been positive.
The director of the charity Inquest, which supports families of those who have died in state-related disasters, called the law “a step forward in providing a legacy for the 97 so that others do not have to go through the pain and trauma of decades of campaigning”.
The potential effect of the law goes far beyond Hillsborough. Other recent events, including the Post Office scandal, the infected blood scandal and the Grenfell Tower fire, have all been affected by a lack of openness and candour by those in power.
But will a law on its own be enough? From Hillsborough to Grenfell to Windrush, what these many injustices highlight is that the problem of secrecy and a lack of transparency and candour is systemic and cultural. The British state has long been marked by a tradition of elitism, a government-knows-best attitude and a scepticism towards citizen engagement, participation and openness.
While the Hillsborough law is indeed a step forward, it is only one piece of the jigsaw of making British governance more open and democratic.
Nathan Critch receives funding from the Economic and Social Research Council (grant number: ES/V002740/1). He is affiliated with The Productivity Institute.