Developing countries like Indonesia rely on foreign high-skilled, high-wage workers to drive economic growth and innovation. However, protection of these workers’ legal rights is often neglected, undermining their productivity and well-being, as well as Indonesia’s reputation as a destination country for employment.
My research delves into the flaws of Indonesia’s labour market institutions, such as the national labour dispute settlement system, revealing that current mechanisms are inadequate in protecting the rights of high-skilled foreign workers.
The study
My findings show the national dispute settlement system exhibits significant systemic shortcomings, such as processing cases slowly and siding with employers, which limit its capacity to protect all workers effectively. But disputes involving foreign workers are further complicated by the fact that immigration law allows employers to cancel residence permits, meaning the government requires these workers to leave the country even though they may have been unfairly dismissed.
Foreign workers are mainly from Northeast Asia (China, Japan and Korea), and their use on investment-tied projects, coupled with Indonesia’s downstreaming programme, will ensure their numbers continue to grow. In 2023, the Indonesian government issued 168,048 permits for foreigners to work in Indonesia, with the top three destinations being Central Sulawesi (18,678), Jakarta (13,862) and West Java (10,807). By July 2024, the government had already issued over 14% more permits than at the same point the previous year.
My study examined 92 labour disputes involving foreign workers between 2006 (when the new national dispute settlement system was implemented) and 2022, which were settled by the Industrial Relations Court. One additional dispute was filed in 2023, but the Industrial Relations Court has not yet published the settlement despite a legal requirement to do so.
I complemented these court settlements with 98 qualitative interviews with other stakeholders, including policymakers, labour rights activists, legal professionals, and other foreign workers, such as foreign spouses, remote workers and digital nomads.
As in other countries, the number of registered labour disputes is only the tip of the iceberg: workers tend to cut their losses and move on rather than invest time, energy and limited financial resources in challenging their better-resourced employers.
All of the employers were Indonesian companies – none of the foreign workers who filed a lawsuit worked for a multinational company – and the workers who filed came from at least 20 countries.
In terms of geographical distribution, the studied disputes were settled in 13 local jurisdictions, and were mostly lodged by workers rather than employers.
The nature of the disputes mostly involved claims that an employment contract had been terminated prematurely (87 cases), while a much smaller number involved resignation (4 cases) or were unknown (1 case). Of the 92 claims, 83 were initiated by workers, and eight by an employer. In one case, the lodging party was not recorded in the final decision.
Hiring a private lawyer
Employers used the Immigration Law to undermine the protective role of the Manpower Law. As it stands, foreign workers are only entitled to employment protection if they hold a valid residence permit, which employers can and do shorten. This shows that the Indonesian government prioritises the flexibility of employers at the expense of employment protection for foreign workers.
In at least 92% of cases, foreign workers paid for a private lawyer to represent them at the formal meetings and hearings required by the Disputes Settlement Law, the cost of which could be hefty.
As one foreign worker explained:
It’s always in the back of your mind, to do whatever to make employers happy if you want to stay. No matter what the work permit and contract say, they can ask immigration to kick us out within a week!
A retired government official responsible for designing policy regarding foreign workers was surprised when he heard this, explaining that:
I thought they could look after themselves because they earn such high wages. Well, higher than the average Indonesian worker, that is.
Hiring a private lawyer is the only way for foreign workers to stay represented throughout the dispute resolution process, because they must leave Indonesia once they are fired. Without the legal right to remain in the country, pursuing a claim without a lawyer is very difficult – even impossible.
Addressing institutional failures
Engaging a private lawyer served as an ‘institutional fix’ that enabled most foreign workers to engage with Indonesia’s labour dispute settlement system by attending formal meetings and hearings, as well as filling out required paperwork and sending essential letters and replies.
Addressing this institutional failure requires a shift in law and policy. Firstly, legal reforms are essential to ensure that immigration and employment laws are integrated so that foreign workers have access to the legal processes intended to protect labour rights. At a minimum, this would involve amending policy to prevent employers from cancelling residence permits, so that foreign workers are not forced to leave the country prematurely.
Alternatively, the Directorate-General of Immigration could still permit employers to do so, but then provide the affected foreign workers with a limited-stay visa so that they can remain in Indonesia to engage with the legal process. The Hong Kong Immigration Department does this for Indonesian migrant workers.
Secondly, there is a need for enhanced support systems that provide immediate and effective assistance to foreign workers. Government agencies tasked with settling labour disputes, such as local manpower offices and the Industrial Relations Court, should be equipped with adequate resources and trained personnel to handle migrant labour issues. Doing so would decrease the reliance of foreign workers on private lawyers.
Failure to protect the employment rights of foreign workers has the potential to damage Indonesia’s reputation as a destination country for employment. Such damage could undermine Indonesia’s ambitious plans to build a new capital city (Ibu Kota Nusantara) with the assistance of foreign workers, and undermine the government’s downstreaming programme, which aims to earn Indonesia more from its raw minerals by processing them domestically before export.
Wayne Palmer has received research funding from the International Labour Organization, the Freedom Fund, and the Australian Research Council.
Careers in the maritime industry can take graduates all over the world. Igor-Kardasov
When most people are asked to picture an engineer at work, they probably imagine a civil engineer in a hard hat at a construction site, a chemical engineer in a laboratory or an electrical engineer examining a complex circuit board. Very few, I’m willing to bet, visualise someone aboard a ship.
But, for those drawn both to engineering and a seafaring life, marine engineering and nautical science are ideal careers – especially in a country like South Africa, uniquely positioned where the Atlantic and Indian Oceans converge.
Over 90% of the world’s goods are transported by sea. That means both marine engineers and nautical scientists are crucial to global trade, transportation and resource management. These professionals play a critical role in ensuring that vessels operate reliably, comply with environmental regulations and navigate safely through the world’s oceans.
South Africa’s Department of Higher Education does not distinguish between different types of engineering when collecting statistics about graduates. However, those of us in the marine engineering and nautical science space in academia can confirm the numbers are low. At my own institution, the Cape Peninsula University of Technology (CPUT) in Cape Town, between ten and 20 people graduate each year from these programmes. At another, Nelson Mandela University in the Eastern Cape province, around seven people graduate in these fields each year. With so few people studying these disciplines, the skills they impart are in high demand. The government’s list of scarce skills for 2024 includes “marine engineering technologist”.
I’m an engineering lecturer in the Department of Maritime Studies at CPUT. There, I teach in both the Bachelor of Nautical Science and Marine Engineering programmes, lecturing on a variety of subjects, including mathematics and applied thermodynamics (the branch of physics that deals with the relationships between heat, energy and work).
Watching my students complete their degrees and start careers in marine engineering or nautical science has made it clear that this work offers a blend of adventure, technical challenge, and the opportunity to contribute to an industry that is essential to global commerce and environmental stewardship.
Whether it’s designing cutting-edge marine technology or navigating the world’s vast oceans, the maritime field promises a fulfilling professional journey.
Theory and practice
Three universities – CPUT, Nelson Mandela University and the Durban University of Technology in KwaZulu-Natal – offer maritime studies courses aimed at those who intend to work at sea. A fourth, the University of KwaZulu-Natal, offers this degree with a focus on maritime law and logistics. There are also some specialised training institutions, among them the South African Maritime Safety Authority, that provide various qualifications and certifications.
You’ll need to have taken mathematics, physical science and English in your school-leaving matric year, and to have passed them well. (Contact individual universities to find out their precise degree requirements.) A strong interest in and commitment to a career at sea or in the maritime industry more broadly is crucial.
Being a strong swimmer can be an advantage, but it is not necessarily a requirement. Students who do not know how to swim will typically have the opportunity to learn and develop their swimming skills as part of their training.
There are practical and theoretical components to these degrees. At our Granger Bay campus near the V&A Waterfront in Cape Town, for instance, we’ve set up a survival centre – a practical facility where students receive training to equip them for life at sea. It is equipped with three fully enclosed lifeboats, two open lifeboats, a rigid capsule, two fast rescue craft, a heated 12 × 7 metre pool, an underwater escape training dunker, various life rafts, life jackets, immersion suits, and more.
On the theoretical side, a Bachelor of Nautical Science programme focuses on the navigation and operation of ships. It encompasses navigation techniques, ship stability, cargo handling, meteorology, and maritime laws. This prepares students for careers as navigators in the merchant navy. (Not to be confused with the military navy – a merchant navy is a country’s commercial shipping industry, which includes all the cargo and passenger ships that are registered under that nation and used for trade, transport and other non-military purposes.)
Some of our graduates have gone on to become ship’s masters, also called captains – the highest-ranking officer on any ship.
Marine engineering programmes, meanwhile, focus on the design, development, operation and maintenance of the mechanical systems and equipment used on ships and other marine vessels. This includes everything from engines and propulsion systems to refrigeration and steering mechanisms. Marine engineers ensure that these systems function efficiently and safely. They often work closely with naval architects to integrate these technologies into new ship designs or retrofit them into existing vessels.
Ample opportunities
Oceanic African countries, like South Africa, need people with these skills to harness the full potential of their maritime resources.
The development of local expertise in maritime engineering and nautical science is essential for ensuring safe and efficient maritime operations. It also helps to protect marine environments and contributes to global maritime trade. Skilled professionals in these fields help these countries take advantage of their maritime assets, promote economic growth and enhance their roles in international commerce.
As a proud lecturer, I am thrilled to see my students progress and develop both internationally and locally. Many have gone on to work in various exciting and prestigious roles around the world. Some have become ship’s masters, navigating and managing large vessels on international waters, while others have taken on critical roles in maritime operations, port management and logistics in countries such as Singapore, Norway and the United Kingdom. Some have pursued careers in maritime law and policy. Their career paths reflect the diverse and global opportunities available in the maritime industry.
Ekaterina Rzyankina is affiliated with the Cape Peninsula University of Technology (CPUT).
It might sound far-fetched, but recent research suggests that dogs’ and humans’ brains synchronise when they look at each other.
This research, conducted in China, marks the first time that “neural coupling” between different species has been observed.
Neural coupling is when the brain activity of two or more individuals aligns during an interaction. For humans, this is often in response to a conversation or story.
Neural coupling has been observed when members of the same species interact, including mice, bats, humans and other primates. This linking of brains is probably important in shaping responses during social encounters and might result in complex behaviour that would not be seen in isolation, such as enhancing teamwork or learning.
When social species interact, their brains “connect”. But this case of it happening between different species raises interesting considerations about the subtleties of the human-dog relationship and might help us understand each other a little better.
In the recent study, the researchers studied neural coupling using brain-activity recording equipment called non-invasive electroencephalography (EEG). This uses headgear containing electrodes that detect neural signals – in this case, from the beagles and humans involved in the study.
Researchers examined what happened to these neural signals when dogs and people were isolated from each other, and in the presence of each other, but without looking at each other. Dogs and humans were then allowed to interact with each other.
Look into my eyes
When dogs and humans gazed at each other and the dogs were stroked, their brain signals synchronised. Brain patterns in key areas associated with attention matched in both dog and person.
Dogs and people who became more familiar with each other over the five days of the study had increased synchronisation of neural signals. Previous studies of human-human interactions have found increased familiarity between people also resulted in more closely matching brain patterns. So the depth of relationship between people and dogs may make neural coupling stronger.
The ability of dogs to form strong attachments with people is well known. A 2022 study found the presence of familiar humans could reduce stress responses in young wolves, the dog’s close relative. Forming neural connections with people might be one of the ways by which the dog-human relationship develops.
The researchers also studied the potential effect of differences in the brain on neural coupling. They did this by including dogs with a mutation in a gene called Shank3, which can lead to impaired neural connectivity in brain areas linked with attention. This gene is responsible for making a protein that helps promote communication between cells, and is especially abundant in the brain. Mutations in Shank3 have also been associated with autism spectrum disorder in humans.
Study dogs with the Shank3 mutation did not show the same level of matching brain signals with people as those without the mutation, potentially because of impaired neural signalling and processing.
However, when researchers gave the dogs with the Shank3 mutation a single dose of LSD (a hallucinogenic drug), the dogs showed increased levels of attention and restored neural coupling with humans.
The researchers were clear that there remains much to be learned about neural coupling between dogs and humans.
It might well be the case that looking into your dog’s eyes means that your respective brain signals will synchronise and enhance your connection. The more familiar you are with each other, the stronger it becomes, it seems.
Jacqueline Boyd is affiliated with The Kennel Club (UK) through membership and as a contributor to the Health Advisory Group. Jacqueline is a full member of the Association of Pet Dog Trainers (APDT #01583) and she also writes, consults and coaches on canine matters on an independent basis, in addition to her academic affiliation at Nottingham Trent University.
French Polynesia’s president and civil society leaders have called on the United Nations to bring France to the negotiating table and set a timetable for the decolonisation of the Pacific territory.
More than a decade after the archipelago was re-listed for decolonisation by the UN General Assembly, France has refused to acknowledge the world’s peak diplomatic organisation has a legitimate role.
France’s reputation has taken a battering as an out-of-touch colonial power since deadly violence erupted in Kanaky New Caledonia in May, sparked by a now abandoned French government attempt to dilute the voting power of indigenous Kanak people.
Pro-independence French Polynesian President Moetai Brotherson told the UN Decolonisation Committee’s annual meeting in New York on Monday that “after a decade of silence” France must be “guided” to participate in “dialogue.”
“Our government’s full support for a comprehensive, transparent and peaceful decolonisation process with France, under the scrutiny of the United Nations, can pave the way for a decolonisation process that serves as an example to the world,” Brotherson said.
Brotherson called for France to finally co-operate in creating a roadmap and timeline for the decolonisation process, pointing to unrest in New Caledonia that “reminds us of the delicate balance that peace requires”.
The 121 islands of French Polynesia stretch over a vast expanse of the Pacific, have a population of about 280,000, and were first settled more than 2,000 years ago.
Often referred to as Tahiti, after its most populous island, the archipelago was declared a French protectorate in 1842 and fully annexed in 1880.
Last year, France attended the UN committee for the first time since the territory’s re-inscription in 2013 as awaiting decolonisation – a listing that followed decades of campaigning by French Polynesian politicians.
French Permanent Representative to the UN Nicolas De Rivière responds to French Polynesian President Moetai Brotherson at the 79th session of the Decolonisation Committee on Monday. Image: UNTV
“I would like to clarify once again that this change of method does not imply a change of policy,” French permanent representative to the UN Nicolas De Rivière told the committee on Monday.
“There is no process between the state and the Polynesian territory that reserves a role for the United Nations,” he said, and pointed out France contributes almost 2 billion euros (US $2.2 billion) each year, or almost 30 percent of the territory’s GDP.
After the UN session, Brotherson told the media that France’s position is “off the mark”.
17 speakers back independence
French Polynesia was initially listed for decolonisation by the UN in 1946 but removed a year later as France fought to hold onto its overseas territories after the Second World War.
The territory was granted limited autonomy in 1984, with control over local government services, while France retained administration over justice, security, defence, foreign policy and the currency.
Seventeen pro-independence speakers and four pro-autonomy speakers – who support the status quo – gave impassioned testimony to the committee.
Lawyer and Protestant church spokesman Philippe Neuffer highlighted children in the territory “solely learn French and Western history”.
“They deserve the right to learn our complete history, not the one centred on the French side of the story,” he said.
“Talking about the nuclear tests without even mentioning our veterans’ history and how they fought to get a court to condemn France for poisoning people with nuclear radiation.”
France conducted 193 nuclear tests over three decades until 1996 in French Polynesia.
‘We demand justice’
“Our lands are contaminated, our health compromised and our spirits burned,” president of the Mururoa E Tatou Association, Tevaerai Puarai, told the UN, denouncing the tests as French “nuclear colonialism”.
“We demand justice. We demand freedom,” Puarai said.
He said France needed to take full responsibility for its “nuclear crimes”, referencing a controversial 10-year compensation deal reached in 2009.
Some Māʼohi indigenous people, many French residents and descendants in the territory fear independence and the resulting loss of subsidies would devastate the local economy and public services.
Pro-autonomy local Assembly member Tepuaraurii Teriitahi told the committee, “French Polynesia is neither oppressed nor exploited by France.”
“The idea that we could find 2 billion a year to replace this contribution on our own is an illusion that would lead to the impoverishment and downfall of our hitherto prosperous country,” she said.
Source: The Conversation – Africa – By Olasunkanmi Habeeb Okunola, Visiting Scientist, United Nations University – Institute for Environment and Human Security (UNU-EHS), United Nations University
Extreme climate events — floods, droughts and heatwaves — are not just becoming more frequent; they are also more severe.
It’s important to understand how communities can recover from these events in ways that also build resilience to future events.
In a recent study, we analysed how communities affected by the extreme flood events of 2021 in Germany’s Ahr Valley and in Lagos, Nigeria, grappled with recovery.
Our aim was to identify the factors – and combinations of factors – that served as barriers (or enablers) to recovery from disasters.
We found that financial limitations, political interests and administrative hurdles led to prioritising immediate relief and reconstruction over long-term sustainable recovery.
We concluded from our findings that the success of recovery efforts lies in balancing short-term relief and a long-term vision. While immediate aid is essential after a disaster, true resilience hinges on proactive measures that address systemic challenges and empower communities to build a better future.
Recovery should not merely be action-oriented rebuilding of infrastructure (engineering). It should also draw on insights from other areas, such as governance and psychology, to help people deal with losses and heal.
What worked
To understand the recovery pathways of the two regions, we reviewed relevant literature, newspaper articles and government documents. We also interviewed government agencies, NGO representatives, volunteers and local residents in the communities where these floods occurred.
We found that in the Ahr Valley, recovery wasn’t just about rebuilding structures; it was about empowering individuals.
Through initiatives like mental health and first aid courses, residents learned to support one another. This fostered a sense of community and resilience that was essential for meeting the emotional challenges posed by the disaster.
The focus on rebuilding with a sustainable vision also included environmental initiatives. For example, a type of heating system was put in place that didn’t rely on fossil fuels.
Not only did this reduce carbon emissions, it also served as a symbol of hope. It showed there was an opportunity to create a more sustainable and environmentally friendly community.
In Lagos, too, residents found strength in community and innovation. Grassroots efforts using sustainable materials like bamboo and palm wood highlighted the ingenuity and resourcefulness of the people. Faith-based organisations provided material aid as well as emotional and spiritual support. This reinforced the bonds that held the community together.
Each community faced unique challenges. But they shared a common thread: the importance of adaptive governance – flexible decision-making and strong community ties.
For example, established building codes in the Ahr Valley provided a framework for reconstruction, ensuring that new structures were resilient and safe.
In Lagos, the absence of strong government support highlighted the critical role of community organisations in providing services and fostering a sense of shared responsibility.
What needs improvement
In both the Ahr Valley and Lagos, the journey towards recovery has been fraught with obstacles as well.
In the Ahr Valley, bureaucratic red tape has become a formidable barrier. Residents, eager to rebuild their lives, find themselves entangled in a complex web of regulations and lengthy approval processes. This has delayed their access to insurance and recovery funds. Waiting for months or even years has eroded hope and fuelled a sense of abandonment.
Meanwhile, in Lagos, insufficient government support has left communities to fend for themselves, creating a breeding ground for uncertainty and conflict.
Land tenure disputes, fuelled by a lack of clear property rights, sow seeds of distrust and hinder resettlement efforts. Political disagreements complicate the picture, as competing interests divert attention and resources away from those who need them most.
In Lagos, none of the respondents reported having insurance to help them to recover from disaster-related losses.
While some residents in the Ahr Valley did have insurance, many were under-insured.
The Ahr Valley’s building codes offer a framework for reconstruction. But it’s clear that processes should be streamlined so communities can take ownership of their recovery.
In Lagos, the importance of robust social safety nets is clear. Partnerships between communities and authorities are also needed.
A different approach
Recovery isn’t a separate process that occurs only after disasters. It should be seen as an essential part of managing risks. It’s important to understand what recovery involves and what resources are needed.
This will help reduce future risks and increase resilience after extreme events.
Governments should encourage flexible governance structures that value community voices and local knowledge to enable recovery. A good example is the New Orleans Recovery Authority, established after Hurricane Katrina. It involved local residents and city officials in planning and rebuilding efforts.
Grassroots efforts in Lagos demonstrated the power of sustainable materials and community-led initiatives. Seeing things from the community’s point of view can help tailor solutions that fit the situation and adapt to evolving challenges.
Training and capacity-building programmes empower communities to be active in their own recovery.
Mental health and first aid courses were successful in the Ahr Valley. Equipping individuals with skills in sustainable practices and disaster preparedness helps weave a social fabric capable of weathering future storms.
Olasunkanmi Habeeb Okunola is a Visiting Scientist at the United Nations University – Institute for Environment and Human Security (UNU-EHS).
Saskia E. Werners works with United Nations University, Institute for Environment and Human Security (UNU-EHS). She is grateful to have received research grants in support of her research on climate change adaptation and recovery.
Source: The Conversation – USA – By Virginia Raguin, Distinguished Professor of Humanities Emerita, College of the Holy Cross
A silhouette of onlookers in front of Esther Strauss’ sculpture ‘Crowning.’ Michel M. Raguin with cooperation of the Mariendom Linz, CC BY
A sculpture of the Virgin Mary showing her giving birth to Jesus was recently attacked and beheaded. Called “Crowning” by the artist Esther Strauss, the sculpture had been part of a temporary exhibition of art outside the Catholic St. Mary Cathedral in Linz, Austria.
The sculpture was controversial for its explicit depiction of birth; an online petition seeking its removal received more than 12,000 signatures. Strauss’ work was part of a project that sought to look at gender equality and the role of women, designed to honor the 100th anniversary of the cathedral’s consecration to the Virgin Mary. The exhibition opened on June 27, 2024, and the statue was vandalized a few days later.
My research as a historian of art has shown me that there has never been only one way of depicting the birth of Christ.
Depiction of birth in early texts
Early Christian writings reveal that the birth of Christ was of keen interest and reflected ideas of the day.
A widely read text from the mid-second century, called The Protoevangelium of James, gives details about the life of the Virgin and infancy of Christ. As women of that time gave birth with the aid of midwives, the text explained that the Mother of God also was helped in her labor. Sections 19-20 of the text give details about Joseph contacting two midwives.
One woman is said to have doubted the virgin birth. After she inserted her finger into Mary’s vagina, her hands withered. An illustration in a French prayer book from Paris dating to about 1490-1500 shows the midwife with missing hands. The story explained that her hands grew back after she touched the child Christ.
Menologion of Basil II, an 11th-century illuminated Byzantine manuscript with 430 miniatures depicting the Nativity of Christ, now in the Vatican library. Via Wikimedia Commons
New modes of spirituality in later centuries brought changes in art. St. Bridget of Sweden, who founded a new order of nuns, left a large body of writing, including what she believed were revelations from God. One of her revelations included a vision of Christ’s birth she experienced in Bethlehem in 1371–72.
Although Bridget had given birth eight times, she described Mary’s delivery as “in the twinkling of an eye.” Bridget said she “was unable to notice or discern how or in what member (Mary) was giving birth.” By “member” she may have meant that she did not know through what part of Mary’s body Jesus emerged. Many paintings between the 15th and 16th centuries adopted her vision and showed the child surrounded by light and the Virgin calmly worshipping him.
A painting by Belgian artist Hugo van der Goes, in about 1475, follows Bridget’s vision of the birth. Instead of being “wrapped in swaddling clothes,” Christ lies naked, perfectly clean, in the “great and ineffable light” that Bridget described.
Each era and community produces art that speaks to its own priorities. Fifteenth-century Italy introduced traditions of a miraculous childbirth that were different from a realistic tradition cherished by early Christians of the second century. I would argue that “Crowning” is but one more example of such cultural change. Here, Mary is an inspiration for other women, physically strong and capable even in the difficult process of giving birth.
The sculpture, when intact, was barely 15 inches tall, a clear indication that it was not made for large-scale public veneration. It was a meditative image designed for a one-on-one encounter – for those who decided to engage.
Virginia Raguin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The Conservative party leadership ballot is a private affair. The MPs don’t have to reveal who they voted for if they don’t want to. And given how badly they appear to have bungled their final round of voting in this contest, it seems unlikely we’ll ever know what really happened.
James Cleverly was the firm favourite among MPs, and yet an attempt to manoeuvre him into the final two against the candidate his supporters felt most sure of beating in the final run-off, when party members vote, seems to have backfired.
It would appear Cleverly and his supporters forgot Lyndon B. Johnson’s first rule of politics – learn to count. As a result, party members now have a choice between two rightwing candidates, Robert Jenrick and Kemi Badenoch. Both are popular among members but less electable and palatable for the wider public. The debacle has exposed (not for the first time) the problems with the electoral system.
Cleverly was seen as the unifier of the party, with the ministerial experience and communication skills to help with a transformation. He had wowed party conference with a well-calibrated speech hinting that the party needed to “normalise” to regain trust. Yet his record leaves questions as to exactly how good his communication skills are in reality. He had made several “jokes”, which were not jokes at all – just offensive comments – and reportedly described his own government’s immigration policy as “batshit”.
A Telegraph article just before his shock loss in the parliamentary party vote feared he would “sign the death warrant” of the party as a “middle-of-the-road bluffer who tickles the tummies of members of the parliamentary party by flattering them that their historic defeat was not so bad after all”. Yet judging by the audible gasps when the result was announced, Tory MPs were shocked at how they had messed the vote up. Both the Liberal Democrats and Labour reacted with glee at the news.
Tory MPs react to the news that they’ve inadvertently knocked out their favourite candidate.
The final two
Badenoch has less ministerial experience than Cleverly but is loved by the Tory party as a battler and is now the favourite to win. The same “death warrant” article called Badenoch a “Warrior Queen”, but that cuts both ways. Badenoch, by channelling her inner Thatcher, is pitching herself as a fighter taking on the forces of reaction within and without. But, to quote another Tory, the Duke of Wellington, Thatcher would only fight battles she knew she could win. Badenoch’s battles seem rather less focused, and her war on the forces of woke now includes new mothers and civil servants (10% of whom, in her view, should be in prison).
Another recent article, this time in the Guardian, spoke of how “she often finds it hard to get through an interview without patronising or arguing with the presenter in a manner that reinforces claims she’s divisive and abrasive”. At the same time, her attempt to tell “hard truths” saw her publishing a lengthy pamphlet featuring some triangles – seemingly explaining electoral realignment – which no one could understand. Not ideal attributes for a leader.
So far in this contest, Jenrick’s most notable interventions have been to grandstand about the European Court of Human Rights (ECHR), compete to be toughest on immigration, and (and we need to follow the logic slowly here) argue that the ECHR is causing UK special forces to kill instead of capture terrorists. Jenrick is the living embodiment of the old Groucho Marx joke “those are my principles, and if you don’t like them…well, I have others”. He has made either a Damascene or cynical journey from squishy centre to hard right just ahead of this contest. What does he really believe? No one is sure.
The reasons for the Tories’ recent catastrophic election loss are in plain sight. Voters saw the Conservative governments as a toxic combination of poor delivery, scandals and being out of touch. The 2024 defeat was a combination of Boris Johnson’s immorality and Liz Truss’s incompetence. Rishi Sunak then finally fractured his own coalition with a self-defeating immigration policy. None of the candidates have addressed the reasons for the loss and the final two are evidently still in denial.
But it is the Tory members who are voting here. Their version of events is that disunity and a failure to deliver on immigration lost them power. Members may well be torn, as political scientist Tim Bale points out, between values and electability – though with Cleverly out, the latter may be a problem.
Peering through the fog of the contest, two things are very likely. First, Johnson’s shifting of the party to the right, and his closer alignment of the Tory party with the remnants of UKIP, are now more evident, and will be deepened further by whoever wins. While Badenoch and Jenrick differ on whether they should beat or join Reform, the Tory party is now on the latter’s territory. There are unlikely to be any Tory “hard truths” to address the electorate’s loss of trust in the party; instead the talking points will be culture wars, immigration, and leaving the ECHR.
Second, as a result, the party will move further from the centre ground, and away from the average voter, and their concerns. The mess the parliamentary party has made of the contest and the long shadow of dysfunctional leadership have served only to remind voters of the reasons why the party was thrown out of office in July. Peering through his snazzy new glasses, Starmer can see his bad week just got a lot better.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Globular cluster NGC 2005. ESA/Hubble & Nasa, F. Niederhofer, L. Girardi, CC BY-SA
As I finished my PhD in 1992, the universe was full of mystery – we didn’t even know exactly what it was made of. One could argue that cosmologists had made little progress in our understanding of these basic facts since the discovery of the cosmic microwave background (CMB), the afterglow of the Big Bang, in the 1960s.
I left the UK after my doctoral studies to begin a research career in the US, where I was lucky to be recruited to work on a new experiment called the Sloan Digital Sky Survey (SDSS). This new survey embraced advances in digital technologies with the ambition of measuring the “redshifts” (how light becomes more red if a source appears to move away from you) of a million galaxies.
These redshifts were then used to measure distances, and allowed cosmologists to map the three-dimensional structure of the universe.
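To illustrate, here is a minimal sketch of how a measured redshift becomes an approximate distance via Hubble’s law. The Hubble constant value and the example redshift are assumptions chosen for illustration, and this simple formula holds only for relatively nearby galaxies:

```python
# Hubble's law (v = c*z ≈ H0*d) turns a small redshift into a distance.
C_KM_S = 299_792.458  # speed of light in km/s
H0 = 70.0             # assumed Hubble constant, km/s per megaparsec (Mpc)

def distance_mpc(z: float) -> float:
    """Approximate distance in Mpc for a small redshift z."""
    recession_velocity = C_KM_S * z  # km/s
    return recession_velocity / H0

# A galaxy at redshift 0.023 sits roughly 100 Mpc away
# (about 320 million light years).
print(f"{distance_mpc(0.023):.0f} Mpc")
```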
One cosmic puzzle in the 1980s, based on the pioneering CfA Redshift Survey of Margaret Geller and John Huchra, was the significant lumpiness of galaxies, and therefore matter, in our cosmic neighbourhood. Galaxies were clustered together across a wide range of scales, with evidence for coherent “superclusters” of galaxies spanning over 30 million light years in length.
This article is part of our series Cosmology in crisis? which uncovers the greatest problems facing cosmologists today – and discusses the implications of solving them.
It was important to know how such superclusters could have formed from the smooth CMB, as it would tell us the total amount of matter in the universe and, more intriguingly, what that matter was made of. That was assuming the only force in play was gravity.
By the end of the first phase of the SDSS, we had achieved our goal of a million redshifts. This data was used to discover many superclusters across the universe, including the amazing “Sloan Great Wall”, which remains one of the largest known coherent structures in the universe, over a billion light years in length.
Type Ia supernova remnant. Nasa/CXC/U.Texas
I am lucky to have lived through this amazing era of cosmic discovery around the turn of the century. Surveys like SDSS, combined with new observations of the CMB and searches for distant exploding stars known as Type Ia Supernovae (SNeIa), coincided to deliver an emphatic answer to the question: “What is the universe made of?”
The discovery of dark energy
From 1999 to 2004, the cosmological community came together to agree that the universe was 5% normal (baryonic) matter, 25% dark matter (unknown, invisible matter), and 70% “dark energy” (an expansive force) – essentially a cosmological constant, which was first postulated by Einstein. The discovery that the universe was dominated by this constant energy shocked everyone, especially as Einstein had called the cosmological constant his “biggest blunder”.
Today, cosmologists still agree this is the most likely make-up of our universe. But observational cosmologists like me have refined our measurements of these cosmic variables significantly – reducing the errors on these quantities.
The latest numbers from the Dark Energy Survey (DES) indicate that 31.5% of the universe is matter (a combination of dark and normal), with the remainder being dark energy assuming a cosmological constant. The error on this measurement is just 3%.
Knowing these numbers to higher precision will hopefully help cosmologists understand why the universe is like this. Why would we expect to have 70% of the universe today as “dark” (can’t be seen via electromagnetic radiation) and not associated with “matter” like everything else in the universe?
The origin of this dark energy remains the biggest challenge to physics, even after 20 years of intense study.
Intriguing measurements
Like me, a few cosmologists have become distracted by other problems over the last two decades. However, 2024 could be the start of a new era of discovery. This year, cosmologists published new results based on two of our best cosmological probes.
The first probe consists of exploding stars dubbed “SNeIa”. As these stars have a narrow range of masses, their explosions can be well calibrated, giving cosmologists a predictable brightness that can be seen far away. By comparing the known brightness of these SNeIa to their redshifts, we can determine the expansion history of the universe. These objects were, in fact, critical for discovering that the expansion of our universe is accelerating.
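As a rough sketch of that standard-candle logic, the snippet below turns an observed brightness into a distance using the distance modulus relation. The absolute magnitude assumed for a Type Ia supernova is a commonly quoted approximation, used here purely for illustration:

```python
# Distance modulus: m - M = 5*log10(d_parsecs) - 5, so knowing a Type Ia
# supernova's intrinsic brightness M and observed brightness m gives d.
M_SNIA = -19.3  # assumed absolute magnitude of a Type Ia supernova

def luminosity_distance_mpc(apparent_mag: float) -> float:
    """Distance in megaparsecs implied by an observed apparent magnitude."""
    d_parsecs = 10 ** ((apparent_mag - M_SNIA + 5) / 5)
    return d_parsecs / 1e6

# A supernova observed at magnitude 24 lies roughly 4,600 Mpc away:
# far enough to probe how the expansion rate has changed over time.
print(f"{luminosity_distance_mpc(24.0):.0f} Mpc")
```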
The second probe works by looking at Baryon Acoustic Oscillations (BAO) – relics of predictable sound waves in the plasma (charged gas) of the early universe, before the CMB. These are now frozen into the large-scale structure of galaxies around us. Like SNeIa, their predictable size can be compared with their observed size today to measure the expansion history of the universe.
Recently, DES reported its final SNeIa results from over a decade of work, detecting and characterising many thousands of supernova events. While these SNeIa results are consistent with the orthodox view that the universe is dominated by a cosmological constant, they do leave open the tantalising possibility of new physics – namely, that the dark energy could be varying with cosmic time.
That said, scientists are trained to be sceptical, and there are many reasons to distrust a single experiment, single observation, or even a single set of cosmologists!
Cosmologists now go to extraordinary lengths to “blind” their results from themselves during analysis of the data, only revealing the answer at the last moment. This blinding is done to avoid unconscious human biases affecting the work, which could possibly encourage people to get the answer they believe they should see.
This is why repeatability of results is at the heart of all science. In cosmology, we cherish the need for multiple experiments checking and challenging each other.
The second result to turn heads was the first BAO measurements from the Dark Energy Spectroscopic Instrument (DESI), successor to the SDSS. The first DESI map of the cosmos is deeper and denser than the original SDSS. Its first BAO results are intriguing – the data alone is still consistent with a cosmological constant, but with hints of a possible time-varying dark energy when combined with other data sources.
DESI in the dome of the Nicholas U. Mayall 4-meter Telescope at the Kitt Peak National Observatory. wikipedia, CC BY-SA
In particular, when DESI analyses the combination of its BAO results with the final DES SNeIa data, the significance of a time-varying dark energy increases to 3.9 sigma (a measure of how unusual a set of data is if a hypothesis is true) – less than a 0.01% chance of being a statistical fluke.
Most of us would take such odds, but scientists have been hurt before by systematic errors within their data that can mimic such statistical certainty. Particle physicists therefore demand a discovery standard of 5 sigma for any claims of new physics – less than a one-in-a-million chance that the result is a fluke!
As scientists will say: “Extraordinary claims require extraordinary evidence.”
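For the curious, here is a minimal sketch of how a sigma value maps onto the probability of a fluke, using the one-sided tail of a Gaussian distribution (conventions vary between one-sided and two-sided, so treat the exact numbers as illustrative):

```python
# Convert a "sigma" significance into a Gaussian tail probability.
from math import erf, sqrt

def tail_probability(sigma: float) -> float:
    """One-sided probability of a fluctuation at least this many sigma."""
    return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

for sigma in (2.0, 3.0, 3.9, 5.0):
    print(f"{sigma:.1f} sigma -> {tail_probability(sigma):.1e}")
# 3.9 sigma -> about 4.8e-05, or roughly 1 in 21,000
# 5.0 sigma -> about 2.9e-07, or roughly 1 in 3.5 million
```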
Mind-boggling implications
Are we entering a new era of cosmological discovery? If so, what would it mean?
The answer to my first question is probably yes. The next few years will be fun for cosmologists, with new data and results due from the European Space Agency’s Euclid mission. Launched last year, it is already scanning the sky with unprecedented accuracy.
Likewise, DESI will get more and better data, while the European Southern Observatory starts its own massive redshift survey in 2025. Then you have the Rubin Observatory in Chile coming online soon. Combining these datasets should prove beyond doubt whether dark energy varies with cosmic time.
If it does, it implies there is less dark energy now than in the past. This could be caused by many things but, interestingly, it could signify the end of a present, accelerated phase of the expansion of the universe.
It also implies that dark energy is probably not a cosmological constant, which is thought to arise from the background energy associated with empty space. According to quantum mechanics, empty space isn’t really empty: particles pop in and out of existence, creating something we call “vacuum energy”. Ironically, predictions of this vacuum energy disagree with our cosmological observations by many orders of magnitude.
So, if we did discover that dark energy varies over time, it might explain why observations are at odds with quantum mechanics, which is an extremely well-tested theory. This would suggest the assumption in the standard model of cosmology, that dark energy is constant, needs a rethink. Such a realisation may help solve other mysteries about the universe – or pose new ones.
In short, the new cosmological observations coming this decade will stimulate a new era of physical thinking. Congratulations to my younger cosmologists: it is your era to have fun.
Source: The Conversation – USA – By Alan Jenn, Associate Professional Researcher in Transportation, University of California, Davis
A Nissan Leaf charges at a station in Pasadena, Calif., on Sept. 23, 2024. Mario Tama/Getty Images
The Biden administration is using tax credits, regulations and federal investments to shift drivers toward electric vehicles. But drivers will make the switch only if they are confident they can find reliable charging when and where they need it.
Over the past four years, the number of public charging ports across the U.S. has doubled. As of August 2024, the nation had 192,000 publicly available charging ports and was adding about 1,000 public chargers weekly. Infrastructure rarely expands at such a fast rate.
Agencies are allocating billions of dollars authorized through the 2021 Bipartisan Infrastructure Law for building charging infrastructure. This expansion is making long-distance EV travel more practical. It also makes EV ownership more feasible for people who can’t charge at home, such as some apartment dwellers.
Charging technology is also improving. Speeds are now reaching up to 350 kilowatts – fast enough to charge a standard electric car in less than 10 minutes. The industry has also begun to shift to a standard called ISO 15118, which governs the interface between EVs and the power grid.
This standard enables a plug-and-charge system: Just plug in the charger and you’re done, without contending with apps or multiple payment systems. Many existing chargers can be retrofitted to it, rather than needing to install totally new chargers.
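As a back-of-the-envelope illustration of what those charging speeds mean, the sketch below simply divides the energy a battery needs by the charger’s power. The battery size and charge window are hypothetical, and real sessions take longer because charging tapers as the battery fills:

```python
# Idealized charging time: energy added divided by charger power.
def charge_minutes(battery_kwh: float, start_pct: float,
                   end_pct: float, charger_kw: float) -> float:
    """Minutes to charge, assuming the charger's full rate the whole time."""
    energy_needed_kwh = battery_kwh * (end_pct - start_pct) / 100.0
    return energy_needed_kwh / charger_kw * 60.0

# A hypothetical 60 kWh pack going from 10% to 80% on a 350 kW charger:
# about 7 minutes in theory; tapering stretches this in practice.
print(f"{charge_minutes(60, 10, 80, 350):.0f} min")
```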
Although EV charging infrastructure has improved in the past several years, reliability is still a critical issue. For example, a 2022 study by researchers at the University of California, Berkeley, found that nearly 30% of public non-Tesla fast chargers in the Bay Area didn’t work. A national study in 2023 that used artificial intelligence models to analyze driver reviews of EV charging stations reached a similar result.
These findings highlight the need for more robust maintenance and monitoring systems across charging networks. Federal guidelines require that chargers have an average annual “uptime,” or functional time, greater than 97%, but this metric is not always as clear-cut as it sounds: even 97% uptime permits roughly 260 hours of downtime per charger each year. And while many charging-point operators report high uptime percentages, their figures often exclude factors such as slow charging speeds or incomplete charges that degrade users’ experience.
Cars waiting to charge at a center in San Diego. Gil Tal, CC BY-ND
Many drivers complain about throttling – chargers that dispense electricity at less than the maximum rate the car is capable of accepting, so the car charges more slowly than expected. Sometimes this is normal: Cars will charge more slowly as their battery gets closer to full in order to avoid damaging the battery. Other factors can include weather conditions and the number of other vehicles simultaneously using the charging station.
Drivers’ issues with chargers involve more than just uptime. Technical barriers, such as payment processing and vehicle-charger communication, sometimes can prevent a charge from starting or completing.
To ensure that all EVs can charge smoothly at any network, groups such as the National Charging Experience Consortium and CharIN are bringing automakers, charging providers and national laboratories together to address these issues.
Other obstacles are more local, such as long lines at charging stations and chargers that are blocked by parked cars, snowbanks or other obstacles. Finding vehicles with internal combustion engines parked in EV charger spots is common enough that it has a name: getting ICEd. There’s a clear need for more comprehensive solutions to help the charging experience keep pace with demand for EVs.
A Wall Street Journal tech columnist finds abundant chargers – with abundant challenges – in Los Angeles.
A street-level view
At the University of California, Davis, we are working with the California Energy Commission to understand the range of charging obstacles that EV drivers face. As part of a three-year study, we are sending undergraduate students out to test thousands of chargers across the entire state of California.
So far, our results show that just over 70% of charge attempts have succeeded. Many issues have caused failed charges, including traffic congestion at charging stations, damaged or offline chargers, difficulty using navigation apps to find charging stations, and malfunctioning chargers.
Quantity and quality both matter
As federal investments continue to pour money into EV charging, our findings indicate that it’s important to use these resources not only to expand the network but also to improve the user experience at every step.
Areas for improvement include stricter oversight of charger maintenance; more robust uptime requirements that reflect real-world performance; and better collaboration between automakers, charging-point operators and software providers to ensure that vehicles and chargers can work together seamlessly.
The future of EV adoption depends not just on how many chargers are available, but on how reliable and easy they are to use. By addressing specific pain points that drivers face, policymakers and industry leaders can create a charging ecosystem that truly supports the needs of all EV drivers. Reliability is key to unlocking widespread confidence in the EV charging infrastructure and ensuring that it can keep pace with the growing number of electric vehicles on the road.
Alan Jenn receives funding from the California Energy Commission and is a participant in the National Charging Experience Consortium (ChargeX).
The 67 million Americans eligible for Medicare make an important decision every October: Should they make changes in their Medicare health insurance plans for the next calendar year?
The decision is complicated. Medicare has an enormous variety of coverage options, with large and varying implications for people’s health and finances, both as beneficiaries and taxpayers. And the decision is consequential – some choices lock beneficiaries out of traditional Medicare.
Beneficiaries choose an insurance plan when they turn 65 or become eligible based on qualifying chronic conditions or disabilities. After the initial sign-up, most beneficiaries can make changes only during the open enrollment period each fall.
The 2024 open enrollment period, which runs from Oct. 15 to Dec. 7, marks an opportunity to reassess options. Given the complicated nature of Medicare and the scarcity of unbiased advisers, however, finding reliable information and understanding the options available can be challenging.
We are health care policy experts who study Medicare, and even we find it complicated. One of us recently helped a relative enroll in Medicare for the first time. She’s healthy, has access to health insurance through her employer and doesn’t regularly take prescription drugs. Even in this straightforward scenario, the number of choices was overwhelming.
The stakes of these choices are even higher for people managing multiple chronic conditions. There is help available for beneficiaries, but we have found that there is considerable room for improvement – especially in making help available for everyone who needs it.
The choice is complex, especially when you are signing up for the first time and if you are eligible for both Medicare and Medicaid. Insurers often engage in aggressive and sometimes deceptive advertising and outreach through brokers and agents. Choose unbiased resources to guide you through the process, like http://www.shiphelp.org. Make sure to start before your 65th birthday for initial sign-up, look out for yearly plan changes, and start well before the Dec. 7 deadline for any plan changes.
2 paths with many decisions
Within Medicare, beneficiaries have a choice between two very different programs. They can enroll in either traditional Medicare, which is administered by the government, or one of the Medicare Advantage plans offered by private insurance companies.
Within each program are dozens of further choices.
Traditional Medicare is a nationally uniform cost-sharing plan for medical services that allows people to choose their providers for most types of medical care, usually without prior authorization. Deductibles for 2024 are US$1,632 for hospital costs and $240 for outpatient and medical costs. Patients also have to chip in a share of daily costs starting on Day 61 of a hospital stay and Day 21 of a skilled nursing facility stay; this cost-sharing is known as coinsurance. After the yearly deductible, Medicare pays 80% of outpatient and medical costs, leaving the person to pay the remaining 20%. Traditional Medicare’s basic plan, known as Part A and Part B, also has no out-of-pocket maximum.
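To make that arithmetic concrete, here is a minimal sketch of the Part B cost-sharing rules above, applied to a hypothetical year of outpatient care (the $10,000 bill is an invented example):

```python
# Traditional Medicare Part B cost-sharing, using the 2024 figures cited
# above: a $240 deductible, then 20% coinsurance with no out-of-pocket cap.
PART_B_DEDUCTIBLE = 240.00
COINSURANCE_RATE = 0.20

def patient_share(approved_charges: float) -> float:
    """Patient's share of a year's Medicare-approved outpatient charges."""
    if approved_charges <= PART_B_DEDUCTIBLE:
        return approved_charges
    return PART_B_DEDUCTIBLE + COINSURANCE_RATE * (approved_charges - PART_B_DEDUCTIBLE)

# $10,000 of approved care -> $240 + 20% of $9,760 = $2,192, and the
# patient's share keeps growing with no upper limit.
print(f"${patient_share(10_000):,.2f}")
```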
People enrolled in traditional Medicare can also purchase supplemental coverage from a private insurance company, known as Part D, for drugs. And they can purchase supplemental coverage, known as Medigap, to lower or eliminate their deductibles, coinsurance and copayments, cap costs for Parts A and B, and add an emergency foreign travel benefit.
The Medicare Advantage program allows private insurers to bundle everything together and offers many enrollment options. Compared with traditional Medicare, Medicare Advantage plans typically offer lower out-of-pocket costs. They often bundle supplemental coverage for hearing, vision and dental, which is not part of traditional Medicare.
Understanding the tradeoffs between premiums, health care access and out-of-pocket health care costs can be overwhelming.
Turning 65 begins the process of taking one of two major paths, which each have a thicket of health care choices. Rika Kanaoka/USC Schaeffer Center for Health Policy & Economics
Different Medicare Advantage plans have varying and large impacts on enrollee health, including dramatic differences in mortality rates. Researchers found a 16% difference per year between the best and worst Medicare Advantage plans, meaning that for every 100 people in the worst plans who die within a year, they would expect only 84 people to die within that year if all had been enrolled in the best plans instead. They also found plans that cost more had lower mortality rates, but plans that had higher federal quality ratings – known as “star ratings” – did not necessarily have lower mortality rates.
While many Medicare Advantage plans boast about their supplemental benefits, such as vision and dental coverage, it’s often difficult to understand how generous this supplemental coverage is. For instance, while most Medicare Advantage plans offer supplemental dental benefits, cost-sharing and coverage can vary. Some plans don’t cover services such as extractions and endodontics, which includes root canals. Most plans that cover these more extensive dental services require some combination of coinsurance, copayments and annual limits.
Even when information is fully available, mistakes are likely.
At 65, when most beneficiaries first enroll in Medicare, federal regulations guarantee that anyone can get Medigap coverage. During this initial sign-up, beneficiaries can’t be charged a higher premium based on their health.
Older Americans who enroll in a Medicare Advantage plan but then want to switch back to traditional Medicare after more than a year has passed lose that guarantee. This can effectively lock them out of enrolling in supplemental Medigap insurance, making the initial decision a one-way street.
For the initial sign-up, Medigap plans are “guaranteed issue,” meaning the plan must cover preexisting health conditions without a waiting period and must allow anyone to enroll, regardless of health. They also must be “community rated,” meaning that the cost of a plan can’t rise because of age or illness, although it can go up due to other factors such as inflation.
People who enroll in traditional Medicare and a supplemental Medigap plan at 65 can expect to continue paying community-rated premiums as long as they remain enrolled, regardless of what happens to their health.
In most states, however, people who switch from Medicare Advantage to traditional Medicare don’t have as many protections. Most state regulations permit plans to deny coverage, impose waiting periods or charge higher Medigap premiums based on an applicant’s expected health costs. Only Connecticut, Maine, Massachusetts and New York guarantee that people can get Medigap plans after the initial sign-up period.
Deceptive advertising
Information about Medicare coverage and assistance choosing a plan is available but varies in quality and completeness. Older Americans are bombarded with ads for Medicare Advantage plans that they may not be eligible for and that include misleading statements about benefits.
A November 2022 report from the U.S. Senate Committee on Finance found deceptive and aggressive sales and marketing tactics, including mailed brochures that implied government endorsement, telemarketers who called up to 20 times a day, and salespeople who approached older adults in the grocery store to ask about their insurance coverage.
The Department of Health and Human Services tightened rules for 2024, requiring third-party marketers to include federal resources about Medicare, including the website and toll-free phone number, and limiting the number of contacts from marketers.
Although the government has the authority to review marketing materials, enforcement is partially dependent on whether complaints are filed. Complaints can be filed with the federal government’s Senior Medicare Patrol, a federally funded program that prevents and addresses unethical Medicare activities.
Nearly one-third of Medicare beneficiaries seek information from an insurance broker. Brokers sell health insurance plans from multiple companies. However, because they receive payment from plans in exchange for sales, and because they are unlikely to sell every option, a plan recommended by a broker may not meet a person’s needs.
Help is out there – but falls short
An alternative source of information is the federal government. It offers three sources of information to assist people with choosing one of these plans: 1-800-Medicare, medicare.gov and the State Health Insurance Assistance Program, also known as SHIP.
Telephone SHIP services are available nationally, but one of us and our colleagues have found that in-person SHIP services are not available in some areas. We tabulated areas by ZIP code in 27 states and found that although more than half of the locations had a SHIP site within the county, areas without a SHIP site included a larger proportion of people with low incomes.
Virtual services are an option that’s particularly useful in rural areas and for people with limited mobility or little access to transportation, but they require online access. Virtual and in-person services, where both a beneficiary and a counselor can look at the same computer screen, are especially useful for looking through complex coverage options.
As one SHIP coordinator noted, many people are not aware of all their coverage options. For instance, one beneficiary told a coordinator, “I’ve been on Medicaid and I’m aging out of Medicaid. And I don’t have a lot of money. And now I have to pay for my insurance?” As it turned out, the beneficiary was eligible for both Medicaid and Medicare because of their income, and so had to pay less than they thought.
The interviews made clear that many people are not aware that Medicare Advantage ads and insurance brokers may be biased. One counselor said, “There’s a lot of backing (beneficiaries) off the ledge, if you will, thanks to those TV commercials.”
Many SHIP staff counselors said they would benefit from additional training on coverage options, including for people who are eligible for both Medicare and Medicaid. The SHIP program relies heavily on volunteers, and there is often greater demand for services than the available volunteers can offer. Additional counselors would help meet needs for complex coverage decisions.
The key to making a good Medicare coverage decision is to use the help available and weigh your costs, access to health providers, current health and medication needs, and also consider how your health and medication needs might change as time goes on.
This article is part of an occasional series examining the U.S. Medicare system.
Grace McCormack receives funding from the Commonwealth Fund and Arnold Ventures.
Melissa Garrido receives funding from Commonwealth Fund, the Laura and John Arnold Foundation, and the National Institutes of Health for Medicare-related research, including research discussed in this piece.
Donald Trump accuses others of acts he has done at an Oct. 3, 2024, rally in Michigan. AP Photo/Carlos Osorio
Donald Trump has a particular formula he uses to convey messages to his supporters and opponents alike: He highlights others’ wrongdoings even though he has committed similar acts himself.
On Oct. 3, 2024, Trump accused the Biden administration of spending Federal Emergency Management Agency funds – money meant for disaster relief – on services for immigrants. Biden did no such thing, but Trump did during his time in the White House, including to pay for additional detention space.
This is not the first time he has accused someone of something he had done or would do in the future. In 2016, Trump criticized opponent Hillary Clinton’s use of an unsecured personal email server while secretary of state as “extreme carelessness with classified material.” But once he was elected, Trump continued to use his unsecured personal cellphone while in office. And he has been criminally charged with illegally keeping classified government documents after he left office and storing them in his bedroom, bathroom and other places at his Mar-a-Lago estate.
After complaining about how Hillary Clinton handled classified documents, Donald Trump stored national secrets in a bathroom. Justice Department via AP
More recently, the Secret Service arrested a man with a rifle who was allegedly planning to shoot Trump during a round of golf. In the wake of this event, Trump accused Democrats of using “inflammatory language” that stokes the fires of political violence. Meanwhile, Trump himself has a long history of making inflammatory remarks that could potentially incite violence.
As a scholar of both politics and psychology, I’m familiar with the psychological strategies candidates use to persuade the public to support them and to cast their rivals in a negative light. This strategy Trump has used repeatedly is called “projection.” It’s a tactic people use to lessen their own faults by calling out these faults in others.
Projection abounds
There are plenty of examples. During his Sept. 10, 2024, debate with Vice President Kamala Harris, Trump claimed that Democrats were responsible for the July 13 assassination attempt against him. “I probably took a bullet to the head because of the things that they say about me,” he declared.
Earlier in the debate he had falsely accused immigrants in Springfield, Ohio, of eating other people’s pets – a statement that sparked bomb threats and prompted the city’s mayor to declare a state of emergency.
Trump isn’t the only politician who uses projection. His running mate, JD Vance, claimed “the rejection of the American family is perhaps the most pernicious and the most evil thing the left has done in this country.” Critics quickly pointed out that his own family has a history of dysfunction and drug addiction.
Projection happens on both sides of the political aisle. In reference to Trump’s proposed 10% tariff on all imported goods, the Harris campaign launched social media efforts to condemn the so-called “Trump tequila tax.” While Harris frames this proposal as a sales tax that would devastate middle-class families, she deflects from the fact that inflation has made middle-class life more expensive since she and President Joe Biden took office.
How it works
Projection is one example of unconscious psychological processes called defense mechanisms. Some people find it hard to accept criticism or believe information that they wish were not true. So they seek – and then provide – another explanation for the difference between what’s happening in the world and what’s happening in their minds.
In general, this is called “motivated reasoning,” which is an umbrella phrase used to describe the array of mental gymnastics people use to reconcile their views with reality.
Some examples include seeking out information that confirms their beliefs, dismissing factual claims or creating alternate explanations. For example, a smoker might downplay or simply avoid information related to the link between smoking and lung cancer, or perhaps tell themselves that they don’t smoke as much as they actually do.
Motivated reasoning is not unique to politics. It can be a challenging concept to consider because people tend to think they are fully in control of their decision-making abilities and that they are capable of objectively processing political information. The evidence is clear, however, that there are unconscious thought processes at work, too.
Influencing the audience
Audiences are also susceptible to unconscious psychological dynamics. Research has found that over time, people’s minds subconsciously attach emotions to concepts, names or phrases. So someone might have a particular emotional reaction to the words “gun control,” “Ron DeSantis” or “tax relief.”
And people’s minds also unconsciously create defenses for those seemingly automatic emotions. When a person’s emotions and defenses are questioned, a phenomenon called the “backfire effect” can occur, in which the process of controlling, correcting or counteracting mistaken beliefs ends up reinforcing the person’s beliefs rather than changing them.
For instance, some people may find it hard to believe that the candidate they prefer – whom they believe to be the best person for the job – truly lost an election. So they seek another explanation and accept explanations that justify their beliefs. Perhaps they choose to believe, even in the absence of evidence, that the race was rigged or that many fraudulent votes were cast. And when evidence to the contrary is offered, they insist their views are correct.
Vice President Kamala Harris has campaigned with Liz Cheney, right, a prominent Republican who formerly served in Congress. AP Photo/Mark Schiefelbein
A way out
Fortunately, research shows specific ways to reduce people’s reliance on these automatic psychological processes, including reiterating and providing details of objective facts and – importantly – attempting to correct untruths via a trusted source from the same political party.
For instance, challenges to Democrats’ belief that the Trump-affiliated conservative agenda called Project 2025 is “dangerous” would be more effective coming from a Democrat than from a Republican.
Similarly, a counter to Trump’s claim that the international community is headed toward World War III with Democrats in the White House would be stronger coming from one of Trump’s fellow Republicans. And certainly, statements that Trump “can never be trusted with power again” carry more weight when they come from the lips of former Republican Vice President Dick Cheney than from any member of the Democratic Party.
Critiques from within a candidate’s own party are not out of the question. But they are certainly improbable given the hotly charged climate that is election season 2024.
April Johnson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The Republican Party and Democratic Party offer voters starkly different visions of LGBTQ rights in America. Douglas Rissing via Getty Images
Polls show that LGBTQ rights will likely factor into most Americans’ pick for president this November as they choose between former Republican President Donald Trump and Vice President Kamala Harris, a Democrat.
A March 2024 survey by independent pollster PRRI found that 68% of voters will take LGBTQ rights into consideration at the polls. Fully 30% stated that they would vote only for a candidate who shares their views on the issue.
It is no coincidence, then, that LGBTQ rights issues feature prominently in the party platforms.
The Republican Party’s electoral promises include cutting existing federal funding for gender-affirming care and restricting transgender students’ participation in sports. Meanwhile, the Democratic Party platform proposes to outlaw discrimination against LGBTQ people, including passing the Equality Act, which would prohibit discrimination based on sexual orientation and gender identity in housing, health care and public accommodations.
As a legal scholar who has written extensively on the history of LGBTQ rights, I have seen that the clearest indication of how a politician will act once in office is not what they promise on the campaign trail. Instead, it’s what they have done in the past.
Let’s examine their records.
Trump restricted some LGBTQ rights
Trump and his running mate, U.S. Sen. JD Vance of Ohio, are both relatively new to politics, so their records on LGBTQ rights issues are slim.
Trump enacted two policies restricting LGBTQ rights early in his one term in office. The first was his 2017 executive order Promoting Free Speech and Religious Liberty, which reinforced that, in order to comply with the First Amendment, federal law must respect conscience-based objections. This order indirectly imperiled LGBTQ rights because many LGBTQ rights battles are fought over whether conservative Christian businesses run afoul of anti-discrimination laws when they refuse to serve same-sex couples.
A few months later, Trump banned transgender individuals from serving in the U.S. armed forces. He ultimately revoked the directive, implementing instead a new policy that allowed existing transgender soldiers to remain in the military but barred new transgender recruits from enlisting.
Vance has opposed trans rights
Vance, a one-term senator, has accrued a record of trying to roll back the rights of transgender Americans during his short time in public office.
Between 2023 and 2024, Vance introduced or sponsored five bills opposing trans rights. One seeks to restrict gender-affirming care for minors by imposing criminal sanctions on doctors who perform such surgeries; another aims to do the same by exposing physicians to civil liability for either prescribing gender-affirming hormones or performing such surgeries.
Harris and her vice presidential pick, Minnesota Gov. Tim Walz, have both made LGBTQ rights a legislative priority throughout their long political careers.
Harris initially took public office in 2003 as San Francisco’s district attorney. In that role, she established a hate crimes unit that prosecuted violence against LGBTQ youth in schools. She also trained prosecutors nationwide to counter the “gay panic” and “trans panic” defenses in court, which is when lawyers attempt to justify violence as a fear-based reaction to the victim’s sexual orientation or gender identity.
Since 2021, President Joe Biden has issued multiple executive orders to combat discrimination against the LGBTQ community, including by eliminating the Trump-era restrictions on transgender military service. Biden also signed into law the Respect for Marriage Act, which changed the federal definition of marriage from “a man and a woman” to “two individuals.” The statute ensures that the federal government would continue to recognize same-sex unions if the Supreme Court ever reversed its decision to legalize marriage equality.
Walz: Ally in the statehouse
Harris’ vice-presidential pick has a similarly extensive record backing LGBTQ rights.
As a U.S. representative from 2007 to 2019, Walz supported efforts to grant federal benefits to same-sex couples before marriage equality became federal law. He also co-sponsored many of the House versions of the same bills as Harris.
As citizens head to the polls in November, they can be confident that, on this topic at least, the candidates mean what they say.
Marie-Amelie George does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Molly Yanity, Professor and Director of Sports Media and Communication, University of Rhode Island
Indiana Fever guard Caitlin Clark, right, scrambles for a loose ball against Connecticut Sun guard DiJonai Carrington during a game on Aug. 28, 2024. Brian Spurlock/Icon Sportswire via Getty Images
Clark, however, didn’t get a chance to compete for a league title.
The Connecticut Sun eliminated Clark’s team, the Indiana Fever, in the first round of the playoffs with a two-game sweep, ending her record rookie-of-the-year campaign.
And it may be just the latest chapter in a complicated saga steeped in race.
During the next day’s media availability, USA Today columnist Christine Brennan recorded and posted an exchange between herself and Carrington.
In the brief clip, the veteran sports writer asks Carrington, who is Black, if she purposely hit Clark in the eye during the previous night’s game. Though Carrington insisted she didn’t intentionally hit Clark, Brennan persisted, asking the guard if she and a teammate had laughed about the incident. The questions sparked social media outrage, statements from the players union and the league, media personalities weighing in and more.
But Brennan’s questions were not asked in a vacuum. The emergence of a young, white superstar from the heartland has caused many new WNBA fans to pick sides that fall along racial lines. Brennan’s critics claim she was pushing a line of questioning that has dogged Black athletes for decades: that they are aggressive and undisciplined.
Because of that, her defense of her questions – and her unwillingness to acknowledge the complexities – has left this professor disappointed in one of her journalistic heroes.
Brennan and much of the mainstream sports media, particularly those who cover professional women’s basketball, still seem to have a racial blind spot.
The emergence of a Black, queer league
When the WNBA launched in 1997 in the wake of the success of the 1996 Olympic gold-medal-winning U.S. women’s basketball team, it did so under the watch of the NBA.
While the league experienced fits and starts in attendance and TV ratings over its lifetime, the demographic makeup of its players is undeniable: The WNBA is, by and large, a Black, queer league.
In 2020, the Women’s National Basketball Players Association reported that 83% of its members were people of color, with 67% self-reporting as “Black/African-American.” While gender and sexual identity haven’t been officially reported, a “substantial proportion,” the WNBPA reported, identify as LGBTQ+.
In 2020, the league’s diversity was celebrated as players competed in a “bubble” in Bradenton, Florida, due to the COVID-19 pandemic. They protested racial injustice, helped unseat a U.S. senator who also owned Atlanta’s WNBA franchise, and urged voters to oust former President Donald Trump from the White House.
Racial tensions bubble to the surface
In the middle of it all, the WNBA has more eyeballs on it than ever before. And, without mincing words, the fan base has “gotten whiter” since Clark’s debut this past summer, as The Wall Street Journal pointed out in July. Those white viewers of college women’s basketball have emphatically turned their attention to the pro game, in large part due to Clark’s popularity at the University of Iowa.
While the rising tide following Clark’s transition to the WNBA is certainly lifting all boats, it is also bringing detritus to the surface in the form of racist jeers from the stands and on social media.
After the Sun dispatched the Fever, All-WNBA forward Alyssa Thomas, who seldom speaks beyond soundbites, said in a postgame news conference: “I think in my 11-year career I’ve never experienced the racial comments from the Indiana Fever fan base. … I’ve never been called the things that I’ve been called on social media, and there’s no place for it.”
Echoes of Bird and Magic
In “Manufacturing Consent,” a seminal work about the U.S. news business, Edward Herman and Noam Chomsky argued that media in capitalist environments do not exist to impartially report the news, but to reinforce dominant narratives of the time, even if they are false. Most journalists, they theorized, work to support the status quo.
In sports, you sometimes see that come to light through what media scholars call “the stereotypical narrative” – a style of reporting and writing that relies on old tropes.
In Brennan’s coverage of the Carrington-Clark incident, there appear to be echoes of the way the media covered Los Angeles Lakers point guard Magic Johnson and Boston Celtics forward Larry Bird in the 1980s.
The battles between two of the sport’s greatest players – one Black, the other white – were a windfall for the NBA, lifting the league into financial sustainability.
But to many reporters who leaned on the dominant narrative of the time, the two stars also served as stand-ins for the racial tensions of the post-civil rights era. During the 1980s, Bird and Magic didn’t simply hoop; they were the “embodiments of their races and living symbols of how blacks and whites lived in America,” as scholars Patrick Ferrucci and Earnest Perry wrote.
The media gatekeepers of the Magic-Bird era often relied on racial stereotypes that ultimately distorted both athletes.
For example, early in their careers, Bird and Johnson received different journalistic treatment. In Ferrucci and Perry’s article, they explain how coverage of Bird “fit the dominant narrative of the time perfectly … exhibiting a hardworking and intelligent game that succeeded despite a lack of athletic prowess.” When the “flashy” Lakers and Johnson won, they wrote, it was because of “superior skill.”
When they lost to Bird’s Celtics, they were “outworked.”
Framing matters
Let’s go back to Brennan.
Few have done more for young women in the sports media industry than Brennan. In time, energy and money, she has mentored and supported young women trying to break into the field. She has used her platform to expand the coverage of women’s sports.
Brennan has defended her line of questioning: “I think [critics are] missing the fact of what I’m trying to do, what I am doing, what I understand clearly as a journalist, asking questions and putting things out there so that athletes can then have an opportunity to answer issues that are being discussed or out there.”
I don’t think Brennan asking Carrington about the foul was problematic. Persisting with the narrative was.
Leaning into racial stereotypes is not simply about the language used anymore. Brennan’s video of her persistent line of questioning pitted Carrington against Clark. It could be argued that it used the stereotype of the overly physical, aggressive Black athlete, as well.
At best, Brennan has a blind spot to the strain racism is putting on Black athletes today – particularly in the WNBA. At worst, she is digging in on that tired trope.
A blind spot can be seen and addressed. An unacknowledged racist narrative, however, will persist.
Molly Yanity does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The 2024 Nobel Prizes in physics and chemistry have given us a glimpse of the future of science. Artificial intelligence (AI) was central to the discoveries honoured by both awards. You have to wonder what Alfred Nobel, who founded the prizes, would think of it all.
We are certain to see many more Nobel medals handed to researchers who made use of AI tools. As this happens, we may find the scientific methods honoured by the Nobel committee depart from straightforward categories like “physics”, “chemistry” and “physiology or medicine”.
We may also see the scientific backgrounds of recipients retain a looser connection with these categories. This year’s physics prize was awarded to the American John Hopfield, at Princeton University, and British-born Geoffrey Hinton, from the University of Toronto. While Hopfield is a physicist, Hinton studied experimental psychology before gravitating to AI.
The chemistry prize was shared between biochemist David Baker, from the University of Washington, and the computer scientists Demis Hassabis and John Jumper, who are both at Google DeepMind in the UK.
There is a close connection between the AI-based advances honoured in the physics and chemistry categories. Hinton helped develop an approach used by DeepMind to make its breakthrough in predicting the shapes of proteins.
The physics laureates, Hinton in particular, laid the foundations of the powerful field known as machine learning. This is a subset of AI that’s concerned with algorithms, sets of rules for performing specific computational tasks.
Hopfield’s work is not particularly in use today, but the backpropagation algorithm (co-invented by Hinton) has had a tremendous impact on many different sciences and technologies. Both laureates’ contributions concern neural networks, a model of computing that mimics the human brain’s structure and function to process data. Backpropagation allows scientists to “train” enormous neural networks. While the Nobel committee did its best to connect this influential algorithm to physics, it’s fair to say that the link is not a direct one.
Training a machine-learning system involves exposing it to vast amounts of data, often from the internet. Hinton’s advance ultimately enabled the training of systems such as GPT (the technology behind ChatGPT), and the AI algorithms AlphaGo and AlphaFold, developed by Google DeepMind. So, backpropagation’s impact has been enormous.
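For readers curious what “training with backpropagation” means in practice, the toy sketch below shows the idea: run inputs forward through a small network, measure the error, then push the error gradient backwards through the layers to nudge every weight. It is not drawn from the laureates’ work; the network size, data and learning rate are invented for illustration.

```python
# Minimal illustration of backpropagation on a toy problem (learning XOR).
# Everything here is an invented example, not code from Hopfield, Hinton or DeepMind.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # hidden layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: apply the chain rule from the output back towards the inputs
    # (cross-entropy loss, so the output-layer gradient is simply p - y).
    g2 = (p - y) / len(X)
    gW2, gb2 = h.T @ g2, g2.sum(axis=0, keepdims=True)
    g1 = (g2 @ W2.T) * h * (1 - h)
    gW1, gb1 = X.T @ g1, g1.sum(axis=0, keepdims=True)

    # Gradient descent: nudge every weight against its gradient.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(np.round(p, 2).ravel())  # should approach [0, 1, 1, 0]
```

Large systems apply the same loop, only with billions of weights and vast training datasets, which is why an efficient way of computing these gradients mattered so much.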
DeepMind’s AlphaFold 2 solved a 50-year-old problem: predicting the complex structures of proteins from their molecular building blocks, amino acids.
Every two years, since 1994, scientists have been holding a contest to find the best ways to predict protein structures and shapes from the sequences of their amino acids. The competition is called Critical Assessment of Structure Prediction (CASP).
For the past few contests, CASP winners have used some version of DeepMind’s AlphaFold. There is, therefore, a direct line to be drawn from Hinton’s backpropagation to Google DeepMind’s AlphaFold 2 breakthrough.
David Baker used a computer program called Rosetta to achieve the difficult feat of building new kinds of proteins. Both Baker’s and DeepMind’s approaches hold enormous potential for future applications.
Attributing credit has always been a controversial aspect of the Nobel prizes. A maximum of three researchers can share a Nobel. But big advances in science are collaborative. Scientific papers may have 10, 20, 30 authors or more. More than one team might contribute to the discoveries honoured by the Nobel committee.
This year we may have further discussions about the attribution of the research on the backpropagation algorithm, which has been claimed by various researchers, as well as about the general attribution of a discovery to a field like physics.
We now have a new dimension to the attribution problem. It’s increasingly unclear whether we will always be able to distinguish between the contributions of human scientists and those of their artificial collaborators – the AI tools that are already helping push forward the boundaries of our knowledge.
In the future, could we see machines take the place of scientists, with humans being consigned to a supporting role? If so, perhaps the AI tool will get the main Nobel prize with humans needing their own category.
Nello Cristianini is affiliated with the University of Bath, and the author of two books that cover the topics of this article, The Shortcut (CRC Press, 2023) and Machina Sapiens (Mulino, 2024).
Globular cluster NGC 2005. ESA/Hubble & Nasa, F. Niederhofer, L. Girardi, CC BY-SA
As I finished my PhD in 1992, the universe was full of mystery – we didn’t even know exactly what it was made of. One could argue that cosmologists had made little progress in our understanding of these basic facts since the discovery of the cosmic microwave background (CMB), the afterglow of the Big Bang, in the 1960s.
I left the UK after my doctoral studies to begin a research career in the US, where I was lucky to be recruited to work on a new experiment called the Sloan Digital Sky Survey (SDSS). This new survey embraced advances in digital technologies with the ambition of measuring the “redshifts” (how light becomes more red if a source appears to move away from you) of a million galaxies.
These redshifts were then used to measure distances, and allowed cosmologists to map the three-dimensional structure of the universe.
One cosmic puzzle in the 1980s, based on the pioneering CfA Redshift Survey of Margaret Geller and John Huchra, was the significant lumpiness of galaxies, and therefore matter, in our cosmic neighbourhood. Galaxies were clustered together across a wide range of scales, with evidence for coherent “superclusters” of galaxies spanning over 30 million light years in length.
This article is part of our series Cosmology in crisis? which uncovers the greatest problems facing cosmologists today – and discusses the implications of solving them.
It was important to know how such superclusters could have formed from the smooth CMB, as it would tell us the total amount of matter in the universe and, more intriguingly, what that matter was made of. That was assuming the only force in play was gravity.
By the end of the first phase of the SDSS, we had achieved our goal of a million redshifts. This data was used to discover many superclusters across the universe, including the amazing “Sloan Great Wall”, which remains one of the largest known coherent structures in the universe, over a billion light years in length.
Type Ia supernova remnant. Nasa/CXC/U.Texas
I am lucky to have lived through this amazing era of cosmic discovery around the turn of the century. Surveys like SDSS, combined with new observations of the CMB and searches for distant exploding stars known as Type Ia Supernovae (SNeIa), coincided to deliver an emphatic answer to the question: “What is the universe made of?”
The discovery of dark energy
From 1999 to 2004, the cosmological community came together to agree that the universe was 5% normal (baryonic) matter, 25% dark matter (unknown, invisible matter), and 70% “dark energy” (an expansive force) – essentially a cosmological constant, which was first postulated by Einstein. The discovery that the universe was dominated by this constant energy shocked everyone, especially as Einstein had called the cosmological constant his “biggest blunder”.
Today, cosmologists still agree this is the most likely make-up of our universe. But observational cosmologists like me have refined our measurements of these cosmic variables significantly – reducing the errors on these quantities.
The latest numbers from the Dark Energy Survey (DES) indicate that 31.5% of the universe is matter (a combination of dark and normal), with the remainder being dark energy assuming a cosmological constant. The error on this measurement is just 3%.
Knowing these numbers to higher precision will hopefully help cosmologists understand why the universe is like this. Why would we expect to have 70% of the universe today as “dark” (can’t be seen via electromagnetic radiation) and not associated with “matter” like everything else in the universe?
The origin of this dark energy remains the biggest challenge to physics, even after 20 years of intense study.
Intriguing measurements
Like me, a few cosmologists have become distracted by other problems over the last two decades. However, 2024 could be the start of a new era of discovery. This year, cosmologists published new results based on two of our best cosmological probes.
The first probe consists of exploding stars dubbed “SNeIa”. As these stars have a narrow range of masses, their explosions can be well calibrated, giving cosmologists a predictable brightness that can be seen far away. By comparing the known brightness of these SNeIa to their redshifts, we can determine the expansion history of the universe. These objects were, in fact, critical for discovering that the expansion of our universe is accelerating.
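The article doesn’t spell out the relation, but the standard formula behind this comparison is the distance modulus, which links a supernova’s observed (apparent) magnitude m, its known intrinsic (absolute) magnitude M and its luminosity distance d_L:

$$ m - M = 5\log_{10}\!\left(\frac{d_L}{10\ \mathrm{pc}}\right) $$

Measuring m for objects of known M gives their distances; plotting those distances against redshift traces how the expansion rate has changed over cosmic time.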
The second probe works by looking at Baryon Acoustic Oscillations (BAO) – relics of predictable sound waves in the plasma (charged gas) of the early universe, before the CMB. These are now frozen into the large-scale structure of galaxies around us. Like SNeIa, their predictable size can be compared with their observed size today to measure the expansion history of the universe.
Recently, DES reported its final SNeIa results from over a decade of work, detecting and characterising many thousands of supernova events. While these SNeIa results are consistent with the orthodox view that the universe is dominated by a cosmological constant, they do leave open the tantalising possibility of new physics – namely, that the dark energy could be varying with cosmic time.
That said, scientists are trained to be sceptical, and there are many reasons to distrust a single experiment, single observation, or even a single set of cosmologists!
Cosmologists now go to extraordinary lengths to “blind” their results from themselves during analysis of the data, only revealing the answer at the last moment. This blinding is done to avoid unconscious human biases affecting the work, which could possibly encourage people to get the answer they believe they should see.
This is why repeatability of results is at the heart of all science. In cosmology, we cherish the need for multiple experiments checking and challenging each other.
The second result to turn heads was the first BAO measurements from the Dark Energy Spectroscopic Instrument (DESI), successor to the SDSS. The first DESI map of the cosmos is deeper and denser than the original SDSS. Its first BAO results are intriguing – the data alone is still consistent with a cosmological constant, but with hints of a possible time-varying dark energy when combined with other data sources.
DESI in the dome of the Nicholas U. Mayall 4-meter Telescope at the Kitt Peak National Observatory. wikipedia, CC BY-SA
In particular, when DESI analyses the combination of its BAO results with the final DES SNeIa data, the significance of a time-varying dark energy increases to 3.9 sigma (a measure of how unusual a set of data is if a hypothesis is true) – roughly a 1-in-10,000 chance of being a statistical fluke.
Most of us would take such odds, but scientists have been hurt before by systematic errors within their data that can mimic such statistical certainty. Particle physicists therefore demand a discovery standard of 5 sigma for any claims of new physics – or less than a one in a million chance of being wrong!
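For readers who want the conversion behind these thresholds: under the common two-sided Gaussian convention, the chance of a fluctuation at least n sigma away from the expected value, if the hypothesis is true, is

$$ p = 2\left[1 - \Phi(n)\right], $$

where Φ is the standard normal cumulative distribution function. This gives roughly 1 in 10,000 for 3.9 sigma and about 1 in 1.7 million for 5 sigma (with the one-sided convention particle physicists often use, about 1 in 3.5 million).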
As scientists will say: “Extraordinary claims require extraordinary evidence.”
Mindboggling implications
Are we entering a new era of cosmological discovery? If so, what would it mean?
The answer to my first question is probably yes. The next few years will be fun for cosmologists, with new data and results due from the European Space Agency’s Euclid mission. Launched last year, it is already scanning the sky with unprecedented accuracy.
Likewise, DESI will get more and better data, while the European Southern Observatory starts its own massive redshift survey in 2025. Then you have the Rubin Observatory in Chile coming online soon. Combining these datasets should prove beyond doubt if dark energy varies with cosmic time.
If it does, it implies there is less dark energy now than in the past. This could be caused by many things but, interestingly, it could signify the end of a present, accelerated phase of the expansion of the universe.
It also implies that dark energy is probably not a cosmological constant thought to be due to the background energy associated with empty space. According to quantum mechanics, empty space isn’t really empty, with particles popping in and out of existence creating something we call “vacuum energy”. Ironically, predictions of this vacuum energy do not agree with our cosmological observations by many orders of magnitude.
So, if we did discover that dark energy varies over time, it might explain why observations are at odds with quantum mechanics, which is an extremely well-tested theory. This would suggest the assumption in the standard model of cosmology, that dark energy is constant, needs a rethink. Such a realisation may help solve other mysteries about the universe – or pose new ones.
In short, the new cosmological observations coming this decade will stimulate a new era of physical thinking. Congratulations to my younger cosmologists: it is your era to have fun.
Pennsylvania has many slogans and nicknames. “The Keystone State.” “State of Independence.” “Home of beer, chocolate, liberty and Taylor Swift.” And now: “centre of the political universe”.
According to recent analysis by political statistician Nate Silver, how Pennsylvania swings on November 5 is likely to determine the next leader of the free world. If Kamala Harris wins the state, her odds of taking the White House reach 91%. If Trump wins, his odds skyrocket to 96%.
That’s how much Pennsylvania’s 19 electoral votes matter (270 are needed to win the Electoral College), and how much the state is a bellwether nationally for how each candidate is performing with “must-win” voters.
Nearly every statewide poll conducted in Pennsylvania (PA) in the last month shows a statistical tie in the presidential contest. FiveThirtyEight forecasts in its simulations that Harris would win the state 54 times out of 100 elections and Trump 46 times, meaning the state is a virtual toss-up.
In 2016, Trump pulled off a narrow upset in PA, defeating Democrat Hillary Clinton 48.2 to 47.5%. The victory cracked the crucial “Blue Wall,” alongside Michigan and Wisconsin, which paved Trump’s path to the White House. In 2020, President Joe Biden, thanks partly to touting his family’s roots in the working-class city of Scranton, beat Trump in Pennsylvania 50 to 48.8%. In the last 10 elections, Pennsylvania has selected the eventual occupant of the Oval Office eight times.
Beyond the race for the White House, arguably there’s nowhere else with a more high-stakes race. Most notably, incumbent Democratic Senator Bob Casey has been exchanging barbs with Republican challenger Dave McCormick in an election that could tip the balance of the US Congress.
Bellwether state
Democratic political strategist James Carville once quipped that Pennsylvania is Philadelphia and Pittsburgh, with Alabama in between. Today, one could say it’s the Land of Walmart, Tractor Supply Co. and Fox News v the Land of Starbucks, Lululemon stores and MSNBC.
Zooming out, an electoral map of the state looks a lot like that of the country: vast swaths of Republican red in the rural, central parts of the state, and dashes of Democratic deep blue in the east and the west denoting its population centres.
Pennsylvania reflects the political realignment of both the Democratic and Republican parties in the last decade plus. Predominantly white, blue-collar Americans have gravitated to the Republican party. Meanwhile affluent urbanites have remade the Democratic party, formerly a base for the working class, into the party of the college educated and those who are less likely to be religious. But the Democrats still pick up 49% of the non-college educated and their share of the suburban vote has been rising.
Neither presidential candidate, however, is writing off key constituencies in PA. The Harris team has opened up 50 headquarters across Pennsylvania in an effort to make inroads in conservative, rural communities. Meanwhile, Trump has made a major play for Black voters and had looked like he was on track to win the highest support from Black voters of any Republican presidential candidate in history.
Particularly up for grabs are moderate suburbanites, such as those on Philadelphia’s “Main Line” (an area of well-off suburbs) and in upscale outskirts of the state capital of Harrisburg, who tend to be more liberal on social issues and conservative on economic issues.
Democrats have a slight edge in overall registration numbers in PA, at 44% compared to Republicans at 40% (12% of Pennsylvanians identify as independents). However, the registration advantage for Democrats is the thinnest it’s been in decades.
Big spending and big issues
As 2024’s biggest electoral prize, no state has been bombarded with more cash and attention than PA. Harris and Trump have criss-crossed the state for months at locations such as the Pennsylvania Farm Show Complex (a huge agricultural showground) and at union rallies.
Harris and her allies have spent US$21.2 million (£16.9 million) on political ads in Pennsylvania (that’s three times what they’ve spent in Georgia, twice what they’ve spent in Michigan and 18 times what they’ve spent in North Carolina). To match, Trump and his allies have doled out $20.9 million in PA (twice what they’ve spent in Georgia, three times than they’ve spent in Michigan and eight times what they’ve spent in North Carolina).
Dollars have funnelled into negative ads galore on the many issues that Americans more broadly face, including inflation and the cost of living crisis, crime, abortion and immigration. The war in Ukraine has featured as an especially central issue for Pennsylvania’s large Polish community in an attempt by the Democrats to harness historic fears about Russia.
No topic, however, has sparked more controversy than fracking, the process of extracting oil and gas from underground rock. PA has become a national leader in fracking, triggering outrage among environmentalists, even as advocates tout the industry as an enormous wealth and job creator for the state.
Harris, who declared as a Democratic presidential primary candidate in 2019 that: “There’s no question I’m in favor of banning fracking,” now says “let me be absolutely clear, as I’ve been when I said it back in 2020, I will not ban fracking”. Trump has unequivocally championed fracking as part of his “drill, baby, drill” message on lowering prices and creating domestic energy independence.
What’s in store
If Pennsylvania’s presidential race is anywhere near as tight as the polls suggest, a winner might not be announced in Pennsylvania, or the country, on election night. With the counting of absentee and overseas ballots (and the possibility of a recount), the process could drag on for days, if not weeks.
That’s one reason why both sides are already “lawyered-up” in anticipation of litigious combat. In 2020, the US Supreme Court declined to intervene in a case in Pennsylvania that tested rules surrounding the timing of when mail-in votes could still be counted. However, other aspects of electoral protocols or the integrity of ballots could again be challenged.
Already in 2024, Pennsylvania has been politically consequential. The first assassination attempt against Trump occurred in the tiny town of Butler, PA. Harris’s decision to snub popular state governor Josh Shapiro as her running mate also raised concerns, and could lead to considerable second-guessing if she loses PA and the presidency. Pennsylvania also hosted the one (and likely only) debate between Harris and Trump.
Whether Harris or Trump ends up as president will depend on whether their political stars align. Either way, those stars revolve around Pennsylvania, the centre of the political universe.
Thomas Gift does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Heather Ewart, Postdoctoral Researcher, Evolutionary Biology, University of Manchester
Conservation biologist Rebecca Cliffe fits an accelerometer backpack to a wild three-fingered sloth to measure its movement. The Sloth Conservation Foundation, CC BY-NC-ND
Sloths are more vulnerable to the rising temperatures associated with climate change than other mammals, due to their unique physiology.
In a new study, my colleagues and I found that sloths’ ability to adapt to warming temperatures varies between the cooler, high-altitude and warmer, low-altitude forests of Costa Rica.
Unlike most mammals, sloths do not actively regulate their body temperature. Like reptiles, they rely heavily on ambient temperature to do so. This affects all aspects of their survival, including digestion, metabolism and movement. Combined with their extremely low-calorie, relatively inflexible leaf-based diet, these traits mean sloths have much less energy at their disposal than most other mammals.
As rising ambient temperatures push sloths’ body temperatures up, their metabolic rates increase. Sloths whose metabolic rates climb sharply face a greater risk of reduced survival in a warming climate than other sloths.
The author, Heather Ewart, returns a wild three-fingered sloth back to its point of capture following the application of a GPS tracking collar and accelerometer. Heather Ewart, CC BY-NC-ND
Together with colleagues, including the founder of UK-based Sloth Conservation Foundation Rebecca Cliffe, I found that their degree of vulnerability depends on the altitude of the forests where each sloth originates from.
We calculated the metabolic rates of high- and low-altitude sloths across a range of temperatures using a method called respirometry. This involves placing a sloth (comfortably) in a large, closed box to measure how much oxygen it consumes at each temperature within an allotted time period.
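As a rough sketch of how such a measurement turns into a metabolic rate, the calculation below uses invented chamber volumes, oxygen readings and trial lengths, not values from our study, and leans on the standard approximation that consuming a litre of oxygen releases about 20 kJ of energy.

```python
# Illustrative closed-chamber respirometry calculation. All numbers are made up,
# and real analyses also correct for temperature, pressure, humidity and CO2.
chamber_air_volume_l = 100.0   # air space around the animal in the sealed box
o2_fraction_start = 0.2095     # oxygen fraction at the start of the trial
o2_fraction_end = 0.2070       # oxygen fraction at the end of the trial
trial_duration_h = 1.0

# Oxygen consumed = drop in oxygen fraction times the chamber's air volume.
vo2_l_per_h = chamber_air_volume_l * (o2_fraction_start - o2_fraction_end) / trial_duration_h

# Convert oxygen use to energy at roughly 20 kJ per litre of O2 consumed.
metabolic_rate_kj_per_h = vo2_l_per_h * 20.0

print(f"VO2 ~ {vo2_l_per_h:.2f} L/h, metabolic rate ~ {metabolic_rate_kj_per_h:.1f} kJ/h")
```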
Lowland sloths were able to slow their metabolic rate when temperatures became too hot. This is an important survival mechanism that may benefit these populations as climate change continues.
Highland sloths were unable to slow their metabolic rate, which increased with temperature and became critical above 32°C. Highland sloths are at another disadvantage – cooler, high-altitude forests tend to be smaller due to the slower growth rate of trees at higher elevations coupled with habitat loss. Highland sloths are therefore much less able to migrate and are more restricted than lowland sloths.
Sloths can’t adapt their metabolism quickly so are at risk from rising temperatures. Rebecca Cliffe, CC BY-NC-ND
Sloths with higher metabolic rates use more energy, so they need to eat more food to produce more energy. However, due to their extremely slow rates of food intake and digestion, sloths take much longer to process food into energy than other mammals. Essentially, sloths cannot simply eat more food to match their energy requirements or achieve “energy balance” – the state where calories consumed equals calories burnt through physical activity.
Combined with inflexible migration options, the restricted metabolism of highland sloths makes them especially vulnerable to climate change. However, while lowland sloths appear to have more flexible metabolic responses to warming temperatures, they won’t be able to escape the effects of climate change if temperature increases are too extreme, putting their survival at risk as well.
There is a considerable lack of data on the current status and abundance of sloths. No comprehensive, long-term population monitoring has been conducted at a scale that reflects the true challenges sloths face.
Conserving cooler microclimates
My team of ecologists, who have been studying sloth behaviour and abundance across Costa Rica for 15 years, are concerned about how sloths are being affected by climate change. Areas once highly populated are now devoid of sloths, driven primarily by habitat loss and fragmentation resulting from extensive destruction of rainforests.
Costa Rica has transformed into a predominantly urban society over the past 40 years, with its urban footprint increasing by 112%. In the Talamanca province, where our team currently tracks wild sloths, urban sprawl has increased substantially with an estimated 3,000 sloths lost annually. Electrocution is one of the leading causes of admissions to wild animal sanctuaries in Costa Rica, partly because sloths use power lines to cross between fragmented forests in certain places.
A two-fingered sloth uses power lines over a busy road to move between trees. Heather Ewart, CC BY-NC-ND
Both native sloth species of Costa Rica are now listed as conservation concerns. Globally, an estimated 40% of all sloth species are threatened with extinction. Climate change poses a serious threat – and sloth conservation efforts need to take this into account. We predict that rising temperatures will have devastating consequences for sloths’ ability to maintain their energy balance and survive.
Sloth conservation is crucial, as they play a vital role in keeping the rainforest ecosystem healthy. Sloths are herbivores (plant eaters) that help regulate plant growth and recycle nutrients. They are an integral part of the food web, hosting a diverse ecosystem of unique organisms in their fur and serving as prey for other animals, such as ocelots and jaguars.
Protecting sloths is an incredibly complex challenge. Right now, natural habitats must be preserved and restored to support cooler microclimates. Particularly in vulnerable high-altitude regions, remaining forest fragments should be reconnected by building wildlife corridors – strips of natural habitat that connect fragmented areas and allow animals to move more easily.
Sloth conservation can only be achieved by addressing the root issue: climate change. A global, coordinated effort is required, with strict adherence to international climate accords such as the Paris agreement to limit global warming to below 1.5°C and prevent irreversible damage to rainforests.
If climate change continues unchecked, sloths won’t be able to migrate like other species. Once their environment becomes too hot, their survival is unlikely. Sloth conservation is directly linked to the actions humanity now takes to preserve our planet.
Heather Ewart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
But some risks of AI are still poorly understood. These include the very particular risks to Indigenous knowledges and communities.
There’s a simple reason for this: the AI industry and governments have largely ignored Indigenous people in the development and regulation of AI technologies. Put differently, the world of AI is too white.
AI developers and governments need to urgently fix this if they are serious about ensuring everybody shares the benefits of AI. As Aboriginal and Torres Strait Islander people like to say, “nothing about us, without us”.
Indigenous concerns
Indigenous peoples around the world are not ignoring AI. They are having conversations, conducting research and sharing their concerns about the current trajectory of AI and related technologies.
A well-documented problem is the theft of cultural intellectual property. For example, users of AI image generation programs such as DeepAI can artificially generate artworks in mere seconds which mimic Indigenous styles and stories of art.
This demonstrates how easy it is for someone using AI to misappropriate cultural knowledges. Such AI-generated images are assembled from large data sets of publicly available imagery to create something new. But they miss the storying and cultural knowledge present in our art practices.
AI technologies also fuel the spread of misinformation about Indigenous people.
The internet is already riddled with misinformation about Indigenous people. The long-running Creative Spirits website, which is maintained by a non-Indigenous person, is a prominent example.
Generative AI systems are likely to make this problem worse. They often conflate us with other Indigenous peoples around the world. They also draw on inappropriate sources, including Creative Spirits.
During last year’s Voice to Parliament referendum in Australia, “no” campaigners also used AI-generated images depicting Indigenous people. This demonstrates the role of AI in political contexts and the harm it can cause to us.
Another problem is the lack of understanding of AI among Indigenous people. Some 40% of the Aboriginal and Torres Strait Islander population in Australia don’t know what generative AI is. This reflects an urgent need to provide relevant information and training to Indigenous communities on the use of the technology.
We must think more expansively about AI and all the other computational systems in which we find ourselves increasingly enmeshed. We need to expand the operational definition of intelligence used when building these systems to include the full spectrum of behaviour we humans use to make sense of the world.
Key to achieving this is the idea of “Indigenous data sovereignty”. This would mean Indigenous people retain sovereignty over their own data, in the sense that they own and control access to it.
The National Agreement on Closing the Gap also affirms the importance of Indigenous data control and access.
This is reaffirmed at a global level as well. In 2020, a group of Indigenous scholars from around the world published a position paper laying out how Indigenous protocols can inform ethically created AI. This kind of AI would centralise the knowledges of Indigenous peoples.
For example, the guardrails include the need to ensure additional transparency and make extra considerations when it comes to using data about or owned by Aboriginal and Torres Strait Islander people, to “mitigate the perpetuation of existing social inequalities”.
Indigenous Futurisms
Grace Dillon, a scholar from a group of North American Indigenous people known as the Anishinaabe, first coined the term “Indigenous Futurisms”.
Ambelin Kwaymullina, an academic and futurist practitioner from the Palyku nation in Western Australia, defines it as:
visions of what-could-be that are informed by ancient Aboriginal cultures and by our deep understandings of oppressive systems.
These visions, Kwaymullina writes, are “as diverse as Indigenous peoples ourselves”. They are also unified by “an understanding of reality as a living, interconnected whole in which human beings are but one strand of life amongst many, and a non-linear view of time”.
So how can AI technologies be informed by Indigenous ways of knowing?
A first step is for industry to involve Indigenous people in creating, maintaining and evaluating the technologies – rather than asking them retrospectively to approve work already done.
Governments also need to do more than highlight the importance of Indigenous data sovereignty in policy documents. They need to meaningfully consult with Indigenous peoples to regulate the use of these technologies. This consultation must aim to ensure ethical AI use, by organisations and everyday users alike, that honours Indigenous worldviews and realities.
AI developers and governments like to claim they are serious about ensuring AI technology benefits all of humanity. But unless they start involving Indigenous people more in developing and regulating the technology, their claims ring hollow.
Tamika Worrell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation (Au and NZ) – By Kate Fitz-Gibbon, Professor (Practice), Faculty of Business and Economics, Monash University
A new United Nations report provides a global snapshot of the abuse women athletes experience, identifies who is most likely to perpetrate the violence, and makes recommendations on what should be done to promote the safety of women and girls.
Off the back of the Paris Olympic and Paralympic Games, where Australia cheered on the record-breaking success of women athletes, the report should be a wake-up call for Australian sports and clubs.
Abuse of women and girls in sport
Drawing on more than 100 submissions and consultations with 50 people, the report finds:
Women and girls in sport face widespread, overlapping and grave forms and manifestations of violence at all levels.
These abusive behaviours include coercive control, physical violence, corporal punishment, verbal abuse, social exclusion, bullying and identity abuse.
The impacts of this violence are wide-ranging: physical injuries, insomnia, fear and anxiety, reduced self-confidence, substance misuse, eating disorders, self-harm, and declines in athletic performance and participation.
These impacts can extend well beyond the athlete’s involvement in their sport.
Women and girls also experience economic violence in sport: for example, when women athletes do not have control over their earnings, or when they are coerced into signing exploitative contracts.
The report notes women athletes also experience heightened rates of abusive and harassing behaviours in online settings. This includes sexual harassment and threats, racism, ridicule, body shaming, sexualised comments, stalking, doxing and revenge porn.
Perpetrators are wide-ranging. They include coaches, managers, spectators, teachers, peers, sports lawyers, referees and medical staff.
The report describes sexual harassment and abuse as “rampant” and acknowledges the high rate of sexual violence, particularly within relationships between coaches and athletes.
This includes grooming of younger athletes, where power and control dynamics, combined with an abuse of trust between an adult and child athlete, provide the conditions for sexual abuse to proliferate.
It follows a 2023 report from the UN Educational, Scientific and Cultural Organisation (UNESCO) and UN Women, which estimates 21% of girls worldwide have experienced at least one form of sexual abuse in sport as children.
Is this a problem in Australia?
Australians often pride themselves on how sport brings the nation, communities and families together, but we too have a wide-reaching problem in this area.
In 2021, a review of Swimming Australia found women athletes and coaches had experienced physical and mental abuse, while the “Change the Routine” review of Gymnastics Australia revealed child abuse and neglect, misconduct, bullying, sexual harassment and assault towards gymnasts.
More recently, a review by Sports Integrity Australia into Australian volleyball, which revealed systemic verbal and physical abuse of athletes, prompted a formal apology to past athletes.
And a 2024 Deakin University study showed 87% of Australian sportswomen had experienced online harm within the past year.
A lack of accountability and consequence
In the traditionally male-dominated culture of sport, abusers have often gone unsanctioned, while those who experience abuse often leave their sport early and with significant consequences to their careers, financial stability, and mental and physical wellbeing.
There are examples where abuse has been minimised or ignored by those in leadership to protect the reputation of the team or the sporting code, and where coaches have been able to move between teams without consequence.
The first complaint against Larry Nassar, the former US gymnastics team doctor, was made in 1997. Despite this, and the numerous other complaints that followed, Nassar remained in his position with USA Gymnastics and Michigan State University until 2015. In December 2017 he was convicted of numerous counts of sexual abuse of minors.
Outcomes of investigations by sporting bodies often remain confidential. For example, in 2017 the Fremantle Dockers and the AFL were criticised for their use of a “confidentiality agreement” in settling a sexual harassment matter.
This impunity demonstrates a significant lack of accountability.
The barriers to reporting abuse in sport
There are significant barriers to reporting.
Women elite athletes may fear losing their funding and sponsorship deals if they report abuse.
In Australia, the Royal Commission into Institutional Responses to Child Sexual Abuse heard child athletes are most at risk of experiencing abuse by a person of authority (such as a coach) when they are about to achieve their best performance.
As the UN Report states, it is at this time that “there is very little to gain by revealing the abuse and too much to lose”.
This must change.
When sporting codes put a desire to win above safeguarding and accountability, the clear message sent to victims is that violence is excusable, and that sporting heroes are immune to the consequences of their abusive actions.
Raising awareness around early identification of abusive behaviours is key.
The UN report reveals athletes often feel uncertain and uncomfortable in identifying early forms of abusive behaviours and lack information on what supports are available to them when they do.
Ensuring a suite of reporting pathways is also critical. There is no one-size-fits-all model.
Why Australia should take the lead
Participating in sport has significant benefits. But sport settings must be safe for all.
Many sporting organisations and clubs have recognised the problem of abuse of women and girls in sport, rolling out respect and responsibility programs, sexual harassment policies, as well as clearer reporting and investigation policies.
This is a good start but must be built on.
Indeed, the safety of women and girls must be a key focus of the Australian High Performance “Win Well” strategy for the Brisbane 2032 Olympic and Paralympic Games.
Recent initiatives and policy changes should be monitored to examine how they work and whether they deliver safer outcomes for women and girls in sport at all levels.
Responses to proven allegations of abuse must hold perpetrators to account. And critically, investigations must be independent, transparent and timely.
The UN report reminds us “sports is a microcosm of society”.
Violence against women and children in Australia has been declared a national emergency – ensuring the safety of women and girls in all sport settings is one critical component of addressing that crisis.
Kate has received funding for family violence-related research from the Australian Research Council, Australian Institute of Criminology, Australia’s National Research Organisation for Women’s Safety, the Victorian, Queensland and ACT governments, the Commonwealth Department of Social Services and the Victorian Women’s Trust. This piece is written by Kate Fitz-Gibbon in her role at Monash University and is wholly independent of Kate Fitz-Gibbon’s role as Chair of Respect Victoria.
How can a desert burn? Australia’s vast deserts aren’t just sand dunes – they’re often dotted with flammable spinifex grass hummocks. When heavy rains fall, grass grows quickly before drying out. That’s how a desert can burn.
When our Karajarri and Ngurrara ancestors lived nomadic lifestyles in what’s now called the Great Sandy Desert in northwestern Australia, they lit many small fires in spinifex grass as they walked. Fires were used seasonally for ceremonies, signalling to others, flushing out animals, making travel easier (spinifex is painfully sharp), cleaning campsites, and stimulating fresh vegetation growth ready for foraging or luring game when people returned a few months later. The result was a patchwork desert.
After colonisation, this ended. Without management, the spinifex and grassy deserts began to burn in some of the largest fires in Australia.
But now the work of caring for desert country (pirra) with fire (jungku, or warlu) has begun again. We are Karajarri and Ngurrara rangers who care for 110,000 square kilometres of the Great Sandy Desert. Our techniques have changed – we now drop incendiaries from helicopters to cover more distance – but our goals are similar. Guided by our elders, we are combining traditional knowledge with modern technologies and science to refine how we manage fire in a changing world.
In research published today, we and our co-authors paired analysis of historic fire patterns with five years of fauna surveys. Put together, we found mature spinifex was important for creatures of the Great Sandy Desert – and that means we should burn small and often, like our ancestors.
Fire and sand
In the 1940s and ‘50s, the Royal Australian Air Force photographed the Great Sandy Desert from the air. These photos were taken before our people moved to settlements and pastoral stations between the 1960s and ’80s.
That means these aerial photographs capture a time when traditional burns were still happening.
Our ranger teams are studying these photographs to draw out the fire patterns produced by our ancestors.
These photographs tell a story. Our ancestors burned many small areas, creating a complicated patchwork of spinifex at different stages of regrowth after fire.
But they also left a great deal of mature spinifex – large old hummocks that hadn’t burnt for years. This patchwork of burned and unburned areas made it hard for bushfires to spread far and fast. When traditional burning practices stopped, bushfires became common.
The knowledge contained in these old photos is very valuable. The images give us clear goals for our fire management. We combine this with guidance from elders and information on fuel loads across Country gleaned from remote sensing and weather modelling, to plan our fire management.
We could see where our ancestors burnt (white patches) in the Karajarri Indigenous Protected Area in this aerial photo from the late 1940s. National Library of Australia, CC BY-NC-ND
What does fire mean for desert creatures?
Australian deserts are remarkably biodiverse, especially in reptiles. In a single clump of mature spinifex, you might find up to 18 different species of lizard. Then there are snakes and goannas, as well as mammals such as marsupial moles found only in the arid zone.
Spinifex hummocks are crucial to many of these species, offering shelter, food and prey. What does fire do to spinifex-dwellers?
On this topic, scientific knowledge is playing catch-up with Indigenous traditional knowledge, but we see value in using the scientific method – a universal language – to help us manage Country and tell other people about what we are doing.
The past few decades have been a time of major change for the Great Sandy Desert. Cultural burns stopped, and feral animals such as camels and cats grew in number. As a result, many native animals are disappearing or already gone.
We think larger, more frequent fires play a part. Our Karajarri and Ngurrara rangers are using science to make sure our patchwork burns – known as right-way fire – are good for native animals.
Between 2018 and 2022, we surveyed reptiles and mammals from 32 sites across the Karajarri and Warlu Jilajaa Jumu (Ngurrara) Indigenous Protected Areas in the desert. We caught almost 3,800 mammals and reptiles from 77 species. Reptiles made up the lion’s share, with 66 species. We also recorded when fire had come through, and how big the burnt patches were.
The data showed reptile species care a lot about where they live. Some prefer recently burned areas, where the spinifex is gone or still very small. Others like old spinifex: huge hummocks that have gone unburned for years. Still others like mid-sized spinifex.
We found mammals were rare in recently burned areas and more common in mature spinifex. We also found more mammal diversity in areas with fine-scale patchworks of fires.
This shows we must keep our fires small, burning different areas at different times, and protect enough mature spinifex.
This patchwork approach will help spinifex hopping mice, desert mice, planigales, dunnarts, and dozens of small reptile species to survive. But it will also help now-rare game species, the marlu (red kangaroo in Walmajarri language) and pijarta (emu in Karajarri).
Our research tells us returning to the traditional burning techniques of our ancestors is still the right thing to do – even though the desert has changed.
Karajarri Rangers talk about the Pirra Jungku-Warlu project.
Rare finds
Scientists have rarely surveyed the Great Sandy Desert. As a result, our surveys have turned up important findings.
The kaluta (Dasykaluta rosamondae), for instance, is a feisty little carnivorous marsupial. We found it on the Canning Stock Route, 500km further north than the distribution known to scientists.
Similarly, we found the threatened Dampierland sandslider (Lerista separanda), a vividly coloured skink, in the Karajarri Indigenous Protected Area, expanding its distribution 450km southeast. Karajarri people call sandsliders winkajurta, or “lice eaters”, because in the old days you could use them to hunt lice in your hair.
Our research gives us confidence that bringing back traditional burns helps desert creatures. We want more people to know that right-way fire is part of healthy Country, including our own mob and tourists who pass through, so we can all look after the desert.
In our work, we take our old people out onto Country to get advice on burning and their knowledge of animals. As one told us, seeing the old ways return made him “real happy [and] to come alive” – just like the desert.
We thank Karajarri and Ngurrara Traditional Owners and acknowledge past and present elders. Thanks to the many rangers and coordinators who helped in these surveys, and our partners: Environs Kimberley, Charles Darwin University, Western Australian Department of Biodiversity, Conservation and Attractions, and Indigenous Desert Alliance. Special thanks to Hamsini Bijlani, our project coordinator.
Braedan Taylor and other rangers in this project were funded by the Australian Government’s Indigenous Protected Area Program, Indigenous Ranger Program, and the National Environmental Science Program via the Threatened Species Recovery Hub; by the Western Australia State Natural Resource Management, Aboriginal Ranger Program, Lotterywest, and via in-kind support from the Department of Biodiversity, Conservation and Attractions; by the Indigenous Desert Alliance/10Deserts; and by the Australian Research Council.
Jacqueline Shovellor receives funding from the same sources as the lead author.
Frankie McCarthy receives funding from the same sources as the lead author.
Sarah Legge receives funding from the Australian Research Council. The work reported here was partly funded by the National Environmental Science Program via the Threatened Species Recovery Hub.
Thomas Narda receives funding from the same sources as the lead author.
Vitamin D is essential for bone health, immune function and overall wellbeing. And it becomes even more crucial as we age.
New guidelines from the international Endocrine Society recommend people aged 75 and over should consider taking vitamin D supplements.
But why is vitamin D so important for older adults? And how much should they take?
Young people get most vitamin D from the sun
In Australia, most people under 75 can get enough vitamin D from the sun throughout the year. Those who live in the top half of Australia – and all of us during summer – only need to have skin exposed to the sun for a few minutes on most days.
The body can only produce a certain amount of vitamin D at a time, so staying in the sun any longer than needed won’t increase your vitamin D levels further – but it will increase your risk of skin cancer.
But it’s difficult for people aged over 75 to get enough vitamin D from a few minutes of sunshine, so the Endocrine Society recommends people get 800 IU (international units) of vitamin D a day from food or supplements.
Why you need more as you age
This is higher than the recommendation for younger adults, reflecting the increased needs and reduced ability of older bodies to produce and absorb vitamin D.
Overall, older adults also tend to have less exposure to sunlight, which is the primary source of natural vitamin D production. Older adults may spend more time indoors and wear more clothing when outdoors.
As we age, our skin also becomes less efficient at synthesising vitamin D from sunlight.
The kidneys and the liver, which help convert vitamin D into its active form, also lose some of their efficiency with age. This makes it harder for the body to maintain adequate levels of the vitamin.
All of this combined means older adults need more vitamin D.
Deficiency is common in older adults
Despite their higher needs for vitamin D, people over 75 may not get enough of it.
Studies have shown one in five older adults in Australia have vitamin D deficiency.
In higher-latitude parts of the world, such as the United Kingdom, almost half don’t reach sufficient levels.
This increased risk of deficiency is partly due to lifestyle factors, such as spending less time outdoors and insufficient dietary intakes of vitamin D.
It’s difficult to get enough vitamin D from food alone. Oily fish, eggs and some mushrooms are good sources of vitamin D, but few other foods contain much of the vitamin. While foods can be fortified with vitamin D (margarine, some milk and cereals), these may not be readily available or consumed in amounts sufficient to make a difference.
In some countries such as the United States, most of the dietary vitamin D comes from fortified products. However, in Australia, dietary intakes of vitamin D are typically very low because only a few foods are fortified with it.
Why vitamin D is so important as we age
Vitamin D helps the body absorb calcium, which is essential for maintaining bone density and strength. As we age, our bones become more fragile, increasing the risk of fractures and conditions like osteoporosis.
Keeping bones healthy is crucial. Studies have shown older people hospitalised with hip fractures are 3.5 times more likely to die in the next 12 months compared to people who aren’t injured.
Vitamin D may also help lower the risk of respiratory infections, which can be more serious in this age group.
There is also emerging evidence for other potential benefits, including better brain health. However, this requires more research.
According to the society’s systematic review, which summarises evidence from randomised controlled trials of vitamin D supplementation in humans, there is moderate evidence to suggest vitamin D supplementation can lower the risk of premature death.
The society estimates supplements can prevent six deaths per 1,000 people. When considering the uncertainty in the available evidence, the actual number could range from as many as 11 fewer deaths to no benefit at all.
Should we get our vitamin D levels tested?
The Endocrine Society’s guidelines suggest routine blood tests to measure vitamin D levels are not necessary for most healthy people over 75.
There is no clear evidence that regular testing provides significant benefits, unless the person has a specific medical condition that affects vitamin D metabolism, such as kidney disease or certain bone disorders.
Routine testing can also be expensive and inconvenient.
In most cases, the recommended approach for over-75s is to consider a daily supplement, without the need for testing.
You can also try to boost your vitamin D by adding fortified foods to your diet, which might lower the dose you need from supplementation.
Even if you’re getting a few minutes of sunlight a day, a daily vitamin D supplement is still recommended.
Elina Hypponen receives funding from the National Health and Medical Research Council, the Medical Research Future Fund, the Australian Research Council, and Arthritis Australia.
Joshua Sutherland’s studentship is funded by the Australian Research Training Program Scholarship, and he volunteers on the board for the Australasian Association and Register of Practicing Nutritionists.
Sexual harassment is all too common in hospitality and tourism. One Australian survey found almost half of the respondents had been sexually harassed, compared to about one in three in workplaces more generally.
Hospitality and tourism are marked by intense and close interpersonal interactions and dismissive treatment by some customers, including verbal and physical aggression, bullying and sexual suggestions.
Workers who are young, female, low-paid and casual are especially vulnerable.
The widely held view that “the customer is always right” gives customers power. The power imbalance is magnified where tipping makes up a substantial part of workers’ earnings.
What newspapers report
To examine how sexual harassment is reported, we identified about 2,000 newspaper articles, published between 2017 and 2022 across a number of countries, dealing with the treatment of hotel room attendants, airline cabin crew and massage therapists. We zeroed in on 273 for closer analysis.
This was a period in which public awareness of sexual harassment climbed with the rise of the #MeToo movement, and media coverage of the issue probably peaked.
Media coverage matters because of its effect on public opinion.
Computer-assisted thematic analysis revealed four types of coverage, some overlapping: legal matters, celebrities, power dynamics, and calls to action.
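To give a concrete, if simplified, sense of what computer-assisted coding of this kind can involve, here is a toy Python sketch of keyword-based theme tagging. The four themes match our findings above, but the keyword lists and the example headline are invented for illustration – they are not the study’s actual coding scheme or software.

```python
# Toy keyword-based theme tagger. The keyword lists are invented for
# illustration and are NOT the study's actual coding scheme.
THEMES = {
    "legal matters": ["lawsuit", "tribunal", "court", "settlement"],
    "celebrities": ["celebrity", "famous", "star"],
    "power dynamics": ["manager", "customer", "tips", "complaint"],
    "calls to action": ["must", "should", "reform", "policy"],
}

def tag_themes(article_text: str) -> list[str]:
    """Return every theme whose keywords appear in the article text.
    An article can match several themes at once, mirroring the
    overlapping coverage types described above."""
    text = article_text.lower()
    return sorted(theme for theme, keywords in THEMES.items()
                  if any(word in text for word in keywords))

print(tag_themes("Hotel room attendant's lawsuit against famous guest heads to court"))
# ['celebrities', 'legal matters']
```

In practice, researchers typically refine such keyword dictionaries iteratively and validate the machine-assigned codes by hand, which is why we read 273 articles closely rather than relying on automation alone.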
The language used varied according to the countries in which the newspapers were located.
In the United States and the United Kingdom, the accused were often described by their social or economic status, with cases involving famous people getting a lot of attention. In Asia and Africa, the reports focused on basic details such as the offender’s age and where they lived.
Women infantilised
But universally we found the terms used to describe victims were highly gendered and dated in ways that suggested subservience and undermined their professional skills. Cabin crew were called “air hostesses”. Room attendants were called “maids”.
Framing these professionals as modern-day servants risks fostering and perpetuating the notion that sexual harassment is simply to be expected.
Reports involving celebrity harassers highlighted victims’ narratives with emotionally charged quotes using words such as “awful” and “terrible”. These words were perhaps intended to evoke empathy for the victims but also serve to further victimise them.
Female aggression under-reported
Across all the coverage we examined, women featured heavily as victims but never as aggressors. This is a gender bias that does not match established statistics, which show almost one-quarter of aggressors are women.
This misrepresentation creates a skewed understanding of who commits and suffers from sexual harassment. It has the potential to discourage victims of harassment by women from coming forward.
It’s important for the tourism industry to foster secure and dignified working conditions. But it is also important that the media reflect the actual behaviour of aggressors and victims.
Done better, reporting could help
The media could play a crucial role in bringing about better policies and practices in these industries by emphasising the severe consequences of ignoring the problem and the benefits of taking proactive steps.
More respectful and accurate reporting might be able to help drive lasting change, making a positive difference in the lives of the skilled workers on whom so many of us depend.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
New Zealand’s economy has been described as a “housing market with bits tacked on”. Buying and selling property is a national sport fuelled by the rising value of homes across the country.
But the wider public has little understanding of how those property valuations are created – despite their being a key factor in most banks’ decisions about how much they are willing to lend for a mortgage.
Automated valuation models (AVM) – systems enabled by artificial intelligence (AI) that crunch vast datasets to produce instant property values – have done little to improve transparency in the process.
These models started gaining traction in New Zealand in the early 2010s. The early versions used limited data sources like property sales records and council information. Today’s more advanced models include high-quality geo-spatial data from sources such as Land Information New Zealand.
AI models have improved efficiency. But the proprietary algorithms behind those AVMs can make it difficult for homeowners and industry professionals to understand how specific values are calculated.
In our ongoing research, we are developing a framework that evaluates these automated valuations. We have looked at how the figures should be interpreted and what factors might be missed by the AI models.
In a property market as geographically and culturally varied as New Zealand’s, these points are not only relevant — they are critical. The rapid integration of AI into property valuation is no longer just about innovation and speed. It is about trust, transparency and a robust framework for accountability.
AI valuations are a black box
In New Zealand, property valuation has traditionally been a labour-intensive process. Valuers would usually inspect properties, make market comparisons and apply their expert judgement to arrive at a final value estimate.
But this approach is slow, expensive and prone to human error. As demand for more efficient property valuations increased, the use of AI brought in much-needed change.
But the rise of these valuations models is not without its challenges. While AI offers speed and consistency, it also comes with a critical downside: a lack of transparency.
AVMs often operate as “black boxes”, providing little insight into the data and methodologies that drive their valuations. This raises serious concerns about the consistency, objectivity and transparency of these systems.
What exactly the algorithm is doing when an AVM estimates a home’s value is not clear. Such opaqueness has real-world consequences, perpetuating market imbalances and inequities.
Without a framework to monitor and correct these discrepancies, AI models risk distorting the property market further, especially in a country as diverse as New Zealand, where regional, cultural and historical factors significantly influence property values.
Transparency and accountability
A recent discussion forum on AI governance and property valuations, which brought together real estate industry insiders, legal researchers and computer scientists, highlighted the need for greater accountability when it comes to AVMs. Transparency alone is not enough: trust must be built into the system.
This can be achieved by requiring AI developers and users to disclose data sources, algorithms and error margins behind their valuations.
Additionally, valuation models should incorporate a “confidence interval” – a range of prices that shows how much the estimated value might vary. This offers users a clearer understanding of the uncertainty inherent in each valuation.
But effective AI governance in property valuation cannot be achieved in isolation. It demands collaboration between regulators, AI developers and property professionals.
Bias correction
New Zealand urgently needs a comprehensive evaluation framework for AVMs, one that prioritises transparency, accountability and bias correction.
This is where our research comes in. We repeatedly resample small portions of the data to account for situations where property value data do not follow a normal distribution.
This process generates a confidence interval showing a range of possible values around each property estimate. Users are then able to understand the variability and reliability of the AI-generated valuations, even when the data are irregular or skewed.
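The sketch below illustrates the resampling idea in Python: a minimal percentile bootstrap around a mean valuation error, assuming a simple list of errors (AVM estimate minus later sale price). The figures are invented, and this is a sketch of the general technique rather than our full framework.

```python
import random

def bootstrap_interval(errors, n_resamples=2000, level=0.95):
    """Percentile bootstrap: resample the valuation errors with
    replacement many times, record the mean of each resample, and
    read the interval off the sorted means. This makes no assumption
    that the errors follow a normal distribution."""
    means = []
    for _ in range(n_resamples):
        sample = random.choices(errors, k=len(errors))  # resample with replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    lower = means[int(n_resamples * (1 - level) / 2)]
    upper = means[int(n_resamples * (1 + level) / 2) - 1]
    return lower, upper

# Invented valuation errors in NZ$ (AVM estimate minus sale price)
errors = [12_000, -8_500, 30_000, -2_000, 5_500, -15_000, 9_000, 22_500]
print(bootstrap_interval(errors))  # interval varies slightly from run to run
```

A wide interval here is itself useful information: it tells a homeowner or lender how much trust to place in a single headline estimate.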
Our framework goes beyond transparency. It incorporates a bias correction mechanism that detects and adjusts for consistently overvalued or undervalued estimates within AVM outputs – for example, regional disparities or the undervaluation of particular property types.
By addressing these biases, we ensure valuations that are not only accountable and auditable but also fair. The goal is to avoid the long-term market distortions that unchecked AI models could create.
The rise of AI auditing
But transparency alone is not enough. The auditing of AI-generated information is becoming increasingly important.
New Zealand’s courts now require a qualified person to check information generated by AI and subsequently used in tribunal proceedings.
In much the same way financial auditors ensure accuracy in accounting, AI auditors will play a pivotal role in maintaining the integrity of valuations.
Based on earlier research, we are auditing automated valuation model estimates by comparing them with the market transaction prices of the same houses in the same period.
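As a hedged sketch of what such an audit can look like (the property figures are invented, and the two metrics are common valuation-audit choices rather than necessarily those in our earlier research): the median estimate-to-price ratio flags systematic over- or under-valuation, while the median absolute percentage error summarises overall accuracy.

```python
from statistics import median

def audit_avm(estimates, sale_prices):
    """Compare AVM estimates with transaction prices for the same homes
    in the same period. A median ratio well above 1 suggests systematic
    overvaluation; well below 1, systematic undervaluation."""
    ratios = [est / price for est, price in zip(estimates, sale_prices)]
    abs_pct_errors = [abs(est - price) / price
                      for est, price in zip(estimates, sale_prices)]
    return median(ratios), median(abs_pct_errors)

# Invented example: AVM estimates vs. actual sale prices (NZ$)
estimates = [810_000, 655_000, 1_020_000, 497_000]
sales = [780_000, 690_000, 1_000_000, 510_000]
ratio, mdape = audit_avm(estimates, sales)
print(f"median ratio: {ratio:.3f}, median absolute % error: {mdape:.1%}")
```

An auditor would run this kind of check separately by region and property type, since an overall median near 1 can hide offsetting biases in different market segments.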
It is not just about trusting the algorithms but trusting the people and systems behind them.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation (Au and NZ) – By Nicholas Godfrey, Senior Lecturer, College of Humanities, Arts and Social Sciences, Flinders University
The Texas Chain Saw Massacre is a product of a unique time in American filmmaking, when independent exploitation films were nastier than ever, and equally capable of piercing the mainstream consciousness.
Tobe Hooper’s 1974 film arrived in a recently transformed exhibition landscape. The 1967 outcry over onscreen violence in Bonnie and Clyde marked the end of Hollywood’s Motion Picture Production Code and the introduction of film ratings.
Films like Easy Rider (1969) elevated the standing of formerly disreputable exploitation fare within Hollywood. By 1973, The Exorcist was packing out cinemas and producing lines around city blocks with the promise of the most unremitting horror film yet made.
The Texas Chain Saw Massacre was shot quickly on a shoestring budget, financed in part by the newly-formed Texas Film Commission. The film assembled its cast and crew from Austin’s circles of recent college graduates and dropouts.
Its plot is straightforward enough: a group of young people are stranded when they run out of gas in rural Texas. They are terrorised and subsequently murdered by an eccentric local family, including the chainsaw-wielding Leatherface – a nonverbal, childlike giant who wears masks made from the skin of his flayed victims.
We learn this family lost their jobs at the local slaughterhouse with the introduction of bolt gun technology, leading them to sell roadside meat made from their human victims.
This detail has inspired a range of thematic interpretations for the film, encompassing commentary on class and family, gender and animal rights.
The film lays bare the horrors of meat production, inflicted on human victims. The family home is the site where these themes come into conflict.
Porn and violence on screen
The Texas Chain Saw Massacre was picked up by the Bryanston Distributing Company. In 1972, Bryanston was the distributor for the theatrical release of the hardcore pornographic film Deep Throat. The film’s success shifted popular discourse around pornography, and helped Bryanston widen the theatrical release for The Texas Chain Saw Massacre.
In subsequent years, media reported on alleged abusive on-set conditions on Deep Throat, along with claims Bryanston was connected with organised crime. Director Hooper, and many of the Chain Saw Massacre cast, alleged they never received their share of box office from the distributor.
The Texas Chain Saw Massacre’s proximity to Deep Throat stoked controversy, conflating concern about increasingly extreme depictions of sex and violence onscreen.
Two years earlier, young filmmaker Wes Craven had transitioned from making pornography to horror films. His low-budget rape-revenge exploitation film The Last House on the Left (1972) was originally developed as a hardcore pornographic film, an approach abandoned when it entered production.
Unlike Craven’s notorious film, The Texas Chain Saw Massacre is not overtly sexualised. While there may be a sexual undertone to Leatherface’s pursuit of Sally and her companions, it does not escalate to onscreen acts of sexual violence.
Regardless, the film drew condemnation, particularly in the United Kingdom, where it was banned, and later figured in public debates about the censorship of “video nasties” in the 1980s.
For my part, I remember encountering The Texas Chain Saw Massacre at the video rental store as a child: its title, cover and R-rating promised horrors beyond comprehension, many years before I actually saw the film itself.
Horrors implied, rather than shown
Beyond its controversies, The Texas Chain Saw Massacre played an important role in the developing field of horror film studies. It figures prominently in Robin Wood’s taxonomy of “reactionary” horror movies (which uphold traditional values) and “progressive” horror movies, which take a more ambivalent stance on the figure of the monster, challenging conservative social values. Wood counts The Texas Chain Saw Massacre in the latter category.
It is also central to Carol J. Clover’s influential codification of the “final girl” narrative trope, in which a sole young woman is able to withstand the monster’s onslaught.
Alongside Halloween (1978), The Texas Chain Saw Massacre helped steer the trajectory of American horror films in the 1980s.
Halloween is situated within the manicured surroundings of suburbia, and conveys its menace through the slick technical qualities of its gliding camera, and John Carpenter’s staccato synth score.
By contrast, The Texas Chain Saw Massacre locates its horror in the backroads and decrepit farmhouses of central Texas. The idea of Texas looms large, connoting a place of lawlessness, violence and danger.
Hooper punctuates his long shots with extreme close ups via rapid editing. The film’s most grotesque horrors are implied, rather than shown. Its most visceral impact comes from its extended chase sequences, and via its soundtrack: Sally’s piercing screams, and Leatherface’s ever-present chainsaw.
While The Texas Chain Saw Massacre spawned several sequels and inspired even more imitators over the years, from the Ramones to Wolf Creek (2005) and X (2022), it has rarely been matched in its intensity and its harrowing, visceral impact.
Nicholas Godfrey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
In Plant City, 20 miles inland from Tampa, at least 35 people had been rescued by dawn, City Manager Bill McDaniel said. While the storm wasn’t as extreme as feared, McDaniel said his city had flooded in places and to levels he had never seen. Traffic signals were out. Power lines and trees were down. The sewage plant had been inundated, affecting the public water supply.
Evacuating might seem like the obvious move when a major hurricane is bearing down on your region, but that choice is not always as easy as it may seem.
Evacuating from a hurricane requires money, planning, the ability to leave and, importantly, a belief that evacuating is better than staying put.
I recently examined years of research on what motivates people to leave or seek shelter during hurricanes as part of a project with the Federal Emergency Management Agency and the Natural Hazards Center. I found three main reasons that people didn’t leave.
Evacuating can be expensive
Evacuating requires transportation, money, a place to stay, the ability to take off work days ahead of a storm and other resources that many people do not have.
With 1 in 9 Americans facing poverty today, many have limited evacuation options. During Hurricane Katrina in 2005, for example, many residents did not own vehicles and couldn’t reach evacuation buses. That left them stranded in the face of a deadly hurricane. Nearly 1,400 people died in the storm, many of them in flooded homes.
Two days ahead of landfall, Milton was a Category 5 hurricane. About 5 million people were under evacuation orders, and highways were crowded.
Gas shortages and traffic jams can leave people stranded on highways and unable to find shelter before the storm hits. This happened during Hurricane Floyd in 1999 as 2 million Floridians tried to evacuate.
People who experienced past evacuations or saw news video of congested highways ahead of Hurricane Milton might not leave for fear of getting stuck.
Health, pets and being physically able to leave
The logistics of evacuating are even more challenging for people who are disabled or in nursing homes. Additionally, people who are incarcerated may have no choice in the matter – and the justice system may have few options for moving them.
Evacuating nursing homes, people with disabilities or prison populations is complex. Many shelters are not set up to accommodate their needs. In one example during Hurricane Floyd, a disabled person arrived at a shelter, but the hallways were too narrow for their wheelchair, so they were restricted to a cot for the duration of their stay. Moving people whose health is fragile, and doing so under stressful conditions, can also worsen health problems, leaving nursing home staff to make difficult decisions.
At least 700 people stayed in chairs or on air mattresses at River Ridge Middle/High School in New Port Richey, Fla., during Hurricane Milton. AP Photo/Mike Carlson
But failing to evacuate can also be deadly. During Hurricane Irma in 2017, seven nursing home residents died in the rising heat after their facility lost power near Fort Lauderdale, Florida. In some cases, public water systems are shut down or become contaminated. And flooding can create several health hazards, including the risk of infectious diseases.
In a study of 291 long-term care facilities in Florida, 81% sheltered residents in place during the 2004 hurricane season because they had limited transportation options and faced issues finding places for residents to go.
Some shelters allow small pets, but many don’t. This high school-turned-shelter in New Port Richey, Fla., had 283 registered pets. AP Photo/Mike Carlson
People with pets face another difficult choice – some choose to stay at home for fear of leaving their pet behind. Studies have found that pet owners are significantly less likely to evacuate than others because of difficulties transporting pets and finding shelters that will take them. In destructive storms, it can be days to weeks before people can return home.
Risk perception can also get in the way
People’s perceptions of risk can also prevent them from leaving.
If people have experienced a hurricane before that didn’t do significant damage, they may perceive the risks of a coming storm to be lower and not leave.
Video from across Florida after Hurricane Milton shows flooding around homes, trees down and other damage. At least five people died in the storm, and more than 3 million homes lost power.
People had fears about safety and whether shelter environments could meet their needs. For example, religious minorities were not sure whether shelters would be clean, safe, have private places for religious practice, and food options consistent with faith practices. Diabetics and people with young children also had concerns about finding appropriate food in shelters.
How to improve evacuations for the future
There are ways leaders can reduce the barriers to evacuation and shelter use. For example:
Building more shelters able to withstand hurricane force winds can create safe havens for people without transportation or who are unable to leave their jobs in time to evacuate.
Arranging more shelters and transportation able to accommodate people with disabilities and those with special needs, such as nursing home residents, can help protect vulnerable populations.
Opening shelters to accommodate pets with their owners can also increase the likelihood that pet owners will evacuate.
Public education can be improved so people know their options. Clearer risk communication on how these storms are different than past ones and what people are likely to experience can also help people make informed decisions.
Being prepared saves lives. Many areas would benefit from better advance planning that takes into account the needs of large, diverse populations and can ensure populations have ways to evacuate to safety.
Carson MacPherson-Krutsky works for the Natural Hazards Center (NHC) at the University of Colorado Boulder. She receives grant and contract funding for her work at NHC through the National Science Foundation, the U.S. Army Corps of Engineers, the Federal Emergency Management Agency, and other funders.
It wasn’t that long ago that the Federal Reserve, the central bank for the United States, was worrying that annual inflation would surpass 9% in the middle of 2022. The U.S. economy hadn’t seen prices rise that fast since the 1980s, and almost everyone feared that a series of interest rate hikes would plunge the economy into a recession.
What a difference two years can make.
Inflation cooled to 2.4% in September 2024, according to consumer price index data released by the Labor Department on Oct. 10. That’s down from 2.5% the previous month and in line with market expectations of 2.3% to 2.4%. The inflation rate peaked at 9.1% in June 2022 – a 41-year high.
The news brings the Fed – and its chair, Jerome Powell – much closer to reaching its 2% inflation target. It also marks the fourth straight month that year-over-year price changes have been below 3% and the third consecutive month of declining inflation rates.
Speaking as an economist and finance professor, I think this could be a big deal for the Federal Reserve, which next meets – and could again cut interest rates – in November.
Fodder for another rate cut?
The Fed has what’s called a dual mandate: It pursues both low inflation and stable employment, two goals that can sometimes be at odds. Cutting interest rates can help employment but worsen inflation, while hiking them can do the opposite.
Since inflation started to take off during the COVID-19 pandemic, Fed officials have emphasized that their job isn’t done until price increases are back down to the 2% target.
But in light of recent labor market news, Powell and his colleagues have changed their messaging a bit. This indicates that the upside risks of inflation are lower than the risks associated with a weakening labor market.
And in September, the Fed slashed the federal funds rate by 0.5 percentage point, or 50 basis points – the first cut since it began hiking rates in March 2022. The move came as unemployment had ticked up to 4.3% in July, job openings plummeted and broader labor markets weakened.
Increasingly optimistic markets
Equity markets rallied on the news of the September rate cut. Investors believe reductions in the federal funds rate – a benchmark rate that influences mortgage rates, auto loans, credit card rates and home equity lines of credit – will spur increases in investment and consumption, guiding the economy to a so-called soft landing instead of a recession.
Between today’s inflation news and the unexpectedly sunny jobs report on Oct. 4, investors and markets have a lot of news to digest as they consider what path interest rates will take in the months ahead. Many continue to believe that we may well see two 25-basis-point cuts by the end of 2024 – and so do I.
Jason Reed does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The world is facing multiple — potentially catastrophic — crises, including inequality, poverty, food insecurity, climate change and biodiversity loss. These issues are interconnected and require systemic solutions, as changes in one system affect others.
However, human systems have largely failed to acknowledge their connection to ecological systems. Most modern societies have dominating and exploitative relationships with nature, which are underpinned by imperialist and dualistic thinking that divides living beings into racial, gender, class or species hierarchies.
Our current mindset, with its focus on competition, growth and profit, has been a key contributor to social and ecological crises. Even more alarming is that this mindset has depleted nature to the point that it may soon fail to sustain human and non-human lives entirely.
While current sustainability efforts, such as those outlined in Earth for All: A Survival Guide for Humanity — a collaboration between scientists and economists from around the world — and the United Nations’ Pact for the Future offer pathways for action, they often fall short. These initiatives, though well-intentioned, remain rooted in a business-as-usual approach.
This isn’t enough. What’s needed is a transformative shift in how we interact with the natural world. A reciprocal relationship between humans and nature, where humans give back to the environment as much as we take, is essential. Sustainable and equitable well-being must be placed at the centre of human societies.
Central to this transformation is the need to ensure good lives for all while staying within the Earth’s planetary boundaries. These boundaries are the limits within which humanity can safely operate without causing irreversible environmental harm. This will require a new economic mindset that enables people to live with nature, instead of destroying it.
Change is daunting, but possible
Though the scale of change needed may seem daunting, it’s achievable and already in motion in some places. In many communities around the world, like Puget Sound on the northwestern coast of Washington state, people are already living in ways that allow humans and ecosystems to flourish.
In other places, such as Ecuador and the Sumas First Nation in British Columbia, new possibilities are emerging for building human societies that operate within the planetary boundaries. Humans are exceptionally adaptable and have the advantage of foresight and the ability to transform entire systems through ethical collaboration.
Individual action is one necessary element to accelerate this shift. Change often starts small, with individuals and small groups adjusting their lives. But while personal choices do matter, individuals must also push for systemic changes in their communities, organizations, and broader society.
To make nature-connected living more widely accessible, collaborative, equitable and intentional efforts are needed. This involves intercultural communication, collaboration and open dialogue to ensure diverse perspectives are considered in decision-making processes.
Thoughtfully considering the direct and indirect impacts of our actions, including the immediate and long-term consequences of any decision, will help create more equitable and sustainable systems.
People looking to create meaningful change should seek to support a range of groups and organizations dedicated to environmental and social justice. This includes Indigenous leaders and treaty protocols, local authorities, environmental advocacy groups, community organizations or labour unions. A good example of this is the work being done by the UNESCO-recognized biosphere reserves.
Creativity — the essence of adaptability — flourishes when different knowledge systems are woven together. However, this must be done ethically and involve consensual and collaborative exchanges to ensure no knowledge system is exploited or undervalued. We must be careful to avoid repeating the mistakes of imperialism and domination that have created our current planetary crises.
In addition to rethinking how we approach knowledge, rebuilding strong, interconnected relationships between humans and nature also means rethinking our technological systems.
Technological innovation has been used to exploit the Earth for short-term gains, but it also holds great potential for positive change. It can either maintain or disrupt the status quo, depending on how we use it.
To build healthier relationships between people and nature, human societies need to adopt a systems thinking approach. This approach looks at the bigger picture, considering the ecological, cultural, political and social aspects of technology in an integrated manner. It ensures that innovation is guided by principles of sustainability and equity.
Climate change, biodiversity loss and resource depletion are not isolated problems — they are part of an interconnected web of crises that demand urgent and comprehensive action.
Incremental approaches are not enough to address the scale of these looming threats. Purposefully co-ordinated actions are needed to shift the current trajectory away from exploitation to one of mutual benefit for humans and the natural world.
What is needed is radical transformation aimed at creating just and flourishing relationships between nature and humanity for the benefit of all current and future life on Earth.
Christie Manning, Associate Professor of Environmental Studies at Macalester College; Jacqueline Corbett, Professor of Information Systems, Université Laval; and Simone Bignall, Senior Researcher at the University of Technology Sydney, co-authored this article.
Liette Vasseur receives funding from the New Frontiers in Research Fund’s Exploration program in Canada.
Anders Hayden and Mike Jones do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Volunteers with Savage Freedoms Relief Operation coordinate aid in Swannanoa on Oct. 7, 2024, after Hurricane Helene severely damaged the North Carolina town. (Allison Joyce/AFP via Getty Images)
The volunteers who take part in search-and-rescue operations and then support disaster survivors belong to organizations that have become more formal and established over the past decade. That’s what we found after spending more than four years volunteering alongside eight of these groups to better understand their role and the motivations of the people who participate in these efforts.
While we volunteered with these organizations, we observed them in action and interviewed their leaders and volunteers to learn why they were making the time and taking personal risks to save others. Many cited their personal values, expressed their need to belong to a group, and said it had helped them find a sense of purpose. Others shared that they were motivated by their personal circumstances and experiences or feelings of guilt, or that this kind of volunteering gave them a deep sense of satisfaction.
“I lost everything I owned in Katrina. They deemed my family’s property uninhabitable,” said a boater we’ll call Dylan to protect his anonymity. “I can’t sit here after knowing what it is to lose everything.”
Some volunteers said that one reason they have repeatedly done this work is to counter stereotypes about people who engage in these efforts. When he hears people say, “Oh, you’re just out there doing it for the spotlight,” Roger told us, he wants to respond: “Yeah, dude. If you flood, call me, I’ll come get you.”
While the organizations we researched were based in Louisiana and Texas, the volunteers who participate in these efforts come from across the U.S. and, in some cases, other countries. One volunteer we met was from the United Kingdom.
After Hurricane Helene destroyed roads in western North Carolina, rescue squads delivered aid by donkey and helicopter.
Why it matters
Since Hurricane Katrina struck the Gulf Coast in 2005, volunteers have been participating in search-and-rescue efforts after big disasters – especially in that region. But these volunteers come from all over.
Many of these groups are known as “Cajun Navy” organizations. Whether or not these organizations use the Cajun Navy branding in their names they share, a common mission of helping others in emergencies.
These volunteers don’t just operate boats and helicopters. Some serve as dispatchers, handle logistics, and run social media operations.
Over time, some of the organizations have begun to team up with local emergency responders, signing memorandums of understanding with them. They partner with government agencies while assisting in disaster response and relief efforts, but they primarily operate with autonomy and are able to travel where they perceive the need is greatest.
Many of the eight groups we studied have become nonprofits or are in the process of doing so.
How we do our work
We were able to do this research by becoming volunteers ourselves. We took part in dispatch operations on the ground and remotely, and we supported logistics planning. We also observed and, in some cases, participated in search-and-rescue training and operations in the water and on land.
The Research Brief is a short take about interesting academic work.
Kyle Breen received funding from the National Science Foundation for this research. He currently holds funding from the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research for other research projects.
J. Carlee Purdum received funding from the National Science Foundation for this research and for other ongoing projects.
Source: The Conversation – Canada – By Majid Hashemi, Adjunct assistant professor, Economics Department, Queen’s University, Ontario
The social cost of carbon (SCC) is an essential tool for climate decision-making around the world. The SCC puts a dollar value on the damage caused by each additional tonne of carbon dioxide (CO2) emitted, helping policymakers weigh the society-wide costs of continued emissions against the benefits of reducing them.
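In stylized form (a sketch of the standard definition, not any agency’s official formula), the SCC of a tonne emitted today is the discounted sum of the extra damages that tonne causes in every future year:

$$\mathrm{SCC}_0 = \sum_{t=0}^{T} \frac{1}{(1+r)^{t}} \, \frac{\partial D_t}{\partial E_0}$$

Here D_t is monetized climate damage in year t, E_0 is today’s emissions and r is the discount rate. A global SCC sums these damages across all countries; a country-level SCC counts only the damages occurring within one country’s borders.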
Most studies focus on the global level, working with aggregate SCC values that sum damages across countries around the world. This global value, however, hides an important nuance. When one looks at SCC values at the country level, a clear pattern emerges: poorer countries have proportionally lower SCCs than richer ones.
The U.S. Environmental Protection Agency (EPA) publishes a widely used global SCC estimate, and the Government of Canada uses the same value after converting it into Canadian dollars. When this global estimate (i.e., the aggregate damages to the entire planet) is broken down into country-specific estimates (i.e., the damages to a particular country), it reveals SCCs of less than US$1 for poor countries.
Does this imply that poorer countries bear lower costs from climate change impacts? Not at all; in fact, the reality is quite the opposite. Studies reveal that the damages associated with climate change are proportionally higher for lower-income countries. These damages are often hidden within SCC values in ways that reveal much about the inequalities of our modern world.
Why is the social cost of carbon lower?
The answer lies in the modelling approach.
To estimate the social cost of carbon, integrated assessment models combine multidisciplinary scientific evidence into a single framework for analyzing climate change damages. These models incorporate “damage functions” that account for the various pathways through which climate change affects societies.
Despite the comprehensive nature of these climate damage models, a critical disparity remains: the monetary value of damages is significantly smaller in poorer countries than in richer ones. Again, this does not mean the impacts are less severe. Rather, it reflects the lower dollar value assigned to losses in these regions because of their lower overall income levels.
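To see how this plays out, here is a minimal sketch in Python. The countries, GDP figures and damage shares below are invented for illustration and are not drawn from any actual SCC model:

# Illustrative only: hypothetical numbers, not from any official SCC model.
# Shows how an equal or even worse proportional GDP loss translates into
# far smaller dollar damages in a lower-income country.

def dollar_damages(gdp_billions: float, damage_share: float) -> float:
    """Convert a proportional GDP loss into monetized damages (US$ billions)."""
    return gdp_billions * damage_share

rich = {"name": "Rich country", "gdp": 20_000.0, "share": 0.02}  # loses 2% of GDP
poor = {"name": "Poor country", "gdp": 300.0, "share": 0.05}     # loses 5% of GDP

for country in (rich, poor):
    loss = dollar_damages(country["gdp"], country["share"])
    print(f'{country["name"]}: {country["share"]:.0%} of GDP lost '
          f'= US${loss:,.0f} billion in damages')

# Rich country: 2% of GDP lost = US$400 billion in damages
# Poor country: 5% of GDP lost = US$15 billion in damages

The poorer country suffers the worse proportional hit, yet contributes roughly 27 times less to a global, dollar-denominated SCC. Its larger relative loss all but disappears once damages are expressed in dollars.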
One of the three studies referenced by the U.S. EPA’s guidance on the SCC finds that climate-change-related agricultural damages and premature deaths account for 45 per cent and 49 per cent of total global damages, respectively. In poorer countries, the monetized value of these damages is likely much lower, given both a comparatively undervalued agricultural sector and a lower ability to pay for life-saving equipment.
Simply put, extreme global economic inequality hides the very real losses and damages experienced by many in poorer countries, because the wealth gap between those countries and richer ones results in a lower relative SCC value.
What does this mean?
To a national policymaker, an almost-zero SCC means that climate-related projects must compete head-to-head with basic-needs projects (e.g., addressing malnutrition) for scarce resources. From the global perspective, this leaves poorer countries with little incentive to allocate resources to the fight against climate change. Poor countries may even come to see their investments in such efforts as little more than donations to richer countries.
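A back-of-the-envelope calculation makes the incentive gap concrete. The numbers below are purely illustrative: the global figure is merely on the order of recent published estimates, and the sub-US$1 domestic figure follows the country-level pattern described above:

# Hypothetical illustration of the incentive gap described above.
global_scc = 190.0            # US$ per tonne of CO2 (illustrative, order of recent estimates)
domestic_scc = 0.80           # US$ per tonne: a poor country's own share (often < US$1)
abatement_tonnes = 1_000_000  # CO2 reduction undertaken by the poor country

global_benefit = global_scc * abatement_tonnes      # US$190 million avoided worldwide
domestic_benefit = domestic_scc * abatement_tonnes  # US$0.8 million avoided at home

print(f"Benefit captured at home: {domestic_benefit / global_benefit:.2%}")
# Benefit captured at home: 0.42%

On these numbers, more than 99 per cent of the benefit of the poorer country’s effort accrues abroad, mostly to richer countries.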
Indeed, from such an SCC-based perspective, most of the avoided damage from any CO2-reduction step a poorer country takes shows up in richer countries’ SCC values, a benefit of which the poorer country itself captures very little. What can be done to address this imbalance?
One answer is to align international development assistance to climate adaptation funds more equitably with these SCC imbalances, ensuring that richer countries, which will benefit more from emission reduction efforts, help bear the burden of supporting poorer countries’ adaptation and mitigation efforts.
While methods for estimating SCC values have become more sophisticated in recent years, addressing the global-versus-country-specific imbalance requires a combination of financial transfers and practical co-operation between richer and poorer nations. This will help ensure that the costs and benefits of global CO2 emissions reductions are shared more equally, accounting for both ethical and economic considerations.
Majid Hashemi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.