What a great question! In fact, this is one of those questions humans will continue to ask until the end of time. That’s because we don’t actually know for sure.
But we can try to imagine what the edge of the universe might be, if there is one.
Looking back in time
Before we begin, we do need to go back in time. Our night sky has looked much the same for all of human history. It’s been so reliable that people all around the world found patterns in the stars and used them to navigate and explore.
To our eyes, the sky looks endless. With the invention of telescopes about 400 years ago, humans were able to see much farther than our eyes alone ever could. They continued to discover new things in the sky. They found more stars, and then eventually started to notice that there were a lot of strange-looking cosmic clouds.
Astronomers gave them the name “nebula” from the Latin word for “mist” or “cloud”.
It was less than 100 years ago that we first confirmed these cosmic clouds or nebulas were actually galaxies. They are just like the Milky Way, the galaxy our own planet is in, but very far away.
What is amazing is that in every direction we look in the universe, we see more and more galaxies. In this James Webb Space Telescope image, which is looking at a part of the sky no bigger than a grain of sand, you can see thousands of galaxies.
It’s hard to imagine there is an edge where all of this stops.
The edge of the universe
However, there is technically an edge to our universe. We call it our “observable” universe.
This is because we don’t actually know if our universe is infinite – meaning it continues forever and ever.
Unfortunately, we might never know because of one pesky thing: the speed of light.
We can only ever see light that’s had enough time to travel to us. Light travels at exactly 299,792,458 metres per second. Even at that speed, it still takes a very long time to cross our universe. Scientists estimate the size of the universe is at least 96 billion light years across, and likely even bigger.
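To get a feel for just how far a single light year is, here is a quick back-of-the-envelope calculation, written as a small Python sketch (the variable names are just for illustration):

```python
speed_of_light_m_per_s = 299_792_458      # speed of light in a vacuum
seconds_per_year = 365.25 * 24 * 60 * 60  # one Julian year, in seconds

# Distance light travels in one year, in metres
metres_per_light_year = speed_of_light_m_per_s * seconds_per_year

print(f"One light year is about {metres_per_light_year:.2e} metres")
```

That works out to roughly 9.46 trillion kilometres – and the observable universe is tens of billions of light years across.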
You can learn a little more about that and our universe as a whole in this video below.
What would we see if there was an edge?
If we were to travel to the very, very edge of the universe we think exists, what would there actually be?
Many other scientists and I theorise that there would just be … more universe!
As I said, there is a theory that our universe doesn’t actually have an edge, and might continue on indefinitely.
But there are other theories, too. If our universe does have an edge, and you cross it, you might just end up in a completely different universe altogether. (That is best saved for science fiction for now.)
Even though there isn’t a straightforward answer to your question, it is precisely questions like these that help us continue to explore and discover the universe, and allow us to understand our place within it. You’re thinking like a true scientist.
Sara Webb does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation (Au and NZ) – By Michael Westaway, Australian Research Council Future Fellow, Archaeology, School of Social Science, The University of Queensland
The NSW Education Standards Authority has announced that teaching of the Aboriginal past prior to European arrival will be excluded from the Year 7–10 syllabus as of 2027.
Since 2012, the topic “Ancient Australia” has been taught nationally in Year 7 as part of the Australian Curriculum. In 2022, a new topic called the “deep time history of Australia” was introduced to provide a more detailed study of 65,000 years of First Nations’ occupation of the continent.
However, New South Wales has surprisingly dropped this topic from its new syllabus, which will be rolled out in 2027. Instead, students will only learn First Nations’ history following European colonisation in 1788.
This directly undermines the Alice Springs (Mparntwe) Education Declaration of 2020. This is a national agreement, signed by education ministers from all jurisdictions, which states:
We recognise the more than 60,000 years [sic] of continual connection by Aboriginal and Torres Strait Islander peoples as a key part of the nation’s history, present and future.
If the planned change to the syllabus goes through, the only Aboriginal history taught to NSW students would be that which reflects the destruction of traditional Aboriginal society. It also means Aboriginal students in NSW will be denied a chance to learn about their deep ancestral past.
The significance of Australia’s deep time past
Bruce Pascoe’s groundbreaking 2014 book Dark Emu (which sold more than 500,000 copies), and the associated documentary, have highlighted an enormous appetite for learning about Australia’s deep time past.
Hundreds of thousands of Australians engaged with Dark Emu. As anthropologist Paul Memmott notes, the book prompted a debate that encouraged a better understanding of Aboriginal society and its complexity.
It also generated research that investigated whether terms such as “hunter-gatherers” are appropriate for defining past Aboriginal society and economic systems.
In schools, teachers have used Pascoe’s book Young Dark Emu to introduce students to sophisticated land and aquaculture systems used by First Peoples prior to colonisation.
The book raises an important question. If you lived in a country that invented bread and the edge-ground axe, whose culture independently developed early trade and complex social living – all without resorting to war over land – wouldn’t you want your children to know about it?
For many students, the history they learn at school is knowledge they carry into their adult lives – and knowledge is the strongest antidote to ignorance. Rather than abandoning the Aboriginal deep time story, schools should be encouraging students to engage with it.
Learning on Country
One of the strengths of the current NSW history syllabus is the requirement for students to undertake a “site study” in Years 8 and 9. Currently, NSW is the only jurisdiction that has made this mandatory.
Site studies are an excellent opportunity for students to learn on Country. Many teachers organise excursions to Aboriginal cultural sites where students can directly engage with local Traditional Owners and Elders.
New South Wales is brimming with sites of cultural significance to Aboriginal people. The map below highlights some of these, ranging from megafauna sites, to extensive fish traps, to the enigmatic rock art galleries and ceremonial engravings (petroglyphs).
How students will miss out
The Ngambaa people and archaeologists from the University of Queensland are currently investigating one of the largest midden complexes in Australia. This complex, located at Clybucca and Stuart’s Point on the north coast, spans some 14 kilometres and dates back to around 9,000 years ago.
Middens, or “living sites”, are accumulations of shell built up over time from thousands of discarded seafood meals. Since the shells help reduce the acidity of the soil, animal bones and plant remains are more likely to be preserved in middens.
For instance, the Clybucca-Stuarts Point midden complex contains remains from seals and dugongs. Both of these animals were once part of the local ecosystem, but no longer are.
The middens also extend back to before the arrival of dingoes, so studying them could help us understand how biodiversity changed once dingoes replaced thylacines and Tasmanian devils on the mainland.
Local school students, especially Aboriginal students, will be actively participating in this cutting-edge research alongside the Ngambaa people, archaeologists and teachers. Among other things, the students will learn how the Ngambaa people sustainably managed land and sea Country over thousands of years during periods of dramatic environmental change.
But innovative programs like this will no longer be as relevant if Australia’s deep time history is removed from the NSW syllabus.
An opportunity for leadership
The study of First Nations archaeological sites, history and cultures tells us a broader human story of continuity and adaptability over deep time. Indigenising the curriculum – wherein Aboriginal knowledge is braided with historical and archaeological inquiry – is a powerful way to reconcile different approaches to understanding the past.
The NSW Education Standards Authority’s proposed changes risk sending young people the message that Australia’s “history” before colonisation is not an important part of the country’s historical narrative.
But there is still time to show leadership – by reversing the decision and by connecting teachers and students to powerful stories from Australia’s deep time past.
Michael Westaway receives funding from the Australian Research Council and Humanities and Social Science at the University of Queensland.
Bruce Pascoe is the author of the texts mentioned in this article, Dark Emu and Young Dark Emu: A Truer History. He also has positions on the boards of Black Duck Foods, the Twofold Aboriginal Corporation and First Languages Australia.
Louise Zarmati receives research funding from the ARC Centre of Excellence for Australian Biodiversity and Heritage.
Long-spined sea urchins have emerged as an environmental issue off Australia’s far south coast. Native to temperate waters around New South Wales, the urchins have expanded their range south as oceans warm. There, they devour kelp and invertebrates, leaving barren habitats in their wake.
Lobsters are widely accepted as sea urchins’ key predator. In efforts to control urchin numbers, scientists have been researching this predator-prey relationship. And the latest research by my colleagues and me, released today, delivered an unexpected result.
We set up several cameras outside a lobster den and placed sea urchins in it. We filmed at night for almost a month. When we checked the footage, most sea urchins had been eaten – not by lobsters, but by sharks.
This suggests sharks have been overlooked as predators of sea urchins in NSW. Importantly, sharks seem to very easily consume these large, spiky creatures – sometimes in just a few gulps! Our findings suggest the diversity of predators eating large sea urchins is broader than we thought – and that could prove to be good news for protecting our kelp forests.
A puzzling picture
The waters off Australia’s south-east are warming at almost four times the global average. This has allowed long-spined sea urchins (Centrostephanus rodgersii) to extend their range from NSW into waters off Victoria and Tasmania.
Sea urchins feed on kelp and in their march south, have reduced kelp cover. This has added to pressure on kelp forests, which face many threats.
Scientists have been looking for ways to combat the spread of sea urchins. Ensuring healthy populations of predators is one suggested solution.
Overseas research on different urchin species has focused on predators such as lobsters and large fish. It found kelp cover can be improved by protecting or reinstating these predators.
But despite this, no meaningful reduction in urchin populations, or increase in kelp growth, has been observed in NSW.
Why not? Could it be that lobsters are not eating urchins in great numbers after all? Certainly, there is little empirical evidence on how often predators eat urchins in the wild.
What’s more, recent research in NSW suggested the influence of lobsters on urchin populations was low, while fish could be more important.
Our project aimed to investigate the situation further.
We tied 100 urchins to blocks outside a lobster den off Wollongong for 25 nights. This tethering meant the urchins were easily available to predators and stayed within view of our cameras.
Then we set multiple cameras to remotely turn on at sunset and turn off after sunrise each day, to capture nocturnal feeding. We used a red-filtered light to film the experiments because invertebrates don’t like the white light spectrum.
We expected our cameras would capture lobsters eating the urchins. But in fact, the lobsters showed little interest in the urchins and ate just 4% of them. They were often filmed walking straight past urchins in search of other food.
Sharks, however, were very interested in the urchins. Both crested horn sharks (Heterodontus galeatus) and Port Jackson sharks (H. portusjacksoni) entered the den and ate 45% of the urchins.
As the footage below shows, sharks readily handled very large urchins (wider than 12 centimetres) with no hesitation.
Until now, it was thought few or no predators could handle urchins of this size. Larger urchins have longer spines, thicker shells and attach more strongly to the seafloor, making them harder to eat.
But the sharks attacked urchins from their spiny side, showing little regard for their sharp defences. This approach differs from other predators, such as lobsters and wrasses, which often turn urchins over and attack them methodically from their more vulnerable underside.
In fact, some sharks were so eager to eat urchins, they started feeding before the cameras turned on at sunset. This meant we had to film by hand.
Footage captured by the researchers showing crested horn sharks eating sea urchins. Horn sharks generally do not pose a threat to humans.
A complex food web
Our experiment showed the effect of lobsters on urchins in the wild is less than previously thought.
This may explain why efforts to encourage lobster numbers have not helped control urchin numbers.
We also revealed a little-considered urchin predator: sharks.
When interpreting these findings, however, a few caveats must be noted.
First, sharks (and lobsters) are not the only animals to prey on urchins. Other predators include bony fishes, and more are likely to be identified in future.
Second, other factors can control urchin numbers, such as storm damage and the influx of fresh water.
And finally, it is unsurprising that we found a key predator when we intentionally searched for it by laying out food. Tethering urchins creates an artificial environment. We don’t know if the results would be replicated in the wild.
And even though we now know some shark species eat sea urchins, we don’t yet know if they can control urchin numbers.
But our research does confirm predators capable of handling large urchins may be more widespread than previously thought.
Jeremy Day received funding from the University of Newcastle, the Ecological Society of Australia, the Royal Zoological Society of New South Wales and the Fisheries Research and Development Corporation.
Should young people be paid less than their older counterparts, even if they’re working the same job? Whether you think it’s fair or not, it’s been standard practice in many industries for a long time.
The argument is that young people are not fully “work-ready” and require more intensive employer support to develop the right skills for their job.
But critics of junior rates disagree. They say the need to be fairly paid for equal work effort, as well as economic considerations such as the high cost of living and ongoing housing crisis, mean paying young adults less based on their age is out of step with modern Australia.
So is there a problem with our current system, and if so, how might we go about fixing it?
What are youth wages?
In Australia, a youth wage or junior pay rate is paid as an increasing percentage of an award’s corresponding full adult wage until an employee reaches the age of 21.
This isn’t the case in every industry – some awards require all adults to be paid the same minimum rates.
But for those not covered by a specific award, as well as those working in industries including those covered by the General Retail Industry Award, Fast Food Industry Award and Pharmacy Industry Award, employees younger than 21 are not paid the full rate.
Why pay less?
Conventionally, junior rates have been thought of as a “training wage”. Younger people are typically less experienced, so as they gain more skills on the job over time, they are paid a higher hourly rate.
But there are a few key problems with this approach. It may no longer be relevant, given many employers now expect their workers to start “job-ready” and there is little consistency in the training they provide.
Training up and developing skills is an important part of building any career. But it isn’t always provided by employers.
Many young workers train themselves in job-related technical education and short courses, often at their own expense and prior to starting work.
Employers reap the benefit of this pre-employment training and so a “wage discount” for younger workers may be irrelevant in this instance.
None of this is to say employers aren’t offering something important when they take on young employees.
Younger workers entering employment relatively early gain more than just a paid job: they become part of a team, with responsibilities and job requirements that support “bigger-picture” life skills.
Those who employ them may be contributing to their broader social and cultural engagement, something that could be considered part of a more inclusive training package. Whether that justifies a significant wage discount is less clear.
There are growing calls for a rethink on the way we compensate young people for their efforts.
An application by the Shop Distributive and Allied Employees’ Association – the union for retail, fast food and warehousing workers – seeks to remove junior rates for adult employees on three key awards. This action will be heard by the Fair Work Commission next year.
Sally McManus, Secretary of the Australian Council of Trade Unions, said the peak union body will lobby the government to legislate such changes if this application fails. The Greens have added their support.
That doesn’t have to mean abolishing youth wages altogether. But 21 years of age is a high threshold, especially given we get the right to major adult responsibilities such as voting and driving by 18.
A transition strategy could consider gradually lowering this threshold, or increasing the wage percentages over time.
Lessons from New Zealand
We wouldn’t be the first to make such a bold change if we did.
Our geographically and culturally close neighbour, New Zealand, removed its “youth wage” in 2008, replacing it with a “first job” rate and a training wage set at 80% of the full award rate.
A common argument against abolishing youth wages – and increasing the minimum wage in general – is that it will stop businesses hiring young people and thus increase unemployment.
But a 2021 study that examined the effects of New Zealand’s experience with increasing minimum wages – including this change – found little discernible difference in employment outcomes for young workers.
The authors did note, however, that New Zealand’s economic downturn post-2008 had a marked effect on the employment of young workers more generally.
It’s easy to see how we arrived at the case for paying younger adults less. But younger workers should not bear the burden of intergenerational inequity by “losing out” on wages in the early part of their working life.
The debate we see now echoes the discussions about equal pay for equal work value in the 1960s and ‘70s over women’s unequal pay.
We were warned that paying women the same as men would cause huge economic dislocation. Such a catastrophe simply did not come to pass.
Kerry Brown is a member of the National Tertiary Education Union.
Recent research showing the richest New Zealanders pay less tax than their counterparts in nine similar OECD countries raises, yet again, serious questions about wealth, equality and fairness.
How unequal is the distribution of income in New Zealand? How do we compare with some of the countries we might benchmark against? And, if we don’t like what we see, can we change it?
The metric most widely used by economists to measure inequality in incomes is called the Gini coefficient (named after the Italian statistician Corrado Gini who developed it).
It brings together income data across all households, typically divided into groupings of 10% or 20% of the total. When there is no inequality of incomes between groups, Gini equals zero. When the top group captures all income, Gini equals 1.
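For readers curious how the number is actually calculated, here is a minimal Python sketch (the function name and the use of decile groupings are mine, not from the research cited) that estimates a Gini coefficient from equal-sized group income shares, using trapezoids under the Lorenz curve:

```python
def gini_from_shares(shares):
    """Estimate the Gini coefficient from income shares of equal-sized
    population groups (e.g. deciles), via the area under the Lorenz curve."""
    shares = sorted(shares)   # Lorenz curve orders groups from poorest to richest
    n = len(shares)
    cum = 0.0                 # cumulative income share so far
    area2 = 0.0               # accumulates the two trapezoid edges for each group
    for s in shares:
        area2 += 2 * cum + s  # edges: cum (before group) and cum + s (after)
        cum += s
    return 1 - area2 / n      # Gini = 1 - 2 * (area under Lorenz curve)

print(round(gini_from_shares([0.1] * 10), 6))         # perfect equality → 0.0
print(round(gini_from_shares([0.0] * 9 + [1.0]), 6))  # top decile takes all → 0.9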
Measuring inequality
The graph below shows Gini coefficients, before taxes and welfare payments (known as “transfers”), for all 37 countries in the OECD in 2019 (before the COVID pandemic disrupted household surveys). Ginis are ranked left to right, from least to most unequal.
The Gini before taxes and transfers is a measure of the inequality produced by the structures of a country’s economy: the way value chains operate, the markets for products and services, the scarcity of certain skills, rates of unionisation, and so on.
This gives us a measure of structural inequalities in a country. Governments, however, use taxes and transfers to shift income between households. They take taxes from some and boost incomes of the more disadvantaged.
Ginis of incomes after taxes and transfers give us a measure of how well members of a society can support similar standards of living. They are shown in the following graph, again from least to most unequal. These give us a measure of social inequalities.
Focusing just on social inequality, it is no surprise the Scandinavian countries, along with Canada and Ireland, are among the least unequal. Neither is it surprising the UK and US approach the highest levels of social inequality in the OECD.
Inequalities in Australia and New Zealand lie between these, but further from the Scandinavians and closer to the Anglo-Americans.
Social inequality in NZ
When we look at the difference between structural and social inequalities, we can see the extent to which taxes and transfers – government redistribution of income – reduce inequality.
As we can see, New Zealand’s structural inequality, shaped by the economic reforms of the mid-1980s, is middling by comparison to other OECD countries.
But New Zealand’s social inequality lies near the bottom third of OECD measures. A halving of top income tax rates in the mid-1980s and the rollback of the welfare state in the 1990s (after then finance minister Ruth Richardson’s 1991 “mother of all budgets”) significantly contributed to this.
The downward columns in the following graph show the effect of government redistributive measures, ranked from most to least active. The result of these government redistributions in New Zealand is weaker even than in the laissez-faire economies of the United Kingdom and United States.
Where does NZ sit?
How do New Zealand’s inequalities compare with countries we might choose to benchmark against?
Below, the Scandinavian countries famous for their egalitarian social systems are shown in orange. In green are countries that tolerate slightly higher social inequality: Sweden, Canada and Ireland.
And the UK and US – exemplars of free-market capitalism that were the models for New Zealand’s reforms of the mid-1980s – are highlighted in grey.
Reducing inequality
How hard would it be to change? Could New Zealand, for example, reduce its level of social inequality to match Canada? Absolutely, yes.
Other OECD data show Canada significantly cut its inequalities between 2010 and 2019. The country moved from a position identical to Luxembourg (haven for Europe’s wealthy) to be roughly level with Sweden.
To match Canada’s level now, New Zealand would need to reduce structural inequalities further, or redistribute about as much as Norway and Denmark do. It can be done, in other words.
Indeed, Finland shows government redistributions can transform some of the worst levels of structural inequality to produce outcomes comparable to other Scandinavian countries.
New Zealand can aspire to goals for social equality matching those in the upper half of OECD countries. Beyond revisions to taxation and transfers, inequalities in health and education would also need to come down, reducing the social and economic costs of poverty and disadvantage that should shame us all.
The author acknowledges the contribution of data provided by Max Rashbrooke.
Colin Campbell-Hunt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
While many New Zealanders have heard of open banking, few understand its benefits, according to new research from BNZ.*
Open banking gives bank customers the power to control and securely share their financial data with trusted third parties like fintechs.
Access to that data means banks and fintechs can create highly tailored products and services, such as apps that offer insights into spending habits, budget planning and savings goals, or that instantly share financial information with multiple lenders, making it easier and faster to apply for a loan.
“Our survey found that while 60% of respondents have heard of open banking, only a quarter (26%) have some understanding of what it means,” says Karna Luke, BNZ Executive, Customer Products and Services.
“However, after learning more about its capabilities, nearly three-quarters (73%) expressed an interest in using open banking services.
“This shows that New Zealanders are very open to new ways of managing their finances but need the right information to feel confident about using the technology,” says Luke.
The survey also shed light on some risky practices, highlighting a need for greater education. Two-thirds (66%) of respondents reported having used payment services that rely on screen scraping. This practice puts users’ data at risk by requiring them to share their online banking login credentials with third parties to access certain services.
“Open banking provides a safe and secure way to share your financial data with trusted third parties without ever having to disclose your banking login details. It’s much more secure than screen scraping, but our survey shows a big gap between awareness and understanding of open banking’s benefits, particularly around security,” says Luke.
Bridging the knowledge gap
Luke says education is key to building the trust and confidence needed to drive greater adoption of open banking and realise its benefits.
“At BNZ, we’ve been collaborating with fintechs since 2018 to develop innovative products and services that showcase open banking’s potential, and we’ve developed content and resources to inform and engage our customers about the benefits. Already, more than 250,000 BNZ customers are using apps and other services made possible through open banking.”
“While we’ve made good progress, there’s still more work to be done to educate New Zealanders about the benefits of open banking and build trust in its capabilities. This will be crucial to ensure that everyone can take advantage of the huge potential open banking offers.”
Luke highlighted the importance of the Consumer Data Right (CDR), which is currently progressing through Parliament as part of the Customer and Product Data Bill. The CDR sets rules around how customer data is shared and managed and ensures legal safeguards are in place to protect New Zealanders.
“While banks have been working hard to build the technology needed for open banking, the CDR will provide the rules and protections necessary to ensure people feel secure and confident using these new services,” Luke says.
“The Government’s commitment to investigate opportunities for early adoption of open banking by government agencies, in line with recommendations from the Commerce Commission, is also a welcome move which could significantly boost public trust and understanding.
“We’re committed to working alongside regulators and the wider industry to ensure that open banking delivers on its promise of greater financial empowerment and choice for all New Zealanders.”
For more information about open banking and BNZ’s initiatives, visit bnz.co.nz/openbanking.
*Source: BNZ Voice customer panel survey, 18th to 28th July 2024. Total responses: n=355. The profile of participating customers was not controlled for this survey.
There’s a common thread linking our experience of pandemics over the past 700 years. From the Black Death in the 14th century to COVID in the 21st, public health authorities have put emergency measures such as isolation and quarantine in place to stop infectious diseases spreading.
As we know from COVID, these measures upend lives in an effort to save them. In both the recent and distant past they’ve also given rise to collective unrest, confusion and resistance.
So after all this time, what do we know about the role public health communication plays in helping people understand and adhere to protective measures in a crisis? And more importantly, in an age of misinformation and distrust, how can we improve public health messaging for any future pandemics?
Last year, we published a Cochrane review exploring the global evidence on public health communication during COVID and other infectious disease outbreaks including SARS, MERS, influenza and Ebola. Here’s a snapshot of what we found.
A key theme emerging in analysis of the COVID pandemic globally is public trust – or lack thereof – in governments, public institutions and science.
Mounting evidence suggests higher levels of trust in government were associated with fewer COVID infections and higher vaccination rates across the world. Trust was a crucial factor in people’s willingness to follow public health directives, and is now a key focus for future pandemic preparedness.
Here in Australia, public trust in governments and health authorities steadily eroded over time.
Initial information from governments and health authorities about the unfolding COVID crisis, personal risk and mandated protective measures was generally clear and consistent across the country. The establishment of the National Cabinet in 2020 signalled a commitment from state, territory and federal governments to consensus-based policy and public health messaging.
During this early phase of relative unity, Australians reported higher levels of belonging and trust in government.
When state, territory and federal governments have conflicting policies on protective measures, people are easily confused, lose trust and become harder to engage with or persuade. Many tune out from partisan politics. Adherence to mandated public health measures falls.
Our research found clarity and consistency of information were key features of effective public health communication throughout the COVID pandemic.
We also found public health communication is most effective when authorities work in partnership with different target audiences. In Victoria, the case brought against the state government for the snap public housing tower lockdowns is a cautionary tale underscoring how essential considered, tailored and two-way communication is with diverse communities.
The much-touted “miracle” drug ivermectin typifies the extraordinary traction unproven treatments gained locally and globally. Ivermectin is an anti-parasitic drug; there is no good evidence it is effective against viruses such as the one that causes COVID.
Australia’s drug regulator was forced to ban ivermectin prescriptions for anything other than its intended use after a sharp increase in people seeking the drug sparked national shortages. Hospitals also reported patients overdosing on ivermectin and cocktails of COVID “cures” promoted online.
The Lancet Commission on lessons from the COVID pandemic has called for a coordinated international response to countering misinformation.
As part of this, it has called for more accessible, accurate information and investment in scientific literacy to protect against misinformation, including that shared across social media platforms. The World Health Organization is developing resources and recommendations for health authorities to address this “infodemic”.
National efforts to directly tackle misinformation are vital, in combination with concerted efforts to raise health literacy. The Australian Medical Association has called on the federal government to invest in long-term online advertising to counter health misinformation and boost health literacy.
People of all ages need to be equipped to think critically about who and where their health information comes from. With the rise of AI, this is an increasingly urgent priority.
Australian health ministers recently reaffirmed their commitment to the new Australian Centre for Disease Control (CDC).
From a science communications perspective, the Australian CDC could provide an independent voice of evidence and consensus-based information. This is exactly what’s needed during a pandemic. But full details about the CDC’s funding and remit have been the subject of some conjecture.
Many of our key findings on effective public health communication during COVID are not new or surprising. They reinforce what we know works from previous disease outbreaks across different places and points in time: tailored, timely, clear, consistent and accurate information.
The rapid rise, reach and influence of misinformation and distrust in public authorities bring a new level of complexity to this picture. Countering both must become a central focus of all public health crisis communication, now and in the future.
This article is part of a series on the next pandemic.
Rebecca Ryan receives funding from the National Health and Medical Research Council through funding to Australian Cochrane entities, and was previously commissioned by the World Health Organization to undertake a rapid evidence review on communication for COVID-19 prevention and control (2020).
Shauna Hurley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
In recent months, many Canadian employers in both the public and private sectors have implemented return-to-office mandates, requiring workers who transitioned to remote or hybrid work during the COVID-19 pandemic to work in person again.
Employers are justifying these mandates by arguing they improve productivity, build more collaborative teams and improve mentorship for junior employees.
Employers are not the only group ecstatic about these mandates. Municipalities and business owners are also expressing hope that the presence of office workers will spin off into greater consumer spending at restaurants and other businesses near office buildings. The expectation is that office workers will once again start spending money on coffee, lunch or after-work beverages.
Our recent study investigated which operational, demographic and land use factors affected restaurant survival during the first year of the pandemic in London, Ont.
We found no significant differences between restaurants that failed and restaurants that survived based on proximity to office uses. Instead, operational decisions made by restaurants individually were much more predictive of their survival than any geographic factor, including the presence of offices.
Restaurants are seen along Richmond Street in downtown London, Ontario, in June 2021. (Alexander Wray), CC BY-NC-SA
We found that restaurants located in areas receiving more CERB (Canadian Emergency Response Benefit) payments, and with a higher density of entertainment venues around them, were less likely to survive.
Restaurants that adapted by offering pickup and delivery options were more likely to survive, though only for those that did their own delivery in-house rather than relying on platforms like UberEats and SkipTheDishes. Restaurants that had drive-thrus, held liquor licenses, or had been established for more than five years were more likely to survive. These older, more established restaurants were likely more resilient because of financial stability and customer loyalty.
Table-service restaurants fared better than fast food outlets, likely because they could offer large patio dining spaces during the summer. Restaurants with liquor licenses substantially benefited, especially after a regulatory change by the Ontario government that allowed alcohol sales with takeout and delivery — a first for the province.
In short, restaurant survival was driven more by individual business decisions than by location. People working remotely rather than in the office did not significantly affect restaurant survival during the first year of the pandemic.
Downtown struggles
As Canadian downtowns look to recover, many face ongoing challenges. Activity levels are down by about 20 per cent from pre-pandemic levels in many places, lagging behind many similarly sized downtowns in the United States.
While violent incidents are rare, the social incivilities and disorder on display — public urination and defecation, open drug use, visible tents and property crime — contribute to a perception that Canadian downtowns are unsafe. This perception, whether accurate or not, affects people’s willingness to engage with their downtowns.
A way forward
The damage to the reputation of Canada’s downtowns has been done. Downtown London now has the highest office vacancy rate in the country. The Workplace Safety Insurance Board of Ontario, for instance, recently chose to consolidate its offices in the outskirts of London, rather than downtown.
Many people now elect to spend their time and money in areas that have embraced the “experience economy.” These are highly manicured entertainment and shopping destinations, with restaurants the bedrock of the high-quality experiences on offer.
Such places are building attractive economies that give people the safe, fun and exciting experiences they are looking for, locally and internationally. Instead of trying to force unwilling workers back to the office, Canadian cities should focus on developing downtowns that people genuinely want to visit and experience.
One potential way to do this is to provide wrap-around support services and direct pathways to stable housing across the entire community, as the City of London has done. By spreading care and outreach services across the entire city, rather than concentrating them exclusively in downtown areas, the negative effects from Canada’s homelessness crisis can be reduced on urban cores.
This type of strategy will direct those who need help away from downtowns, and may even permanently lift them out of poverty. In turn, Canadian downtowns can return to being places for everyone to shop, eat, relax, and work in comfort.
Alexander Wray is President of the Town and Gown Association of Ontario, and a Board Member of Mainstreet London.
Jamie Seabrook, Jason Gilliland, and Sean Doherty do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
It is very common for children to have a day or two away from school due to illness. But children can also miss much longer periods of schooling if they have a serious illness or injury.
This could be a severe episode of mental illness, a diagnosis of Type 1 diabetes or in my family’s case, our youngest child being hit by a car at a pedestrian crossing, requiring months of rehab.
After the initial shock, treatment and recovery, families then need to navigate a complex return to school – to make things as normal as possible for the student while handling their ongoing medical needs.
How can families support their child?
How many students are missing school?
There are many reasons why children may need to have a significant break from school.
Experts recommend returning to school gradually. Students may just go for half days or for a few hours initially.
To make this as smooth as possible, parents or caregivers should meet with the school before the student returns. This meeting should include the student if possible, relevant teachers (such as class teachers and year-level coordinators) and the school nurse.
Not all schools have a dedicated nurse. But if there is one available, they can play an important liaison role and manage a child’s medications or situation at school. If there is no nurse, make sure you include the school’s administration team.
The meeting with the school should set out a clear plan for what new support the student needs and how they will receive it. They may need changes to their uniform, timetable or where they physically go in the school. Students may also need extra time to do work, extra academic help and extra breaks.
Families may also want to schedule regular catch-ups with the school.
Children can be worried about not fitting in, especially if something significant has happened to them that makes them feel different from their peers. They may not want a huge fuss when they come back.
Arranging time to talk to or see friends before they come back can help ease a student into their new routine.
Depending on the situation, you could enlist a trusted buddy to help with bags or walk a bit more slowly with them between classes.
Or students may get special permission to leave class a bit early to avoid crowds, or to be able to go and see the nurse without asking the teacher each time and drawing attention to themselves.
As your child returns, make sure the focus is not just on catching up academically but catching up with friends as well. If their hours are reduced at school, try and allow for social time (such as including recess or lunch) as well as lessons.
Your child will likely be dealing with a lot, both mentally and physically. So keep talking to them as much as possible about how they are feeling and going as they return.
Things may have changed for them (and for you), but with time and support, school can feel like a normal part of life again.
Sarah Jefferson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Limestone pinnacles of the Nambung National Park karst. Matej Lipar
Almost one-sixth of Earth’s land surface is covered in otherworldly landscapes with a name that may also be unfamiliar: karst. These landscapes are like natural sculpture parks, with dramatic terrain dotted with caves and towers of bedrock slowly sculpted by water over thousands of years.
However, it can be quite challenging to figure out exactly when karst landscapes formed. In our new work published today in Science Advances, we show a new way to find the age of these enigmatic landscapes, which will help us understand our planet’s past in more detail.
Flowstones, stalactites and caverns within Jenolan Caves, NSW, Australia. Matej Lipar
The challenge
Karst is defined by the removal of material. The rock towers and caves we see today are what is left after water dissolved the rest during wet periods of the past.
This is what makes their age hard to determine. How do you date the disappearance of something?
Traditionally, scientists have loosely bracketed the age of a karst surface by dating the material above and beneath. However, this approach blurs our understanding of ancient climate events and how ecosystems responded.
Geological clocks
In our study, we found a way to measure the age of pebble-sized iron nodules that formed at the same time as a karst landscape.
This method has the technical name of (U-Th)/He geochronology. In it, we measure how much helium is produced by the natural radioactive decay of tiny amounts of the elements uranium and thorium in the iron nodules. By comparing the amounts of uranium, thorium and helium in a sample, we can very accurately calculate the age of the nodules.
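The age equation behind (U-Th)/He dating is well established in the geochronology literature: each ²³⁸U, ²³⁵U and ²³²Th decay chain produces a fixed number of helium atoms per parent decay (8, 7 and 6 respectively), so the measured helium pins down how long ingrowth has been running. As an illustrative sketch only (not the authors’ actual analysis pipeline), the calculation can be set up and inverted numerically like this; the decay constants are standard literature values and the sample quantities are made up:

```python
import math

# Decay constants in 1/year (standard literature values)
L238 = 1.55125e-10   # uranium-238
L235 = 9.8485e-10    # uranium-235
L232 = 4.9475e-11    # thorium-232
U238_U235 = 137.818  # natural 238U/235U abundance ratio

def helium_produced(t, u238, th232):
    """Atoms of 4He accumulated after t years, given present-day
    atoms of 238U and 232Th (235U inferred from the natural ratio)."""
    u235 = u238 / U238_U235
    return (8 * u238 * (math.exp(L238 * t) - 1)
            + 7 * u235 * (math.exp(L235 * t) - 1)
            + 6 * th232 * (math.exp(L232 * t) - 1))

def he_age(he, u238, th232, lo=0.0, hi=5e9, tol=1.0):
    """Invert the ingrowth equation for age t (years) by bisection.
    Helium production is monotonically increasing in t, so bisection
    on [lo, hi] converges to within tol years."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if helium_produced(mid, u238, th232) < he:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a nodule roughly 100,000 years old, feeding the measured helium back through `he_age` recovers the age to within the solver tolerance, which is why tiny helium amounts can date such young material so precisely.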
How iron nodules can reveal their age. Milo Barham
The Pinnacles of Nambung National Park in Western Australia are world-famous for an otherworldly karst landscape of limestone pillars towering metres above a sandy desert plain. The Pinnacles form part of the most extensive belt of wind-blown carbonate rock in the world, stretching more than 1,000km along coastal southwestern WA.
The Western Australia ThermoChronology Hub (WATCH) ultra-high vacuum gas extraction line for measurements of radiogenic helium. Martin Danišik
We examined multiple microscopic shards of iron nodules that were removed from the surface of limestone pinnacles. These nodules formed in the soil that lay on top of the limestone during the period of intense weathering that created the karst. As a result, they serve as time capsules of the environmental conditions that shaped the area.
A scanning electron microscope image of iron-rich cement (lighter grey in centre) binding darker grey, rounded quartz sand grains within an analysed nodule. Aleš Šoster
The big wet
We consistently found an age of around 100,000 years for the growth of the iron nodules. This date is supported by known ages from the rocks above and beneath the karst surface, supporting the reliability of our new approach.
At the same time as chemical reactions caused growth of the iron-rich nodules within the ancient soil, limestone bedrock was rapidly and extensively dissolved to leave only remnant limestone pinnacles seen today.
From examining the entire rock sequence in the area, we think this period of intensive weathering was the wettest time in this part of WA over at least the past half a million years.
We don’t know what drove this increased rainfall. It may have been changes to atmospheric circulation patterns, or the greater influence of the ancient Leeuwin Current that runs along the shore.
Iron-rich nodules are not unique to the Nambung Pinnacles. They have recently been used to track dramatic past environmental change elsewhere in Australia.
Dating these iron nodules will help to better document the dramatic fluctuations in Earth’s climate over the past three million years as ice sheets have grown and shrunk.
Understanding the timing and environmental context of karst formation throughout this time offers profound insights into past climate conditions, environments and the landscapes in which ancient creatures lived.
Dark iron-rich nodules attached to the side of the base of a limestone pinnacle in the Nambung National Park. Matej Lipar
Climate changes and resulting environmental shifts have been crucial in shaping ecosystems. In particular, they have had a profound influence on our ancient hominin and human ancestors.
By linking karst formation to specific climatic intervals, we can better understand how these environmental changes may have affected early human populations.
Looking forward
The more we know about the conditions that led to the formation of past landscapes and the flora and fauna that inhabited them, the better we can appreciate the evolutionary pressures that shaped the ecosystems we see today. This in turn offers valuable information for preparing for future changes.
As human-driven climate change accelerates, learning about past climate variability and biosphere responses equips us with knowledge to anticipate and mitigate future impacts.
The ability to date karst features with greater precision may seem like a small thing – but it will help us understand how today’s landscapes and ecosystems might respond to ongoing and future climate changes.
Milo Barham has previously received research funding from the Minerals Research Institute of Western Australia.
Andrej Šmuc, John Allan Webb, Kenneth McNamara, Martin Danisik, and Matej Lipar do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation (Au and NZ) – By Michael Odei Erdiaw-Kwasie, Lecturer in Sustainability| Business and Accounting Discipline, Charles Darwin University
Pollution and waste, climate change and biodiversity loss are creating a triple planetary crisis. In response, UN Environment Programme executive director Inger Andersen has called for waste to be redefined as a valuable resource instead of a problem. That’s what urban mining does.
We commonly think of mining as drilling or digging into the earth to extract precious resources. Urban mining recovers these materials from waste, which can come from buildings, infrastructure and obsolete products.
Urban mining can recover these “hidden” resources in cities around the world. It offers sustainable solutions to the problems of resource scarcity and waste management. And it happens in the very cities that are centres of overconsumption and hotspots for the greenhouse gas emissions driving climate change.
What sort of waste can be mined?
Materials such as concrete, pipes, bricks, roofing materials, reinforcements and e‑waste can be recovered for reuse. Urban waste can be “mined” for metals such as gold, steel, copper, zinc, aluminium, cobalt and lithium, as well as glass and plastic. Mechanical or chemical treatments are used to retrieve these metals and materials.
The extent of this fast-growing waste problem is driving the growth of urban mining around the world. Urban mining salvages materials whose supply is finite, while reducing the impacts of waste disposal.
In Europe, the focus is largely on construction and demolition waste. Europe produces 450 million to 500 million tonnes of this waste each year – more than a third of all the region’s waste. Through its urban mining strategy, the European Commission aims to increase the recovery of non-hazardous construction and demolition waste to at least 70% across member countries by 2030.
In Asia, urban mining has focused on e‑waste. However, the region recovers only about 12% of its e‑waste stock. Rates of e‑waste recycling vary greatly: 20% for East Asia, 1% for South Asia, and virtually zero for South-East Asia. China, Japan and South Korea are leading the way in Asia.
Australia is on the right track. Our recovery rate for construction and demolition materials climbed to 80% by 2022 — the highest among all types of waste streams. However, we recover only about a third of the value of materials in our e-waste.
The OECD forecasts that global materials demand will almost double from 89 billion tonnes in 2019 to 167 billion tonnes in 2060. The United Nations’ Global Waste Management Outlook 2024 shows the amount of waste and costs of managing it are soaring too. It’s estimated the world will have 82 million tonnes of e‑waste to deal with by 2030.
These trends mean urban mining is becoming ever more relevant and important.
Urban mining also helps cut greenhouse gas emissions. Unlocking resources near where they are needed reduces transport costs and emissions. Urban mining also provides resource independence and creates employment.
In addition, increasing recovery and recycling rates reduce the pressure on finite natural resources.
Urban mining underpins circular economy alternatives such as the “deposit and return” schemes that give people financial incentives to return e‑waste and containers for recycling in cities such as Singapore, Sydney, Darwin and San Francisco. By 2030, San Francisco aims to halve disposal to landfill or incineration and cut solid waste generation by 15%.
What more needs to be done?
Governments have a role to play by adopting and enforcing policies, laws and regulations that encourage recycling through urban mining instead of sending waste to landfill. European Union laws, for example, mandate increased recycling targets for municipal waste overall and for packaging waste, including 80% for ferrous metals and 60% for aluminium.
In Australia, legislation introduced in 2019 prohibits landfills from accepting e-waste, designated as anything with a plug, battery or cord.
Product design is an important consideration. Designers must balance a product’s efficiency with making it easy to recycle. Products that are efficient and built from easy-to-recycle parts use less energy and produce less waste, and so require less extraction of natural resources.
Our urban mining research documents a more sustainable approach to product design. Increasing product stewardship initiatives are expected to encourage better product design and standards that promote reuse and recycling, producer responsibility and changes in consumer behaviour.
Good information about the available resources is essential too. The Urban Mine Platform, ProSUM and Waste and Resource Recovery Data Hub collect data on e‑waste, end-of-life vehicles, batteries and building and mining waste. These centralised databases allow easy access to data on the sources, stocks, flows and treatment of waste.
Traditional mining is not the only method for extracting raw materials for the green transition. Waste is set to be increasingly recycled, reducing demand for virgin materials. A truly circular economy can become a reality if governments develop and apply an urban mining agenda.
Michael Odei Erdiaw-Kwasie receives funding from the Foundation for Rural and Regional Renewal (FRRR).
Matthew Abunyewah receives funding from the Foundation for Rural and Regional Renewal (FRRR) and the Northern Western Australia and Northern Territory Drought Resilience Adoption and Innovation Hub (Northern Hub).
Patrick Brandful Cobbinah receives funding from Lincoln Institute of Land Policy. He is a member of Planning Institute of Australia.
Source: The Conversation (Au and NZ) – By Anna-Sophie Jürgens, Senior Lecturer in Science Communication (Pop Culture Studies), Australian National University
Warner
Like two-headed playing cards, Joker stories are about dual identity, doubles and duplicity.
Throughout DC comics and films, the Joker turns others into facsimiles of himself, grinning widely. He shares his state of mind through infectious laughter and mass “clownification”, creating copies as he goes.
Film sequel Joker: Folie à Deux, directed by Todd Phillips and released in cinemas today, participates in this rich tradition. It also challenges it by introducing a Joker haunted by his own lost futures – the glam clown, homicidal entertainer and irresistible lover he could have become.
What can we learn from the Joker character about our cultural fascination with duplication and disintegration?
Madness by imitation
Doubling, split consciousness and double meanings have been ingredients in Joker stories since the character’s creation in the 1940s.
He offers different origin stories himself in the 2008 movie blockbuster The Dark Knight (with Heath Ledger as the Joker). He is presented as many in the recent comic series Three Jokers. The Joker shuffles his own “selves like a croupier deals cards” in the 2007 Batman comic The Clown at Midnight.
Within the DC clowniverse, the Joker turns others into Joker copies and clowns, usually through the use of biological or chemical weapons or poisons, virology, hypnotism or sheer charisma. Joker copies include Joker fans and followers in clown costumes and masks, as in the 2019 film starring Joaquin Phoenix. In comics he is described as having an influence that
[…] affects people, on an almost subconscious, primal level. For most people – regular people – he inspires fear. For the less stable people – he simply inspires.
For more than 80 years, his laughter has spread like a virus and caused mass-clownification countless times.
‘The whole world smiles with you.’ The new Joker sequel plays with dual identity and shadow selves.
Multiplying his potency
Joker stories tend to revolve around three scenarios of imitation, doubling and multiplication: several people acting as one (that is, the Joker), one person acting as many (as in Batman: R.I.P. when Batman tries to understand the Joker by experiencing his state of mind like a second consciousness), and a number of personalities nestled within the Joker wreaking havoc. All of these scenarios are powerful reminders clown laughter and humour need not be funny.
The Joker character was inspired by famous films from the 1920s and ’30s, including Robert Wiene’s The Cabinet of Dr Caligari (1920), F.W. Murnau’s Nosferatu (1922), Fritz Lang’s Metropolis (1926), Roland West’s The Bat (1926) and Paul Leni’s The Man Who Laughs (1928). Many of these works feature hapless or unhappy (comic) performers, who all struggle with identity.
The cultural mould to which the Joker belongs is linked with the more than century-old fascination with doppelgangers, male nervousness, violent and involuntary laughter and the loss of agency and sense of the self.
The Joker has long played with ideas of duality. IMDB/Warner
Haunting through absence
The new sequel, Joker: Folie à Deux, draws on all these very Joker traditions. Arthur Fleck (Phoenix again) struggles with his split identities.
Set two years after the events of the previous film, the sequel finds Fleck a patient at Arkham State Hospital, where he meets the dual character Lee Quinzel/Harley Quinn (played by Lady Gaga). She wants him to lean into his Joker self.
Although she is neither the clown nor the scientist she is portrayed as in other stories, she too wants to become a version of the Joker. Arthur himself wants to be the Joker, but for reasons both external and internal he never quite becomes the Joker we recognise from the first film.
The sequel is ultimately a trick played on the audience. “There is no Joker,” Arthur confirms at the end, just Arthur. Folie à Deux is about a broken dream’s loveliness.
The Joker is a collective dream that fails to come true. He appears in the form of fantasies. He is the past, but at the same time present and absent. This is how the concept of hauntology has been defined – a split between realities. The film glamorises and exploits disillusion as we watch the Joker and his future possibilities disintegrate.
In this way, Joker: Folie à Deux is a clown version of ruin porn, inviting us to enjoy the “decay” of a character. It gives us glimpses of a post-double version of the Joker, a non-Joker, left in pieces.
Joker: Folie à Deux is in cinemas now.
Anna-Sophie Jürgens does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
When we start to go grey depends a lot on genetics.
Your first grey hairs usually appear anywhere between your twenties and fifties. For men, grey hairs normally start at the temples and sideburns. Women tend to start greying on the hairline, especially at the front.
The most rapid greying usually happens between ages 50 and 60. But does anything we do speed up the process? And is there anything we can do to slow it down?
You’ve probably heard that plucking, dyeing and stress can make your hair go grey – and that redheads don’t. Here’s what the science says.
What gives hair its colour?
Each strand of hair is produced by a hair follicle, a tunnel-like opening in your skin. Follicles contain two different kinds of stem cells:
keratinocytes, which produce keratin, the protein that makes and regenerates hair strands
melanocytes, which produce melanin, the pigment that colours your hair and skin.
The amount of the different pigments determines hair colour. Black and brown hair has mostly eumelanin, red hair has the most pheomelanin, and blonde hair has just a small amount of both.
So what makes our hair turn grey?
As we age, it’s normal for cells to become less active. In the hair follicle, this means stem cells produce less melanin – turning our hair grey – and less keratin, causing hair thinning and loss.
As less melanin is produced, there is less pigment to give the hair its colour. Grey hair has very little melanin, while white hair has none left.
Unpigmented hair looks grey, white or silver because light reflects off the keratin, which is pale yellow.
Grey hair is thicker, coarser and stiffer than hair with pigment. This is because the shape of the hair follicle becomes irregular as the stem cells change with age.
Interestingly, grey hair also grows faster than pigmented hair, but it uses more energy in the process.
Greying can also be accelerated by oxidative stress – an imbalance of too many damaging free radical chemicals and not enough protective antioxidant chemicals in the body. It can be caused by psychological or emotional stress as well as autoimmune diseases.
Environmental factors such as exposure to UV, pollution, as well as smoking and some drugs, can also play a role.
Melanocytes are more susceptible to damage than keratinocytes because of the complex steps in melanin production. This explains why ageing and stress usually cause hair greying before hair loss.
Scientists have been able to link less pigmented sections of a hair strand to stressful events in a person’s life. In younger people, whose stem cells still produced melanin, colour returned to the hair after the stressful event passed.
4 popular ideas about grey hair – and what science says
1. Does plucking a grey hair make more grow back in its place?
No. When you pluck a hair, you might notice a small bulb at the end that was attached to your scalp. This is the root. It grows from the hair follicle.
Plucking a hair pulls the root out of the follicle. But the follicle itself is the opening in your skin and can’t be plucked out. Each hair follicle can only grow a single hair.
It’s possible frequent plucking could make your hair grey earlier, if the cells that produce melanin are damaged or exhausted from too much regrowth.
2. Can my hair turn grey overnight?
Legend says Marie Antoinette’s hair went completely white the night before the French queen faced the guillotine – but this is a myth.
Melanin in hair strands is chemically stable, meaning it can’t transform instantly.
Acute psychological stress does rapidly deplete melanocyte stem cells in mice. But the effect doesn’t show up immediately. Instead, grey hair becomes visible as the strand grows – at a rate of about 1 cm per month.
Not all hair is in the growing phase at any one time, meaning it can’t all go grey at the same time.
3. Does dyeing my hair make it go grey?
Temporary and semi-permanent dyes should not cause early greying because they just coat the hair strand without changing its structure. But permanent products cause a chemical reaction with the hair, using an oxidising agent such as hydrogen peroxide.
Accumulation of hydrogen peroxide and other hair dye chemicals in the hair follicle can damage melanocytes and keratinocytes, which can cause greying and hair loss.
4. Is it true redheads don’t go grey?
People with red hair also lose melanin as they age, but differently to those with black or brown hair.
This is because the red-yellow and black-brown pigments are chemically different.
Producing the brown-black pigment eumelanin is more complex and takes more energy, making it more susceptible to damage.
Producing the red-yellow pigment (pheomelanin) is simpler and causes less oxidative stress. This means it is easier for stem cells to continue producing pheomelanin, even as their activity declines with ageing.
With ageing, red hair tends to fade to strawberry blonde and then silvery-white. Grey colour reflects reduced eumelanin production, so it is more common in people with black or brown hair.
Your genetics determine when you’ll start going grey. But you may be able to avoid premature greying by staying healthy, reducing stress and avoiding smoking, too much alcohol and UV exposure.
Theresa Larkin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
“The climate crisis is a health crisis.” So says World Health Organization Director-General Tedros Ghebreyesus.
The World Economic Forum agrees. Its report this year highlighted how climate change is taking a toll on global health due to increasingly frequent extreme weather events.
These issues are on the official agenda here too, especially since severe tropical cyclone Gabrielle caused extensive damage in the South-west Pacific and northern New Zealand in early 2023.
Between February 13 and 14 it slammed into Te Tairāwhiti/East Coast and Te Matau a Māui/Hawkes Bay, with disastrous results for the land and its inhabitants. Communities were displaced, homes destroyed, power and telecommunications cut, water systems compromised, and many roads and bridges badly damaged.
Shortly after Gabrielle hit, Manatū Hauora/Ministry of Health commissioned us to investigate the impacts of adverse weather events on health systems and community health and wellbeing.
Our community research teams interviewed 143 residents in the two affected regions. They included first responders, health workers, council staff and members of the public. Their stories were emotional, powerful and insightful.
Our recently published report amplifies these community voices and local knowledge, and offers recommendations about planning for future, inevitable events. Here we offer five key messages.
1. Prioritise vulnerable people
Many older people and those with disabilities or existing health conditions were deprioritised or simply forgotten during evacuations and in the days and weeks after the cyclone. As one community responder in Tairāwhiti recalled:
Some of them couldn’t move out because they were so old and frail. The water was so powerful, they couldn’t move anywhere. Some just stayed in their room until somebody turned up. For instance, there was a lady [who] was stuck in her wheelchair, and by the time people found her, the water was at her neck.
Our report identified the need for health and social services to work more closely to ensure at-risk, vulnerable older people and those with disabilities or complex needs are prioritised during evacuations, so their medical and physical needs are met during and after an extreme weather event.
2. Invest in mental health support and trauma recovery
Those in the most affected communities had high levels of stress, grief and trauma during and after emergencies and evacuations.
Staff and volunteers in front-line roles during the state of emergency experienced similar mental health effects. Many felt mental health support was not there when they needed it most.
Almost everyone we spoke to had some negative mental health impacts. These included sleep disruption, rain anxiety and stress from road closures, insurance claims and land instability.
Māori participants also told of their grief over environmental damage and destruction, highlighting the links between whenua (land) and hauora (health). They described drawing on cultural practices to support whānau recovery. For example, a leader of local volunteer efforts spoke about the personal impact of the cyclone:
I was not good […] it was seeing the impact on how it was for your own community whānau. I think it hit me quite a bit later on. I fell into depression […] It just built up over time. I’m still in healing therapy for the last probably six to seven months since Gabrielle, just trying to get my wairua [spirit] and my tinana [body] and everything back in place.
Overall, the research shows a need for greater awareness and investment in weather-related trauma recovery and mental health support.
3. Ensure medical supplies can reach remote areas
Rural and isolated communities had heightened health challenges, particularly due to road and communication failures.
Transporting medical staff into these communities often required creative solutions (driving, using helicopters or hiking through bush and across farmland when roads were damaged, for example).
Access to medicines was a major concern. It took co-ordinated effort to get pharmaceuticals to such communities. Helicopters were crucial in getting supplies and patients in and out of remote areas. Not everyone who needed attention received it, however.
The most effective responses involved organisations (such as the NZ Police and Civil Defence) working together with communities. As one police officer told us:
Our whānau up the coast needed medicine, prescriptions. Getting access from the helicopter to the home was a challenge. So, the police leant in and helped out. We used [an all-terrain vehicle] to get to places and spaces to get medicine in.
People need to be prepared for power and telecommunications failures. Getty Images
4. Resource and co-ordinate local support networks
Fiscally challenged health systems were stretched during the emergency and struggled with power and telecommunications outages. But we heard of many health workers going “above and beyond” to care for patients and communities.
Many continued working even when their own families, homes and communities were directly under threat. Anticipating this and supporting these workers will be important as adverse weather becomes more frequent with climate change.
We also found marae, schools, local social services and non-profit organisations played key roles after the cyclone, but were often outside the direct ambit of the health system.
Often the people working in these organisations have strong community relationships and knowledge that is essential to supporting emergency and recovery processes. These connections should be mapped and integrated for future events.
5. Shift resources and build common will
Local communities are full of knowledge. Many have learnt from recent events to better prepare their families, workplaces and organisations.
Whānau told us about the importance of having cash in case of power outages and telecommunications failure. Others identified battery-powered radio as a critical source of information when systems were down. Pharmacists and doctors told of the importance of hard-copy evidence of prescriptions, to be able to dispense when electronic systems are out.
Checking in on neighbours, sharing resources and making time for a cup of tea were all important for people in the recovery and rebuilding phases. A key lesson is to harness the power of community connections, trust and relationships in climate change resilience and recovery.
Although knowledge, experience and wisdom lie in the hands of communities, our research highlights how financial resources mostly sit with central government. The challenge is to shift resources and build common will for climate action, before the inevitable next event.
The report is receiving attention in parliament. We hope local experience can be central to planning around the health impacts of climate change and decision-making at all levels.
We acknowledge the important contributions of our wider research team and community partners, particularly Manu Caddie (Te Weu Charitable Trust), Josie McClutchie (project lead), Dayna Chaffey, Haley Maxwell and Hiria Philip-Barbara (community researchers) in Tairāwhiti, and Emma Horgan and John Bell (Sustainable HB Centre for Climate & Resilience) in Hawkes Bay.
Holly Thorpe received support from the Manatū Hauora/Ministry of Health funding secured to conduct this research.
Fiona Langridge received support from the Ministry of Health funding secured to conduct this research.
George Laking received funding from The Ministry of Health to conduct the research. He is an Executive Board member of OraTaiao, the New Zealand Climate and Health Council.
Judith McCool receives funding from the Ministry of Health (Polynesia Health Corridors) and the Health Research Council.
Source: The Conversation (Au and NZ) – By Shane Clifton, Associate Professor of Practice, School of Health Sciences and the Centre for Disability Research and Policy, University of Sydney
It’s about time city councils did more to make our cities accessible. I recently tried to buy tickets to two Sydney Fringe Festival events, only to be told by the box office that the venues were not wheelchair-accessible.
Sydney remains a place where people with disability feel like they don’t belong. The same is true of other Australian cities. But local councils don’t bear all the blame.
Event organisers are responsible for selecting venues. In the case of the Fringe Festival, they chose locations inaccessible to wheelchair users and others with mobility challenges. It’s a bitter irony that a fringe festival, which ostensibly empowers artists and creatives on the margins, would exclude people with disability.
If event organisers (and every one of us) decided never to hire inaccessible venues, then the market might solve the issue. But those of us with disability are realistic enough to know most people don’t care – or don’t give us a thought. The market hasn’t solved the problem, so it’s up to governments.
The problems go beyond arts venues
Inaccessible venues are only the tip of the iceberg. Countless restaurants, shops and offices are inaccessible, with steps on entry, inaccessible bathrooms and narrow and cluttered aisles.
“Spend the day in my wheelchair” programs are sometimes criticised for trivialising the challenge of disability. However, they do unmask how frustrating and alienating our cities and towns can be.
Google Maps now indicates whether premises are accessible. Those that are bear the universal symbol of disability access – the stylised blue wheelchair. Even then, a person with a disability is just as likely as not to turn up and discover a lift has broken down, a doorway has been blocked off, a bathroom has been used for storage, or a venue is only partially accessible (it’s always the cool spaces that are out of reach).
Landowners and businesses typically complain providing access for the few affected people is too costly. In reality, making our public spaces accessible often requires little more than determined creative design. The costs are a mere fraction of what we spend on other things we judge as more important.
We also underestimate the value added by accessible design.
The Kerb-Cut Effect, for example, describes how designing for people with disability often benefits everyone. The term refers to the impact of activist action in California in the 1970s. Disability advocates in the city of Berkeley poured concrete onto road kerbs to create ramps giving wheelchair users access to footpaths.
These ramps also proved valuable to parents pushing children in strollers, older people and cyclists. Refined into kerb cuts, they spread rapidly around the world.
There are many other examples. Television captioning, developed for people who are deaf and hard of hearing, is now widely used by non-disabled people. Audiobooks, developed for people who are blind, are now a common way that many other people enjoy books.
Accessible venues will not just benefit wheelchair users. Older people, those with impaired mobility and people who push prams and tow suitcases all benefit. Indeed, if we make venues accessible to those on the margins, no one is excluded.
This is the principle of universal design: the design of products, environments, programs and services to be usable by all people, to the greatest extent possible, without the need for adaptation or specialised design.
Why use steps that exclude some people when everyone can use a ramp or a lift?
Accessibility in cities is about more than just wheelchairs; it requires a comprehensive approach to urban planning to meet the varied needs of all citizens. This includes providing sensory aids like audio signals, braille signage and visual measures for people who are blind, deaf or hard of hearing. It’s also crucial that information on public services and events is easily available to everyone in formats they can access and understand.
My focus has been on access to public spaces, but we also need to turn our attention to private homes. Wheelchair users and people with other mobility impairments can’t access most private houses in Australia. There is a drastic lack of accessible housing for people with disability and the cost of retrofitting access is exorbitant.
New South Wales is yet to follow the lead of other states and territories by signing up to the Silver Liveable Housing Design Standards. These standards are part of the revised National Construction Code. They require new housing developments to offer basic accessibility for all people.
We can and must do better. Every level of government can contribute to change.
However, new builds and renovations are often decided upon at the regional level. This means local councils should bear much of the responsibility.
A determined effort by our mayors and councillors to insist premises are accessible will be better for everyone. From a selfish perspective, it might mean I could go out to dinner or a festival without worrying if I can get in the door.
Shane Clifton is affiliated with the Centre for Disability Research and Policy at the University of Sydney.
The budget surplus for last financial year has come in at $15.8 billion, well exceeding the $9.3 billion that was forecast in the May budget.
Treasurer Jim Chalmers, just back from talks in Beijing on China’s economic outlook, will announce the result on Monday.
The government says the better-than-forecast outcome has been driven entirely by lower spending. Revenue was also lower than the budget anticipated. Areas of savings included the National Disability Insurance Scheme, payments to the states, and various grant programs that don’t exist anymore.
This is the government’s second consecutive surplus. The May budget predicted deficits for the coming years.
Across 2022-23 and 2023-24 the budget position has improved by a cumulative $172.3 billion, compared with what was forecast in the official Pre-election Economic and Fiscal Outlook, released immediately before the 2022 election.
The government says it has made $77.4 billion in savings, including $12.2 billion in 2023-24.
Payments were 25.2% of GDP in 2023-24. This compared to the PEFO forecast of 27.1%.
Chalmers said this was the “first government to post back-to-back surpluses in nearly two decades”. The surpluses hadn’t come at the expense of cost-of-living relief, he said in a statement.
Speaking in Beijing on Friday Chalmers said it remained to be seen whether China’s just-announced stimulus measures would work.
“But we’ve seen on earlier occasions when the authorities here, the administration here, steps in to support activity in the economy that is typically a good thing for Australia – good for our businesses and workers, our industries, our investors, and good for the global economy as well.
“Like a lot of people around the world, we have been concerned about the softer conditions here in the Chinese economy. Subject to the details [of measures] that will be made public in good time, any efforts to boost growth and support activity here is a welcome one around the world and especially at home in Australia.”
Chalmers on Monday is likely to face further questions on the Treasury’s work on negative gearing, news of which leaked out last week.
Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Today we removed an article titled “Should we ditch big exam halls? Our research shows how high ceilings are associated with a lower score”, because the original research has been found to contain errors and has been retracted by the academic journal that published it.
The Conversation’s article, published on July 3, 2024, was based on a study published online by The Journal of Environmental Psychology on June 26, 2024. It looked at the impact of ceiling heights on the exam performance of Australian students, and found that even after accounting for other factors such as age or past exam experience, higher ceiling heights were statistically correlated with poorer exam results.
After the study was published, a query from a reader of the journal article led the authors to review their calculations.
The authors discovered some honest errors in their work, leading them to conclude that the relationship between ceiling heights and exam score was “more nuanced” than presented in the paper.
The revised research manuscript was reviewed by the same anonymous peer-reviewers who looked at the original research. One reviewer did not feel comfortable assessing the statistical corrections, one advised against publishing the corrected manuscript, and a third recommended revisions.
On this basis, the Journal of Environmental Psychology rejected the amended version. The journal’s response can be found here.
The authors, led by Isabella Bower, apologise for the error, and are working to resubmit their updated research to another journal.
The Conversation has decided that, in light of the current status of the research, the most appropriate option is to retract our coverage of the study. We are committed to providing accurate and reliable information, and to acknowledging errors in an open and transparent way when they occur.
Source: The Conversation (Au and NZ) – By Martie-Louise Verreynne, Professor in Innovation and Associate Dean (Research), The University of Queensland
Humans are increasingly engaging with wearable technology as it becomes more adaptable and interactive. One of the most intimate ways gaining acceptance is through augmented reality (AR) glasses.
Last week, Meta debuted a prototype of the most recent version of its AR glasses – Orion. They look like reading glasses and use holographic projection to allow users to see graphics projected through transparent lenses into their field of view.
Meta chief Mark Zuckerberg called Orion “the most advanced glasses the world has ever seen”. He said they offer a “glimpse of the future” in which smart glasses will replace smartphones as the main mode of communication.
But is this true or just corporate hype? And will AR glasses actually benefit us in new ways?
Old technology, made new
The technology used to develop Orion glasses is not new.
In the 1960s, computer scientist Ivan Sutherland introduced the first augmented reality head-mounted display. Two decades later, Canadian engineer and inventor Steve Mann developed the first glasses-like prototype.
Throughout the 1990s, researchers and technology companies developed the capability of this technology through head-worn displays and wearable computing devices. Like many technological developments, these were often initially focused on military and industry applications.
In 2013, after smartphone technology emerged, Google entered the AR glasses market. But consumers were uninterested, citing concerns about privacy, high cost, limited functionality and a lack of a clear purpose.
This did not discourage other companies – such as Microsoft, Apple and Meta – from developing similar technologies.
Looking inside
Meta cites a range of reasons why Orion is the world’s most advanced pair of glasses, such as its miniaturised technology with large fields of view and holographic displays. It said these displays provide:
compelling AR experiences, creating new human-computer interaction paradigms […] one of the most difficult challenges our industry has ever faced.
Orion also has an inbuilt smart assistant (Meta AI) to help with tasks through voice commands, eye and hand tracking, and a wristband for swiping, clicking and scrolling.
With these features, it is not difficult to agree that AR glasses are becoming more user-friendly for mass consumption. But gaining widespread consumer acceptance will be challenging.
A set of challenges
Meta will have to address several types of challenges, including:
ease of wearing, using and integrating AR glasses with other glasses
psychological factors such as social acceptance, trust in privacy and accessibility.
These factors are not unlike what we saw in the 2000s when smartphones gained acceptance. Just like then, there are early adopters who will see more benefits than risks in adopting AR glasses, creating a niche market that will gradually expand.
This will allow for broader applications in education (for example, virtual classrooms), remote work and enhanced collaboration tools. Already, Orion’s holographic display allows users to overlay digital content on the real world, and because it is hands-free, communication will be more natural.
Creative destruction
Smart glasses are already being used in many industrial settings, such as logistics and healthcare. Meta plans to launch Orion for the general public in 2027.
By that time, AI will have likely advanced to the point where virtual assistants will be able to see what we see and the physical, virtual and artificial will co-exist. At this point, it is easy to see that the need for bulky smartphones may diminish and that through creative destruction, one industry may replace another.
This is supported by research indicating the virtual and augmented reality headset industry will be worth US$370 billion by 2034.
The remaining question is whether this will actually benefit us.
There is already much debate about the effect of smartphone technology on productivity and wellbeing. Some argue that it has benefited us, mainly through increased connectivity, access to information, and productivity applications.
But others say it has just created more work, distractions and mental fatigue.
If Meta has its way, AR glasses will solve this by enhancing productivity. Consulting firm Deloitte agrees, saying the technology will provide hands-free access to data, faster communication and collaboration through data-sharing.
It also claims smart glasses will reduce human errors, enable data visualisation, and monitor the wearer’s health and wellbeing. This will ensure a quality experience, social acceptance, and seamless integration with physical processes.
But whether or not that all comes true will depend on how well companies such as Meta address the many challenges associated with AR glasses.
Martie-Louise Verreynne receives funding from the ARC and NHMRC.
The development of “superhuman” strength and power has long been admired in many cultures across the world.
This may reflect the importance of these physical fitness characteristics in many facets of our lives from pre-history to today: hunting and gathering, the construction of large buildings and monuments, war, and more recently, sport.
Potentially, the current peak of human strength and power is demonstrated in the sport of strongman.
What is strongman?
Strongman is becoming more common, with competitions now available at regional, national and international levels for men and women of different ages and sizes.
Strongman training and competitions typically involve a host of traditional barbell-based exercises including squats, deadlifts and presses but also specific strongman events.
The specific strongman events – such as the vehicle pull, farmer’s walk, sandbag/keg toss or stones lift – often require competitors to move a range of awkward, heavy implements either higher, faster or with more repetitions in a given time period than their competitors.
Researching one of the greats
Strongman has enjoyed substantial growth and development since the introduction of the World’s Strongest Man competition in the late 1970s.
However, from a scientific perspective, there are few published studies focusing on athletes at the elite level.
In particular, very little is currently known about the overall amount of muscle mass these athletes possess, how their mass is distributed across individual muscles and to what extent their tendon characteristics differ to people who are not training.
However a recent study sought to shed some light on these extreme athletes. It examined the muscle and tendon morphology (structure) of one of the world’s strongest ever men – England’s Eddie Hall.
Measuring an exceptionally strong person such as Hall – who produced a 500kg world record deadlift and won the “World’s Strongest Man” competition in 2017 – provided the opportunity to understand what specific muscle and tendon characteristics may have contributed to his incredible strength.
Eddie Hall is one of world strongman’s finest competitors.
What can we learn from a single case study?
A limited number of athletes reach the truly elite level of strongman and even fewer set world records or win premier events.
Because it’s so difficult to recruit even a small group of such rare athletes, conducting a case study with one elite strongman provided a unique opportunity to understand more about his muscle and tendon characteristics.
Case studies have many limitations, including an inability to determine cause and effect or generalise findings to other individuals from the same group.
However, the study of Hall was insightful, as his muscle and tendon results could be compared directly with various groups from the authors’ earlier published research.
These groups included untrained people, people who have regularly resistance trained for several years, and competitive track sprinters.
The inclusion of these comparative populations allowed meaningful interpretation of what makes Hall’s muscle and tendon characteristics so special.
What they found
Hall’s lower body muscle size was almost twice that of an untrained group of healthy active young men.
And the manner in which his muscle mass was distributed across his lower body exhibited a very specific pattern.
Three long thin muscles, referred to as “guy ropes”, were particularly large (some 2.5 to three times bigger) compared to untrained people.
The guy rope muscles connect to the shin bone via a shared tendon and provide stability to the thigh and hips by fanning out and attaching to the pelvis at diverse locations.
Highly developed guy rope muscles would be expected to offer enhanced stability with heavy lifting, carrying and pulling.
Hall’s thigh (quadriceps) muscle size was more than twice that of untrained people, yet the tendon at the knee that is connected to this muscle group was only 30% larger than in an untrained population.
This finding indicates muscle and tendon growth, within this case of extreme quadriceps muscle development, do not occur to the same extent.
What do the results mean?
The obvious implication is, the larger the relevant muscles, the greater the potential for strength and power.
However, sports like strongman and even everyday activities like climbing stairs, carrying groceries and lifting objects off the ground require the coordinated activity of many stabilising muscles as well as major propulsive muscles such as the quadriceps.
While Hall’s quadriceps were substantially bigger than untrained people, the largest relative differences occurred in the calves and the long thin “guy rope” muscles that help stabilise the hip and knee.
These results pose a question about whether additional or more specific training for these smaller muscles may further enhance strength and power.
This could benefit strongman athletes as well as everyday people.
Also, the relatively small differences in tendon size between Hall and untrained populations suggest tendons do not grow to the same extent as muscles do.
As muscular forces are transmitted through tendons to the bones, the substantially greater growth of muscle than tendon may mean athletes such as Hall have a greater relative risk of tendon than muscle injury.
Justin Keogh is the Associate Dean of Research, Faculty of Health Sciences and Medicine, Bond University, an exercise scientist and a former strongman competitor.
Tom Balshaw is a Lecturer in Kinesiology, Strength and Conditioning employed by Loughborough University.
Source: The Conversation (Au and NZ) – By Alister McKeich, Lecturer and Researcher in Law, Criminology and Indigenous Studies, Victoria University
The onslaught in the Middle East has brought to the world’s attention once again the “crime of crimes”, genocide.
Both the International Court of Justice and the International Criminal Court (ICC) have brought allegations of genocide against Israel as a state and Israeli and Hamas leaders as individuals.
The Australian government’s response to the Gaza crisis has included temporarily freezing A$6 million of funding to the United Nations Relief and Works Agency for Palestine. Though funding has been flowing again since March, Prime Minister Anthony Albanese has been referred to the ICC by a law firm for being “an accessory to genocide”.
Against this backdrop, Australia’s own genocide legislation is under parliamentary scrutiny. A bill tabled by independent Senator Lidia Thorpe (for whom I work as a casual legal researcher) seeks to change the way Australia deals with genocide.
So what do our current laws say and what’s the case for changing them?
Australia ratified the UN Genocide Convention in 1949. Yet it was not until 2002, once the ICC was established, that the Commonwealth Criminal Code was amended to create a new division of atrocity crimes.
Through this legislation, Australia may prosecute any person accused of a Rome Statute crime (such as genocide) under Australian law.
At the moment, written consent from the attorney-general is required before legal proceedings about genocide and other atrocity crimes can commence. This is called the “attorney-general’s fiat”.
Further, the attorney-general’s decision is final. It “must not be challenged, appealed against, reviewed, quashed or called into question”.
Thorpe’s bill seeks to overturn these two measures.
The explanatory memorandum in the 2002 amendment did not say why the attorney-general’s consent was necessary.
Consent from an attorney-general (or similar position) is not an international requirement.
Australia is one of only a handful of countries (including the United Kingdom, New Zealand and Canada) where the fiat exists.
Why is it a problem?
The Australian government has justified the rule on the basis that prosecutions for atrocity crimes against individuals could affect Australia’s international relations and national security.
However, submissions from legal experts and community groups to a senate inquiry looking at the issue point out flaws.
They say this rule prevents access to justice for victims and survivors of atrocity crimes. It can also create the potential for government bias.
Submissions also say the lack of explanation or appeal process ignores fundamental principles of jurisprudence.
Has the rule been used?
The attorney-general’s fiat has been used in a limited number of cases.
In 2009, Palestinian rights groups Australians for Palestine issued a request for consent for the prosecution of former Israeli prime minister Ehud Olmert, who was visiting at the time.
The Australian Centre for International Justice states in its submission how then-attorney-general Robert McClelland denied the request. He cited matters of international state sovereignty and the difficulties of pursuing such a case in an overseas jurisdiction.
Then, in 2011, Arunchalam Jegastheeswaran, an Australian citizen of Tamil background, sought the attorney-general’s consent for the prosecution of then Sri Lankan President, Mahinda Rajapaksa, who was due to visit Australia.
McClelland again denied the request, saying Rajapaksa was protected under “head of state immunity”. This concept is controversial in international law, given it’s often heads of state who commit atrocity crimes.
Head of state protection was also offered to former Myanmar (Burma) leader Aung San Suu Kyi, who was in government when the 2017 genocide against the Rohingya was committed.
With Suu Kyi due to be in Australia for an ASEAN conference in 2018, the Australian Rohingya community sought a prosecution. It was denied by then attorney-general Christian Porter.
And in 2019, retired Sri Lankan General Jagath Jayasuriya visited Australia. Despite concerted efforts to raise evidence to prosecute Jayasuriya for war crimes, delays with the Australian Federal Police meant the case never reached the point of attorney-general consent.
First Nations plaintiffs such as Paul Coe and Robert Thorpe have also sought to bring cases of genocide before the domestic courts, with no success.
What would changing the laws mean?
As it’s unlikely an attorney-general would consent to prosecutions against their own government, submissions to the inquiry argue the rule creates a direct conflict of interest.
For First Nations people seeking justice for crimes of “ongoing genocide” perpetrated by the Commonwealth, any government is hardly likely to rule in their favour.
Some Indigenous community groups argue the high rates of First Nations children in protection, deaths in custody, hyper-incarceration and cultural, land and environmental damage amount to genocide crimes.
Submissions to the inquiry recommend instead of requiring the consent of the attorney-general, claims of genocide should be directed to the Commonwealth Director of Public Prosecutions. This would ensure greater independence from government.
The director has a mandate for this sort of work. The office already prosecutes similar crimes such as people smuggling, human trafficking, slavery and child exploitation.
Internationally, the implications of this bill, if passed, could be far-reaching. The Australian Centre for International Justice estimates up to 1,000 Australian citizens have returned to Israel to fight as part of the Israel Defense Forces. Israel has been accused of serious atrocity crimes in Gaza.
Should any of those citizens return, there could be attempts to mount a case. The government would then have to consider Australia’s political and economic ties with Israel.
Whether the bill is passed will depend on parliament. But the situation highlights a paradox: the state itself will be deciding whether to remove its own inbuilt protections against charges of genocide.
Alister McKeich is a casual legal researcher with the office of Senator Lidia Thorpe.
Imagine scrolling through social media or playing an online game, only to be interrupted by insulting and harassing comments. What if an artificial intelligence (AI) tool stepped in to remove the abuse before you even saw it?
This isn’t science fiction. Commercial AI tools like ToxMod and Bodyguard.ai are already used to monitor interactions in real time across social media and gaming platforms. They can detect and respond to toxic behaviour.
The idea of an all-seeing AI monitoring our every move might sound Orwellian, but these tools could be key to making the internet a safer place.
However, for AI moderation to succeed, it needs to prioritise values like privacy, transparency, explainability and fairness. So can we ensure AI can be trusted to make our online spaces better? Our two recent research projects into AI-driven moderation show this can be done – with more work ahead of us.
Whether it’s a single offensive comment or a sustained slew of harassment, such harmful interactions are part of daily life for many internet users.
The severity of online toxicity is one reason the Australian government has proposed banning social media for children under 14.
But this approach fails to fully address a core underlying problem: the design of online platforms and moderation tools. We need to rethink how online platforms are designed to minimise harmful interactions for all users, not just children.
This is where proactive AI moderation offers the chance to create safer, more respectful online spaces. But can AI truly deliver on this promise? Here’s what we found.
‘Havoc’ in online multiplayer games
In our Games and Artificial Intelligence Moderation (GAIM) Project, we set out to understand the ethical opportunities and pitfalls of AI-driven moderation in online multiplayer games. We conducted 26 in-depth interviews with players and industry professionals to find out how they use and think about AI in these spaces.
Interviewees saw AI as a necessary tool to make games safer and combat the “havoc” caused by toxicity. With millions of players, human moderators can’t catch everything. But an untiring and proactive AI can pick up what humans miss, helping reduce the stress and burnout associated with moderating toxic messages.
But many players also expressed confusion about the use of AI moderation. They didn’t understand why they received account suspensions, bans and other punishments, and were often left frustrated that their own reports of toxic behaviour seemed to be lost to the void, unanswered.
Participants were especially worried about privacy in situations where AI is used to moderate voice chat in games. One player exclaimed: “my god, is that even legal?” It is – and it’s already happening in popular online games such as Call of Duty.
Our study revealed there’s tremendous positive potential for AI moderation. However, games and social media companies will need to do a lot more work to make these systems transparent, empowering and trustworthy.
Right now, AI moderation is seen to operate much like a police officer in an opaque justice system. What if AI instead took the form of a teacher, guardian, or upstander – educating, empowering or supporting users?
Enter AI Ally
This is where our second project, AI Ally, comes in: an initiative funded by the eSafety Commissioner. In response to high rates of tech-based gendered violence in Australia, we are co-designing an AI tool to support girls, women and gender-diverse individuals in navigating safer online spaces.
We surveyed 230 people from these groups, and found that 44% of our respondents “often” or “always” experienced gendered harassment on at least one social media platform. It happened most frequently in response to everyday online activities like posting photos of themselves, particularly in the form of sexist comments.
Interestingly, our respondents reported that documenting instances of online abuse was especially useful when they wanted to support other targets of harassment, such as by gathering screenshots of abusive comments. But only a few of those surveyed did this in practice. Understandably, many also feared for their own safety should they intervene by defending someone or even speaking up in a public comment thread.
These are worrying findings. In response, we are designing our AI tool as an optional dashboard that detects and documents toxic comments. To help guide us in the design process, we have created a set of “personas” that capture some of our target users, inspired by our survey respondents.
We allow users to make their own decisions about whether to filter, flag, block or report harassment in efficient ways that align with their own preferences and personal safety.
In this way, we hope to use AI to offer young people easy-to-access support in managing online safety while offering autonomy and a sense of empowerment.
We can all play a role
AI Ally shows we can use AI to help make online spaces safer without having to sacrifice values like transparency and user control. But there is much more to be done.
Other, similar initiatives include Harassment Manager, which was designed to identify and document abuse on Twitter (now X), and HeartMob, a community where targets of online harassment can seek support.
Until ethical AI practices are more widely adopted, users must stay informed. Before joining a platform, check whether it is transparent about its policies and offers user control over moderation settings.
The internet connects us to resources, work, play and community. Everyone has the right to access these benefits without harassment and abuse. It’s up to all of us to be proactive and advocate for smarter, more ethical technology that protects our values and our digital spaces.
The AI Ally team consists of Dr Mahli-Ann Butt, Dr Lucy Sparrow, Dr Eduardo Oliveira, Ren Galwey, Dahlia Jovic, Sable Wang-Wills, Yige Song and Maddy Weeks.
Dr Lucy Sparrow receives funding from the eSafety Commissioner’s Preventing Tech-Based Abuse Against Women grant program for the “AI Ally” project.
Dr Eduardo Oliveira receives funding from the eSafety Commissioner’s Preventing Tech-Based Abuse Against Women grant program for the “AI Ally” project.
Dr Mahli-Ann Butt receives funding from the eSafety Commissioner’s Preventing Tech-Based Abuse Against Women grant program for the “AI Ally” project.
Source: The Conversation (Au and NZ) – By Adrian Beaumont, Election Analyst (Psephologist) at The Conversation; and Honorary Associate, School of Mathematics and Statistics, The University of Melbourne
The US presidential election will be held on November 5. In analyst Nate Silver’s aggregate of national polls, Democrat Kamala Harris leads Republican Donald Trump by 49.3–46.0 – a slight widening of her lead since last Monday, when she led Trump by 49.2–46.2.
President Joe Biden’s final position before his withdrawal as Democratic candidate on July 21 was a national poll deficit against Trump of 45.2–41.2.
There will be a debate on Tuesday evening US time between the vice-presidential candidates, Democrat Tim Walz and Republican JD Vance. Vice-presidential debates in previous elections have not had a significant influence on the contest.
The US president isn’t elected by the national popular vote, but by the Electoral College, in which each state receives electoral votes equal to its federal House seats (population based) and senators (always two). Almost all states award their electoral votes as winner-takes-all, and it takes 270 electoral votes to win (out of 538 total).
The Electoral College is biased to Trump relative to the national popular vote, with Harris needing at least a two-point popular vote win in Silver’s model to be the Electoral College favourite.
In Silver’s polling averages, Harris leads Trump by one to two points in Pennsylvania (19 electoral votes), Michigan (15), Wisconsin (10) and Nevada (6). If Harris wins all these states, she is likely to win the Electoral College by at least a 276–262 margin. Trump is ahead by less than a point in North Carolina (16 electoral votes) and Georgia (16), and if Harris wins both, she wins by 308–230.
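The winner-takes-all arithmetic behind these scenarios can be sketched in a few lines. Note the 226-vote baseline of safe Democratic states is an assumption inferred from the 276–262 scenario above, not a figure taken from Silver’s model:

```python
# Winner-takes-all Electoral College arithmetic for the scenarios above.
# SAFE_HARRIS is an assumed baseline of solidly Democratic states,
# back-calculated from the 276-262 scenario; swing-state electoral vote
# counts are those given in the polling averages.
SAFE_HARRIS = 226
TOTAL = 538
NEEDED = 270

swing = {"Pennsylvania": 19, "Michigan": 15, "Wisconsin": 10,
         "Nevada": 6, "North Carolina": 16, "Georgia": 16}

def tally(harris_wins):
    """Return (Harris, Trump) electoral votes given the swing states Harris carries."""
    harris = SAFE_HARRIS + sum(swing[s] for s in harris_wins)
    return harris, TOTAL - harris

# Harris carries the four states where she leads
print(tally(["Pennsylvania", "Michigan", "Wisconsin", "Nevada"]))  # (276, 262)
# Adding North Carolina and Georgia
print(tally(list(swing)))  # (308, 230)
```

Because every state here is treated as winner-takes-all, small polling shifts in a single large state (Pennsylvania’s 19 votes, say) can swing the tally well past the 270 threshold.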
In Silver’s model, Harris has a 56% chance to win the Electoral College, up from 54% last Monday but down from her peak of 58% two days ago. Earlier this month, there were large differences in win probability between Silver’s model and the FiveThirtyEight model, which was more favourable to Harris. But these models have nearly converged, with FiveThirtyEight now giving Harris a 59% win probability.
There are still more than five weeks until election day, so polls could change in either Trump’s or Harris’ favour by then. Harris’ one to two point leads in the key states are tenuous, and this explains why Trump is still rated a good chance to win.
Silver wrote on September 1 that polls in 2020 and 2016 were biased against Trump, but polls in 2012 were biased against Barack Obama. In the last two midterm elections (2022 and 2018), polls were accurate. It’s plausible there will be a polling error this year, but which candidate such an error would favour can’t be predicted.
On Sunday, Silver said if there was a systematic error of three or four points in the polls in either Trump’s or Harris’ favour, that candidate would sweep all the swing states and easily win the Electoral College. There are other scenarios in which one candidate underperforms the polls with some demographics but overperforms with other demographics.
I wrote about the US election for The Poll Bludger last Thursday, and also covered bleak polls and byelection results in Canada for the governing centre-left Liberals ahead of an election due by October 2025, a dreadful poll for UK Labour Prime Minister Keir Starmer, the new French prime minister, a German state election and a socialist win in Sri Lanka’s presidential election.
Upwardly revised economic data
Last Thursday, a revised estimate of June quarter US GDP was released. There was a large upward revision in real disposable personal income compared to the previously reported figures. This has resulted in the personal savings rate being revised up to 4.9% in July from the previously reported 2.9%, and it was 4.8% in August.
With these upward revisions, Silver’s economic index, which averages six indicators, is now at +0.25, up from +0.09. A better economy than previously believed should help Harris as the incumbent party’s candidate.
Coalition gains narrow lead in Essential
In Australia, a national Essential poll, conducted on September 18–22 from a sample of 1,117 people, gave the Coalition a 48–47 lead (including undecided voters) after a 48–48 tie in early September. It’s the Coalition’s first lead in the Essential poll since mid-July.
Primary votes were 35% Coalition (steady), 29% Labor (down one), 12% Greens (down one), 8% One Nation (steady), 2% UAP (up one), 9% for all Others (up one) and 5% undecided (steady).
Anthony Albanese’s net approval was up five points since August to –5, with 47% disapproving and 42% approving. Peter Dutton’s net approval was down one to net zero.
On social media regulations, 48% thought them too weak, 43% about right and 8% too tough. By 67–17, voters supported imposing an age limit for children to access social media (68–15 in July). By 71–12, voters supported making doxing (the public release of personally identifiable data) a criminal offence (62–19 in February).
By 49–18, voters supported Labor’s Help to Buy scheme, and by 57–13 they supported the build-to-rent scheme. The questions give detail that few voters would know.
Voters were told the Liberals and Greens had combined to delay Labor’s housing policies in the senate. By 48–22, voters thought the Liberals and Greens should pass the policies and argue for their own policies at the next election, rather than block Labor’s policies. Greens voters supported passing by 55–21.
Labor keeps narrow lead in Morgan
A national Morgan poll, conducted September 16–22 from a sample of 1,662 people, gave Labor a 50.5–49.5 lead, unchanged from the September 9–15 Morgan poll.
Primary votes were 37.5% Coalition (steady), 32% Labor (up 1.5), 12.5% Greens (steady), 5% One Nation (down 0.5), 9.5% independents (down 0.5) and 3.5% others (down 0.5).
The headline figure is based on respondent preferences. By 2022 election preference flows, Labor led by an unchanged 52–48.
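To illustrate how a two-party-preferred (2PP) figure is derived from primary votes, here is a minimal sketch. The primary votes are those reported above, but the preference-flow percentages are illustrative assumptions only, not Morgan’s actual 2022 election flow data:

```python
# Two-party preferred (2PP) from primary votes using preference flows.
# Primary votes are from the Morgan poll above; the flow-to-Labor shares
# below are illustrative assumptions, NOT the actual 2022 election flows.
primaries = {"Coalition": 37.5, "Labor": 32.0, "Greens": 12.5,
             "One Nation": 5.0, "Independents": 9.5, "Others": 3.5}

# Assumed share of each group's vote flowing to Labor after preferences
flow_to_labor = {"Coalition": 0.0, "Labor": 1.0, "Greens": 0.85,
                 "One Nation": 0.35, "Independents": 0.60, "Others": 0.50}

labor_2pp = sum(primaries[p] * flow_to_labor[p] for p in primaries)
coalition_2pp = 100 - labor_2pp
print(f"Labor {labor_2pp:.1f} - Coalition {coalition_2pp:.1f}")
```

With these assumed flows the sketch lands near the reported 52–48; the real calculation substitutes the flow rates actually observed for each minor party at the 2022 election.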
Adrian Beaumont does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A 1935 school photograph taken in Kandos, NSW. Author provided, courtesy of the Kandos Museum.
In the town of Kandos, New South Wales, there’s the local Kandos Museum run by volunteers. The museum holds relics from the cement works that once defined the town, but there are other treasures, too.
As part of the Cementa24 festival, I became fixated on the museum’s collection of school photos. Neatly organised into ring-bound folders by the volunteers, the group portraits span decades of students from Kandos Public School and Kandos High School, from 1924 through to the 1990s.
A photo album made by volunteers at the Kandos Museum. Author provided
I enlarged and cropped some of these photos to turn them into street posters to scatter around town. I asked permission before sticking a few outside the local pub, the radio station, the post office and the op shop. I spot the locals smiling as they pass them, stopping to look for someone they know. I watch them point at the pictures and hear them naming names.
Working on this project, I can’t stop thinking about the weight of these photographic rituals. School photos aren’t just memories; they hold social histories. Through them, you can trace changes in hairstyles, fashion, attitudes and even migration – yet there’s something homogeneous and unchangeable about how they’re made.
School photo rules
There’s always a physical hierarchy in these photos. The photographer organises the group to ensure compositional acuity. The students are lined up in rows, with tall people in the back and shorter people in front – evenly spaced, arranged by height and symmetry.
When was the rule made that says this is how a group should look? Balanced, orderly and with everyone fitting neatly into place, whether they socially do or not. Somehow I always ended up on the edge of the middle row. The social dynamics of the playground found their way into the organisation of our bodies, forever captured in a split second.
A photo of Kandos’ 5th Form, 1967. Author provided, courtesy of the Kandos Museum
Looking at the Kandos photos from the 1940s through to the 1970s, then at my children’s photos from 2013 to 2024, and my own school photos from the 1980s and ’90s, I can see the difference in public, private and Catholic school uniforms. I can see the difference in racial diversity (or lack thereof) between a small regional town, inner-city Sydney and suburban southwest Sydney. I can also see how much photographic technology has changed.
Despite this, the organisational structure of the school photo remains the same. The kids still stand stiffly in their rows, with identical tunics and ties. Standing too close, someone’s elbow digs into someone else’s side.
As a photographer now, I often think about these school photos and the rituals that have remained largely unchanged in Australia. Every year, kids are shuffled onto tiered steps. Those in the front put their knees together, hands in laps, while the girls must “try to look like ladies”. Then there are the “nobodies” in the middle row (or is that just me reading into it?).
The perils of posing
Posing for school photos can be complicated. One year my daughter came home from school and declared the photographer was sexist because he made all the girls sit in the front row while the boys got to stand. I asked her why sitting was sexist. She couldn’t explain – she was eight years old – but she certainly felt the power difference between sitting with your knees pressed together and standing tall.
And what about the solo portrait? I still think about my kindergarten class from 1979. The group photo was fine. I was happy, standing next to my new best friend. But my solo portrait was a disaster. I looked possessed, my eyes half-closed, lashes blurred, caught mid-blink.
My mother didn’t buy the solo photo, but she kept the group one. After that I promised myself it would never happen again. I told myself every year: “don’t blink, don’t blink”. Back then, photography was on film. There were no re-dos, no instant feedback, no photoshop and no AI. Once the camera clicked, that was it.
‘Don’t blink, don’t blink,’ I’d think, while trying to keep my eyes open. Author provided
At the end of primary school, I’d visit my best friend’s house and envy the neat, chronological line of her school photos framed on her kitchen wall. Year by year, there she was, changing just slightly – a slow, steady record of growing up. I didn’t know why, but seeing framed evidence of time passing made me emotional. Maybe it was the certainty of the way her life was so neatly documented.
My own school photos never made it to the wall in such a tidy fashion. But they did make it into my father’s wallet, my mother’s purse, in frames above the piano, on the fridge, in photo albums and in many a drawer.
Small acts of rebellion
The 1950s photos are formal and solemn. Back then you stood straight, faced the camera and no one smiled too much. By the 1970s and ’80s, the kids started to smirk – with hair loosened, mullets, and bodies shifting like they were trying to resist the pose. In one photo, the basketball team boys have their shoes off, feet raised above the blistering asphalt in the summer heat. The rules were still there, but you can see them pushing back.
Bare feet raised in a photo of the Kandos High School Open basketball team, 1975. Author provided, courtesy of the Kandos Museum.
What if we invited the rituals to change? What if students could self-organise, be silly, pull faces, wear their own clothes, and resist gender binaries and institutional uniformity?
Some of the photos in the Kandos albums hint at this potential for small acts of rebellion. There’s the girl pulling a face, one laughing in profile. In one photo there’s a kid wearing a non-regulation jumper, and another in which they were clearly allowed to be silly because the teacher is laughing too.
Photographic rebellion in the class of 1996. Author provided, courtesy of the Kandos Museum.
In the pre-digital era, these small mishaps and moments of failure were captured unpolished and unfiltered. Those are the images I find myself drawn to; these are often the best ones. They reveal how uncomfortable it can be being photographed and how forced a pose can feel. Shirking a smile and a stiff stance is maybe the only power we have in that brief moment.
Cherine Fahd does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
More than 1.1 million Australians are estimated to be living with an eating disorder. Around one-third of these people are neurodivergent.
So why are neurodivergent people, such as autistic people and those with ADHD, more likely to experience eating disorders than the broader population? And how does this impact their treatment?
First, what is neurodivergence?
Neurodivergence, or the state of being neurodivergent, is a term for people whose cognitive functioning differs from what society considers “typical”. Many conditions broadly fall under neurodivergence, including (but not limited to):
autism
attention-deficit/hyperactivity disorder (ADHD)
dyslexia
Tourette’s syndrome.
Our understanding of neurodivergence has come a long way. Neurodivergence used to be considered a linear “spectrum” ranging from less to more neurodivergent.
We now know every neurodivergent person will have a unique experience across a range of dimensions. This includes sensory processing, motor abilities and executive functioning (working memory, cognitive flexibility and inhibition).
Conceptualising these differences ends up looking more like a colour wheel.
What are eating disorders?
Eating disorders are complex and potentially life-threatening mental health conditions. They cause persistent and significant disturbances in thoughts, feelings and behaviours related to body weight, food and/or eating.
Many factors are likely to contribute to the development of an eating disorder. But research shows neurodivergent people are disproportionately affected.
One review found around 22.9% of autistic people had an eating disorder, compared with 2% in the general population. In another review, people with ADHD were four times more likely to be diagnosed with an eating disorder than people without ADHD.
Why are eating disorders more common among neurodivergent people?
Science has not pinpointed an exact reason why eating disorders are more common among neurodivergent people. But here’s what we know so far.
Neurodivergent people are more likely to experience feeding difficulties, sensory sensitivities and disordered eating.
A United States study assessing the eating behaviour of neurodivergent children found around 70% of autistic children displayed “atypical” eating behaviours. This includes food selectivity and a hypersensitivity to food textures. It compares with 4.8% of neurotypical children.
Similarly, autistic children may choose or reject foods based on texture more than other children. They may prefer foods with a consistent texture, bland taste and neutral colour (for example, chicken nuggets, plain pasta and rice).
Selective eating (having limited accepted foods and food aversions) has been associated with avoidant/restrictive food intake disorder (ARFID). This is an eating disorder characterised by avoidance and aversion to food and eating that is not related to body image. ARFID is commonly associated with autism, with one study estimating 21% of autistic people will experience it in their lifetime.
Other neurodivergent traits, such as perfectionism and a preference for routine, have been associated with disordered eating and eating disorders.
Research on adolescent girls found those with anorexia nervosa are more likely to exhibit neurodivergent (in this case, autistic) traits and behaviours. These include developing rules, resistance to change and a hyperfocus on body weight. These features are commonly seen in anorexia nervosa, an eating disorder characterised by restricted food intake, an intense fear of weight gain and body image disturbances.
Meanwhile, impulsivity symptoms in ADHD have been associated with binge eating disorder. This can involve recurrent episodes of eating large amounts of food in a short period of time. Impulsivity may also be linked to bulimia nervosa, characterised by compensatory behaviours to prevent weight gain after binge eating (such as excessive exercise).
Some studies indicate a link between ADHD, alexithymia (difficulty experiencing, identifying and expressing emotions) and overeating behaviours such as emotional eating.
Finally, neurodivergent people are more likely to identify as part of the LGBTQIA+ community, experience trauma and also have a mental health condition. Each of these considerations increases the likelihood someone will experience an eating disorder.
How does this affect treatment?
Despite the overlap between eating disorders and neurodivergence, current treatment approaches don’t meet the diverse needs of those affected.
Eating disorder treatment often has moderate success at best. For neurodivergent people, the outcomes are worse than for their neurotypical counterparts.
Cognitive behavioural therapy (CBT), a broad range of treatments based on the interaction between thoughts, feelings and behaviours, is less beneficial for neurodivergent people. Yet this is often part of treatment for eating disorders. Autistic women have suggested CBT is less accessible due to its blanket approach and the assumption they have the skills needed to benefit.
Neurodivergent people need care that recognises and safely accommodates the multiple ways neurodivergence is related to feeding and eating behaviour.
Research suggests eating disorder treatment can be successfully adapted for neurodivergent people based on the following principles:
1. equal partnership. Including neurodivergent people as equal partners in their care and as decision-makers, and elevating their own experiences
2. embracing and celebrating differences. Neurodivergent traits should not be considered a deficit, or something to be “treated” or “fixed”. Rather, neurodivergent traits should be celebrated to nourish a positive sense of identity
3. accommodations. Neurodivergent traits and preferences are respected and accommodated. As an example, this might include reducing sensory inputs (the smell, sounds and lights) in a dining area, or a meal plan that is predictable and considers a person’s sensory sensitivities.
Breanna Lepre works for The University of Queensland and is a member of Dietitians Australia. Breanna is neurodivergent and has lived experience of an eating disorder.
Lauren Ball works for The University of Queensland and receives funding from the National Health and Medical Research Council, Queensland Health and Mater Misericordia. She is a Director of Dietitians Australia, a Director of the Darling Downs and West Moreton Primary Health Network and an Associate Member of the Australian Academy of Health and Medical Sciences.
Source: The Conversation (Au and NZ) – By Hamish Bradley, Adjunct Lecturer, Anaesthetist and Aeromedical Retrieval Specialist, The University of Western Australia
From the creeks that wind through inner city Melbourne to the far outback in Western Australia, snake season is beginning.
Over the cooler months snakes have been in a state of brumation. This is very similar to hibernation and is characterised by sluggishness and inactivity. As warmer conditions return, both snakes and humans become more active outdoors, leading to an increased likelihood of interaction. This may happen when people are hiking, dog-walking or gardening.
The risk of being bitten by a snake is exceptionally small, but knowing basic first aid could potentially save your, or another person’s, life.
Snake bite should always be treated as a life-threatening emergency. If you are bitten in rural or remote Australia, you will often be flown to a regional or metropolitan hospital for advanced care.
The effects of snake bites vary, depending on the species of snake and the first aid measures undertaken. Whatever the species, the immediate first aid is the same:
calling for help (dialing 000 or activating an emergency beacon)
applying a pressure immobilisation bandage
resting.
Why pressure is important
Snake venom is carried within the lymphatic system. This is a collection of tiny tubes throughout the body that return fluid outside of blood vessels back to the blood stream.
Muscles act as a “pump” to help the fluid move through this system. That’s why being still, or immobilisation, is vital to slow the spread of venom.
A firm pressure immobilisation bandage, applied as tight as you would for a sprained ankle, will compress these tubes and help limit the venom’s spread.
Ideally bandage the entire limb on which the bite occurred and apply a splint to help further with immobilisation. It is very important that the blood supply to the limb is not limited by this bandage.
Never attempt to capture or kill the snake for identification. This risks further bites and is not required for specialist care. The decision about when to give antivenom (if any) is based on the geographical location, symptoms, the results of blood tests and discussion with a toxicologist.
The tyranny of distance
People living in rural and remote locations may also have limited access to health care, including access to ambulance services, snake bite first aid such as bandages and splints, and to antivenom.
Over one year (as a component of a larger three-year study) we collected information on pre-hospital care and in-flight care with the Royal Flying Doctor Service Western Operations.
During this time, 85 people from regional, rural, remote and very remote Western Australia were flown by Royal Flying Doctor Service to hospital for suspected or confirmed snake bites. Reassuringly, only five of these patients (6%) ultimately received a toxicologist’s diagnosis of envenomation.
To move or not to move?
Troublingly, 38 (45%) of the 85 snake bite victims continued to move around and be active following their suspected snake bite. This raises questions about whether people lack knowledge of first-aid guidelines, or whether this is a consequence of being isolated, with limited access to health care.
Either way, our as-yet-unpublished research highlights the vulnerability of Australia’s rural and remote people. All patients eventually received a pressure immobilisation bandage, with an average time from bite to application of 38 minutes. Three quarters of the patients made their way to a health-care site on foot or by private car, arriving on average 65 minutes after the bite.
Rest and compression with a bandage are vital elements of snakebite first aid. Microgen/Shutterstock
What needs to change?
Our results indicate rural and remote Australians need innovative health-care solutions beyond the metropolitan guidelines, particularly when outside ambulance service areas.
Basic snake bite first aid education needs to be reiterated, and a pragmatic approach is required in these geographically isolated locations. This means being vigilant, staying safe and, when isolated, always carrying emergency technology to call for help.
The authors wish to acknowledge the efforts required through this research project as it continues, including by Fergus Gardiner, Kieran Hennelly, Rochelle Menzies, James Anderson, Alex McMillan and John Fisher. Hamish Bradley is an Aeromedical Retrieval Specialist and Principal Investigator in this project.
Alice Richardson receives funding from NHMRC.
Breeanna Spring is affiliated with Australian College of Midwives, Australian College of Nursing.
Hamish Bradley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation (Au and NZ) – By Milton Speer, Visiting Fellow, School of Mathematical and Physical Sciences, University of Technology Sydney
Water flows in mainland Australia’s most important river system, the Murray-Darling Basin, have been declining for the past 50 years. The trend has largely been blamed on water extraction, but our new research shows another factor is also at play.
We investigated why the Darling River, in the northern part of the basin, has experienced devastating periods of low flow, or no flow, since the 1990s. We found it was due to a decrease in rainfall in late autumn, caused by climate change.
The research reveals how climate change is already affecting river flows in the basin, even before water is extracted for farm irrigation and other human uses.
Less rain will fall in the Darling River catchment as climate change worsens. This fact must be central to decisions about how much water can be taken from this vital natural system.
The Darling River runs from the town of Bourke in northwest New South Wales, south to the Murray River in Victoria. Together, the two rivers form the Murray-Darling river system.
The Indigenous name for the Darling River is the Baaka. For at least 30,000 years the river has been an Indigenous water resource. On the river near Wilcannia, remnants of fish traps and weirs built by Indigenous people can still be found today.
The Darling River was a major transport route from the late 19th to the early 20th century.
In recent decades, the agriculture industry has extracted substantial quantities of water from the Darling’s upstream tributaries, to irrigate crops and replenish farm dams. Water has also been extracted from Menindee Lakes, downstream in the Darling, to benefit the environment and supply the regional city of Broken Hill.
A river in trouble
Natural weather variability means water levels in the Darling River have always been irregular, even before climate change began to be felt.
In recent years, however, water flows have become even more irregular. This has caused myriad environmental problems.
Compounding the droughts, the smaller flows that once replenished the system have greatly diminished. Our research sought to determine why.
What we found
We examined rainfall and water flows in the Darling River from 1972 until July 2024. This includes from the 1990s – a period when global warming accelerated.
We found a striking lack of short rainfall periods in April and May in the Darling River catchment from the 1990s onwards. The reduced rainfall led to long periods of very low, or no, flow in the river.
Since the 1990s under climate change, shifts in atmospheric circulation have generated fewer rain-producing systems. This has led to less rain in inland southeast Australia in autumn.
The river system particularly needs rainfall in the late autumn months, to replenish rivers after summer.
The periods of little rain were often followed by extreme floods. Even then, much of the rain fell on dry soils and soaked in, rather than running off into the river. This reduced the amount of water available for the environment and human uses.
In addition to the fall in autumn rainfall, we found the number of extreme annual rainfall totals for all seasons has also fallen since the 1990s.
We also examined monthly river heights at Bourke, Wilcannia and Menindee. We found periods of both high and low water levels before the mid-1990s. But the low water levels at all three locations from 2000 onwards were the lowest in the study period.
Ensuring water for all
Australia is the driest inhabited continent on Earth. Ensuring steady water supplies for human use has always been challenging.
Falls in Darling River water levels in recent decades have largely been attributed to water extraction for farm dams, irrigation and town use.
But as our research shows, the lack of rainfall in the river catchment – as a result of climate change – is also significant. The problem will worsen as climate change accelerates.
This creates a huge policy challenge. As others have noted, the Murray-Darling Basin Plan does not properly address climate change when determining how much water can be taken by towns and farmers.
Both the environment and people will benefit from ensuring the rivers of the basin maintain healthy flows into the future. As our research indicates, this will require decision-makers to consider and adapt to climate change.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation (Au and NZ) – By Dr Rico Merkert, Professor in Transport and Supply Chain Management and Deputy Director, Institute of Transport and Logistics Studies (ITLS), University of Sydney Business School, University of Sydney
Qatar Airways has announced plans to buy a 25% minority stake in Virgin Australia from its owner, US private equity firm Bain Capital.
The two airlines have already had a strong relationship as “codeshare partners” since 2022. Codesharing is where airlines agree to sell seats on each other’s flights. This new announcement, however, is a big step up.
All of this will, of course, be subject to approval from both Australia’s Foreign Investment Review Board and the Australian Competition and Consumer Commission (ACCC). But there could be a range of winners if it goes ahead.
Perhaps most importantly for Australian travellers, the move means Virgin Australia will be able to compete as it once did on long-haul international routes.
This is because a proposed “wet lease” agreement – in which one airline provides full aircraft, crew and relevant services to another – could see Virgin Australia start operating its own flights from Brisbane, Melbourne, Perth and Sydney to Doha as early as mid-2025.
It’s also a win for Bain Capital, which had been trying to offload some of its stake in the airline after acquiring it in crisis in 2020.
So with the prospect of a renewed international foothold for Virgin Australia, could we soon see more competition – and real consumer benefits – on the “Kangaroo Route” between Australia and Europe?
Clearer skies for Qatar?
As you might remember, Qatar Airways’ previous attempts to expand in Australia haven’t always gone smoothly.
Today’s announcement comes little more than a year after Transport Minister Catherine King controversially blocked a request by Qatar to double the number of flights its state-owned airline Qatar Airways was allowed to fly into major Australian airports.
Given the intense public backlash to this decision, it’s possible a renewed application by Qatar would have been more successful. A large expansion of flights by Turkish Airlines was later quietly approved.
But this new deal may diminish the need to try again. By wet-leasing wide-body aircraft so Virgin Australia can operate its “own” long-haul routes to Doha (connecting into Europe), Qatar will effectively bypass the need to get government approval for the additional flights.
Back in 2023, my calculations suggested Qatar’s application to expand should have been approved. Capacity on the Kangaroo Route was only back to 70% of pre-COVID levels. That meant the major players operating flights – including the Qantas–Emirates alliance – could charge significantly more than before the pandemic.
Using the latest flight schedule data, we can show that the capacity between Australia and the Middle East is still 17% below what it was before the pandemic. If Virgin Australia’s proposed long-haul re-entry goes ahead, we could see much more capacity on these routes, and a formidable challenger to the Emirates–Qantas arrangement.
It’s easy to see why Virgin and Qatar might be excited. The deal will extend Virgin Australia’s reach – and that of its frequent flyers – into Europe and other destinations via Doha. But this goes both ways, and could also mean more demand on its domestic network.
Similarly, the additional flights into Doha will feed Qatar Airways’ network, an airline that seems to be going from strength to strength.
Despite historical troubles at Doha’s main airport, Qatar Airways is now one of the world’s largest airlines. It has once again been ranked as the world’s best airline by the independent air transport rating organisation Skytrax.
Both airlines were also keen to point out benefits of the partnership they said would go beyond additional services and increasing competition in the Australian market.
These include the potential to work together towards various sustainability initiatives and on developing Western Sydney’s aviation ecosystem, providing exciting new opportunities for employment and training.
Not yet a done deal
However, they’re still a long way from the finishing line. Whether this deal will actually materialise remains to be seen.
It is worth noting this is not the first time Virgin Australia has been part-owned by an airline in the Middle East. Before Virgin Australia’s collapse into administration in April 2020, Etihad held a 21% equity stake.
Further, it remains to be seen what aircraft Virgin Australia will actually get access to and how the service will be perceived. Qatar Airways is guaranteed a transaction win through the wet-lease, without taking on the brand and profit risks of operating these services.
How much concern this will stir at Qantas also remains to be seen, but one thing is clear. Project Sunrise – Qantas’ plan to bypass the Middle Eastern hubs and connect Australia directly with Europe – could soon become much more important.
Emirates is unlikely to emerge as the winner of this move, now set to face increased competition not only on services connecting Australia with the Middle East, but also across its broader network through Dubai.
Qatar Airways acquiring a stake in Virgin Australia will also create interesting dynamics within the Oneworld Alliance, in which both Qantas and Qatar Airways are key partners. There are certainly interesting times ahead.
Dr Rico Merkert receives funding from the ARC and various industry partners. He loves to work with and for airlines, including Qantas and Virgin Australia.
Meteorologists are again predicting a possible La Niña this summer, which means Australia may face wetter and cooler conditions than normal.
It would be the fourth La Niña in Australia in five years, and highlights the need for Australians to prepare for what may be an extreme weather season.
Typically, a La Niña or its counterpart, El Niño, signals its arrival earlier in the year. Signs of this potential La Niña are emerging fairly late. That’s where new research by my colleagues and me may help in future.
La Niña and El Niño explained
La Niña and its opposite phase, El Niño, are created by changes in ocean temperatures in the Pacific Ocean’s equatorial region. Together, the two phenomena are known as the El Niño Southern Oscillation.
The oscillation is said to be in the positive phase during an El Niño and the negative phase during a La Niña. When sitting between the two, the cycle is in neutral phase.
Australia’s Bureau of Meteorology is in “watch” mode, predicting a 50% chance of a La Niña weather pattern forming later this year. In the United States, the National Oceanic and Atmospheric Administration puts the likelihood at 71%.
La Niña occurs when strengthening winds change currents on the ocean surface, pulling cool water up from the deep.
The winds also push warm surface waters into the western Pacific and north of Australia, bringing increased cloud and rainfall. This usually means above-average rainfall and cooler temperatures for Australia, particularly in the east and north.
Conversely, an El Niño weather pattern generally brings hotter temperatures across Australia, and less rainfall in the east and north.
The Bureau of Meteorology is in La Niña ‘watch mode’. Bureau of Meteorology
Paths of destruction
La Niña or El Niño events can cause devastation around the world.
The El Niño in 2015–16, for example, caused crops to fail and affected the food security and nutrition of almost 60 million people globally.
In Australia, El Niño events can bring increased risk of drought, bushfires and heatwaves, and water shortages.
Meanwhile, rainfall associated with La Niña conditions can lead to greater crop yield. But particularly heavy rainfall can wash crops away. It also heightens flood risks for some communities.
These far-reaching impacts mean it’s essential to plan ahead when a La Niña or El Niño is on the cards. But predicting these events has always been tricky.
Both types of events usually develop in the Southern Hemisphere autumn, peak in late spring or summer, and weaken by the next autumn. But it’s now late spring without a clear La Niña declaration. Why the delay?
Climate change is one factor. The Bureau of Meteorology says as oceans absorb heat from global warming, it’s harder to spot the specific warming patterns linked to La Niña.
The sheer complexity of the ocean-atmosphere system adds to the difficulty. The computer models used to predict El Niño and La Niña are improving all the time.
But scientists still need more information on deep ocean processes, and how winds affect the oscillation.
Predictions are hardest during the Southern Hemisphere’s autumn. That’s because the cycle then is very susceptible to change – teetering at a point where either a La Niña or El Niño could develop.
That’s why the earliest an El Niño or La Niña can be predicted is usually around May or June.
But new research offers a way to predict the events much earlier – and start preparing if necessary.
Better, earlier forecasts
The study, which I led, assessed the likelihood of La Niña or El Niño events occurring in succession – either in the eastern or central region of the Pacific Ocean.
This distinction is important. For Australia, El Niño and La Niña events peaking in the Central Pacific, closer to our continent, have greater impacts here than those peaking in the east, closer to South America.
We analysed weather observations, and the sequence of past El Niño and La Niña events, over the past 150 years. We also examined climate models for future changes in transitions between El Niño and La Niña events.
From this, we determined the likelihood of an El Niño or La Niña occurring in two consecutive years.
We found most El Niño events are followed by neutral conditions the next year (with a likelihood of 37–56%).
But La Niña behaves differently. In 40% of cases, a Central Pacific El Niño could follow an Eastern Pacific La Niña. And there is a 28% chance of two consecutive La Niña events in the Central Pacific.
These results allow for more advanced predictions. By identifying patterns in this way, the odds of an El Niño or La Niña can be predicted up to a year in advance.
El Niño or La Niña are the result of complex interactions between winds and sea in the Pacific Ocean. Shutterstock
Looking ahead
So, what does our research suggest for Australia? Will a La Niña develop here this year?
From September last year, Australia experienced a strong Eastern Pacific El Niño. So our findings suggest there is only a 17% chance of La Niña this year.
If a La Niña arrives, it will likely peak in the Central Pacific, potentially affecting Australian rainfall. But overall, any La Niña that develops this late is likely to be weak and relatively short-lived.
Our research also found that as climate change accelerates, the El Niño Southern Oscillation is likely to shift. For example, the odds of two consecutive El Niños peaking in the central Pacific region will likely increase. And we can expect fewer calm, neutral years between events.
We hope our research enables more accurate, long-range forecasts, giving communities additional time to plan and prepare.
Mandy Freund receives funding from the ARC Centre of Excellence for 21st Century Weather
Australia has a long history of domestic airlines collapsing, often affecting thousands of travellers, yet the industry provides little or no recompense.
Even the federal government’s recently released aviation discussion paper recognised the need for change by recommending important protections for passengers. These included making airlines honour refunds if flights were cancelled or significantly delayed.
The 2024 Aviation White Paper included the most consumer friendly proposals in 30 years. However, there was one significant omission in the 156-page report.
There was no mention of insolvency protection for airline passengers. To put it simply, if a domestic or international airline collapses there is little likelihood passengers who paid airfares will receive a refund.
In most cases, passengers affected by airline collapses receive little or no compensation. Fewer than 20% of Australian domestic passengers pay for domestic travel insurance compared to the 90% of Australians who buy insurance when they fly internationally.
A history of failed airlines
Since 1990 we have seen the rise and fall of multiple Australian airlines. This includes Compass Mark 1, Compass Mark 2, Ansett Airlines, Impulse Air and Aussie Air.
In May, Bonza collapsed after less than a year of operation. And more recently, services operated by REX (Regional Express) between capital cities stopped and its regional services are under pressure.
Virgin and Qantas immediately volunteered to honour the inter-city bookings of some REX ticket holders. However, nearly all affected Bonza passengers lost their money because no other airlines flew the same routes.
The risk of both domestic and international airline collapses affecting Australian travellers is real. Consumers are as entitled to be protected from that risk as they are from many other travel related risks.
The UK and European approach
The UK approach to insolvency insurance has worked well since 1973. The UK scheme is known as “ATOL”, the Air Travel Organiser’s Licence. It applies to package tour companies that sell air travel combined with land tours or accommodation.
This user-pays, government-guaranteed insurance cover is compulsory for all British travellers who book a package tour. It costs only A$5 per person. It guarantees a full refund and return flights to the passenger’s point of origin if the tour operator goes out of business.
As part of a 2024 book I co-edited with Bruce Prideaux, I focused on the collapse of the famous British tour operator, Thomas Cook in 2019.
I also compared insolvency consumer protection in the UK with that of Australia and New Zealand.
The Thomas Cook experience
When Thomas Cook collapsed in the United Kingdom and Europe, 600,000 British and European Union passengers were fully refunded the cost of their disrupted tours and flown back to their port of departure under their regions’ respective schemes.
Funding built into the UK scheme covered full refunds to affected passengers at negligible cost to the government, which guaranteed the scheme.
By contrast, a far smaller collapse of two Australian based tour operators, Tempo Holidays and Bentours in September 2019 affected fewer than 1,000 passengers.
However not all the affected travellers were refunded due to the limitations of the insolvency scheme run by what was then the Australian Federation of Travel Agents.
Under this scheme, travellers only receive insolvency protection if they pay by credit or debit card, relying on banks to issue refunds if a tour operator becomes insolvent. If a passenger paid for their tour by cheque or cash, no refund applied.
What Australia needs
There are three key categories of business insolvency which affect travellers: the collapse of an airline, the collapse of a tour operator and the collapse of a travel agent.
If the Australian government is genuinely interested in protecting travel consumers at minimal cost to the taxpayer, we should be using the UK and European schemes as a model.
A compulsory user-pays, government-guaranteed insolvency protection scheme would cost consumers very little and would be an ideal safety net in the event their travel company goes bust.
David Beirman is affiliated on an honorary basis with DFAT’s Consular Consulting Group, a stakeholder group which advises DFAT on government travel advisories and broader issues of tourism safety and security.
Last month, a delegation led by Brendan Crabb, head of the Burnet Institute, a prestigious medical research body, met Anthony Albanese in the prime minister’s parliament house office.
Its members, who included Lidia Morawska from Queensland University of Technology, a world-leading expert on air quality and health, also blitzed ministers and staffers. They were pitching for the federal government to spearhead a comprehensive policy on clean indoor air and for the issue to be put on the national cabinet’s agenda.
They pointed out to Albanese that indoor air is an outlier in our otherwise comprehensive public health framework. Despite people spending the majority of their time inside, indoor air quality is mostly unregulated, in contrast to the standards that apply to, for example, food and water.
There are multiple health and economic reasons to be concerned about this air quality but a major one is to limit the transmission of airborne diseases, such as COVID.
For many of us, COVID has become just a bad memory, despite its lasting and mixed legacies. For instance, without the pandemic, fewer people would now be working from home. More small businesses would be flourishing in our CBDs. Arguably, fewer children would be trying to catch up from inadequate schooling.
While the media have largely lost interest in COVID, and people are now rather blase about it, the disease is still taking a toll.
In 2023 there were about 4,600 deaths attributed to COVID, and almost certainly more in reality, given Australia that year had 8,400 “excess deaths” (defined as actual deaths above expected deaths).
Up to July this year there were 2,503 COVID deaths.
In nursing homes, while survival rates from COVID have much improved with vaccination and antivirals, as of September 19 there were 117 active outbreaks, with 59 new outbreaks in the past week. There had been 900 deaths for the year so far.
Long COVID has become a serious issue, with varying respiratory, cardiac, cognitive and immunological symptoms. It is estimated between 200,000 and 900,000 people in Australia currently have long COVID.
The Albanese government is presently awaiting the report it commissioned into how the COVID pandemic was handled.
The inquiry has looked at the performance of the Morrison government, but its terms of reference didn’t include the states. That limits its usefulness, but there were politics involved, given the high-profile state Labor governments of the time.
Not that the state and territory leaders of that time are around anymore (apart from the ACT’s Andrew Barr). The faces that became so familiar from their daily news conferences have disappeared into the never-never: Victoria’s Dan Andrews, Western Australia’s Mark McGowan, New South Wales’ Gladys Berejiklian, Queensland’s Annastacia Palaszczuk.
COVID variously made or tarnished leaders’ reputations. McGowan, in particular, reached stratospheric heights of popularity. Andrews deeply divided people.
In general, however, COVID boosted support for leaders and increased public trust in them and in government. In times of uncertainty, the public looked to known institutions and to authority figures. Since then, trust has eroded again.
Experts came into their own during the pandemic but then found themselves in the middle of the political bickering. In retrospect, some of them were wrong.
In the broad, especially in terms of the death rate and the economy, Australia navigated the crisis well. But drill down, and the story is more complex, as documented by two leading economists, Steven Hamilton (based in Washington and connected to the Australian National University) and Richard Holden (from UNSW).
In their just-published book, Australia’s Pandemic Exceptionalism, their bottom-line conclusion is that Australia was very impressive in its (vastly expensive) economic response but it was a mixed picture on the health side.
While Australia was quick out of the blocks in closing the national border and bringing in other measures, it fell down dramatically on two fronts. The Morrison government failed to order a wide variety of vaccines and it failed to buy enough Rapid Antigen Tests (RATs).
The “vaccine procurement strategy was an unmitigated disaster,” Hamilton and Holden write. This was not just “the greatest failure of the pandemic – it was arguably the greatest single public policy failure in Australian history”.
“We put all our vaccine eggs in just two baskets”, both of which failed to differing degrees. This was “a terrible risk to take. Pandemics are times for insurance, not gambling,” they write.
“And while our tax and statistical authorities marshalled their forces to operate much faster and more nimbly to serve the desperate needs of a government facing a once-in-a-century crisis, our medical regulatory complex repeatedly ignored international evidence and experience, and our political leaders capitulated to their advice. And then the prime minister told us that when it came to getting Australians vaccinated: ‘it’s not a race’.”
The failure to order every vaccine on the horizon meant when production or supply problems arose for those that were hoped for or on order, the rollout was delayed.
After this bungle, “stunningly, we turned around and repeated these same mistakes all over again” by not obtaining and distributing freely massive numbers of RATs. In this failure, “our federal government showed the same lack of foresight, the same penny-wise but pound-foolish mindset that it had displayed in the vaccine rollout”.
The authors blame Scott Morrison, then-health minister Greg Hunt, then-chief medical officer Brendan Murphy, the Therapeutic Goods Administration (TGA), and the Australian Technical Advisory Group on Immunisation (ATAGI) for the health failures, which prolonged the lockdowns, cost lives and delayed reopening.
Urging better preparation for the next pandemic, Hamilton and Holden have a list of suggestions. They stress we need to ensure we have mRNA vaccine manufacturing capability (on which there is fairly good progress). We must get vaccine procurement “right from the start” regardless of cost. Huge quantities of RATs should be procured as soon as they become available, ready to be used immediately.
A complete overhaul of the medical-regulatory complex should be undertaken. As well, Australia should continue to invest in “economic infrastructure”. In the pandemic, the economic effort was facilitated by having a single touch payroll system. “The first obvious candidate for improvement is a real-time GST turnover reporting capability.”
Perhaps a comprehensive indoor clean air policy could be added to the infrastructure list.
The government’s review will have its own recommendations. Crabb and his colleagues hope they include attention to indoor air quality, following advice from the Chief Scientist and the National Science and Technology Council.
Members of the delegation say they received an attentive hearing from the PM.
Anna-Maria Arabia, chief executive of the Australian Academy of Science, and a member of the delegation, says Albanese “understood that improving indoor air quality is a cornerstone requirement to preparing for future pandemics and [he] was attuned to the practical implications of having good indoor air quality systems, including schools and workplaces being able to stay open and functional, reduce absenteeism and boost productivity”.
What’s needed beyond awareness, however, is timely policy action. Pandemics don’t give much notice of their arrival.
Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.