Do you really know what you look like on the inside? Most people do not, and usually it takes surgery or medical imaging to take a look while we are still alive.
A case study was published last week where researchers made the rare finding of a man with “triphallia”. Most people would say the man had three penises. But anatomists like me, who teach health professionals about the structure of the human body, prefer the term penes (plural of penis).
This finding emerged from the dissection of the body of a 78-year-old man who had donated his body to science. It is a case that has left many anatomists scratching their heads, and ignited discussions about typical human anatomy and anatomical variation.
I too have an extra organ – an extra spleen – plus other anatomical variations involving two muscles. You might well have anatomical variations too, and not necessarily know it.
Back to this case
According to the latest study, only one penis was externally visible. But when his body was dissected, there were two extra, smaller penises inside the scrotum.
The main penis was 77mm long and 24mm wide, with the smaller ones about half the size. However, the images provided in the study don’t seem to match the written descriptions in all places. So the study does need clarification.
Intriguingly, researchers identified a single urethra – the hollow tube from the bladder that allows urine (and sperm from the testes) to leave the body. This urethra travelled from the bladder through part of one of the smaller penises and along the length of the main penis, leaving out the third penis entirely.
Was there a misunderstanding in identifying these anatomical structures? Could the second penis simply be a misidentified part of the main one? Is this actually a case of diphallia – two penises? Whatever the answer, the man’s anatomy was different to what you’d typically see in anatomy textbooks.
The study suggests all three penises contained erectile tissue capable of engorgement. But it remains unclear whether they worked independently or together. Unfortunately, the authors did not confirm structures by examining them under the microscope, or report tracing the nerves or blood vessels, to shed more light.
A separate case of someone with three penises, which was documented in 2020, involved a three-month-old infant.
In this instance, the main penis was in its typical position, but you could see the extra ones on the perineum (between the anus and the scrotum in males).
Neither of the extra penises had a urethra, making them incapable of functioning typically. Ultimately, these non-functional penises were safely removed.
Such cases are rare, with only these two examples reported in medical databases.
So how does this happen? The answer may lie in how embryos develop.
Early in development
The penis begins to develop early in the first trimester of a 40-week pregnancy, a time when a woman may not know she’s pregnant.
During this critical period, the embryo may be exposed to various influences. These include toxins passed through the bloodstream if the mother falls ill, takes certain drugs while pregnant or is exposed to certain chemicals. There are also genetic factors that shape how organs develop.
By the fifth week of pregnancy, cells migrate to the midline of the embryo, where they help form the precursor to the penis.
Problems in this migratory process, abnormalities in a developmental gene (called “sonic hedgehog”), or fluctuations in testosterone levels or receptors during early fetal development, could potentially lead to the formation of additional penises.
While the appearance of triphallia may be startling, these rare cases highlight a broader point: our anatomy can vary significantly. Just as individuals differ in their external appearances, so too does our internal anatomy.
For example, there are anatomical variations in blood vessels, organs, muscles, nerves and even bones that may not be readily visible.
Indeed, incidental findings during my own medical examinations have found I have a supernumerary (or extra) spleen, called a splenunculus, an extra flexor digitorum longus muscle (in my leg), and I’m missing both palmaris longus muscles (in my forearms).
While my anatomical variations are internal, a common example of a visible external anatomical variation is extra nipples. These can be mistaken for moles and can also result from developmental issues in the early weeks of pregnancy.
Why is this important?
Cases like the man said to have three penises are important reminders of the complexities of human anatomy and the many factors that can influence our bodies from the very start of development.
Exploring these rare findings emphasises the importance of continued research in anatomy and embryology.
These findings also highlight the importance of a healthy lifestyle for people intending to fall pregnant, and for those who already are, so growing embryos have the best chance of developing typical anatomy.
Amanda Meyer is affiliated with the Australian and New Zealand Association of Clinical Anatomists, the American Association for Anatomy, and the Global Neuroanatomy Network.
Source: The Conversation (Au and NZ) – By Nicky Morrison, Professor of Planning and Director of Urban Transformations Research Centre, Western Sydney University
Essential workers such as teachers, health workers and community safety staff play a vital role in ensuring our society works well. Yet soaring housing costs in cities like Sydney, Melbourne and Brisbane are squeezing essential workers out of the communities they serve.
The issue is reaching crisis point across Australia. Anglicare Australia yesterday released a special edition of its Rental Affordability Snapshot focused on essential workers in full-time work. Housing costs under 30% of household income are considered affordable. In a survey of 45,115 rental listings, it found:
3.7% were affordable for a teacher
2.2% were affordable for an ambulance worker
1.5% were affordable for an aged care worker
1.4% were affordable for a nurse
0.9% were affordable for an early childhood educator
0.8% were affordable for a hospitality worker.
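The 30% affordability rule used in the snapshot above can be sketched as a simple check. The income and rent figures here are purely illustrative, not drawn from the Anglicare data:

```python
def is_affordable(weekly_rent: float, gross_weekly_income: float) -> bool:
    """A listing counts as affordable if rent takes no more than
    30% of gross household income (the standard used in the snapshot)."""
    return weekly_rent <= 0.30 * gross_weekly_income

# Illustrative only: a household earning $1,500/week can afford
# at most $450/week in rent under the 30% rule.
print(is_affordable(450, 1500))  # at the threshold
print(is_affordable(600, 1500))  # 40% of income, over the threshold
```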
This trend is creating unsustainable patterns of urban sprawl and long commutes. It erodes workers’ quality of life. It also undermines public service delivery by making it harder to recruit and retain these workers in high-cost areas.
International experience, particularly in the UK where I have advised on similar policies, shows there are solutions to this crisis. These global lessons fall into four categories.
Essential workers face long commutes from home when they can’t afford to live in the communities they serve. Halfpoint/Shutterstock
1. Define essential worker housing
Essential worker housing typically targets front-line public sector workers on low to middle incomes. Yet eligibility should extend to support roles, such as ambulance drivers, porters and medical receptionists, who play a vital part in enabling front-line services. They too struggle to find affordable housing near their workplaces.
Conditions of eligibility should also include a cap on household earnings.
The UK experience highlights the importance of providing both rental and ownership options. To keep key worker housing affordable and accessible over time, both types need to be priced appropriately.
Australian cities could adopt similar approaches, by requiring housing developers and community housing providers to allocate affordable housing for essential workers. Prices would be below market rates for both rentals and home ownership for the long term, and not revert to market rates. This ensures stability for public service workers.
2. Financial innovations focused on long-term affordability
Innovative financial models, such as shared equity schemes, have succeeded in the UK. These allow workers to gradually buy into their homes, creating long-term stability.
Shared equity involves the government or another investor covering some of the cost of buying the home in exchange for an equivalent share in the property. Australia could explore similar schemes to provide immediate relief while ensuring sustained affordability for future essential workers.
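The shared-equity arithmetic described above can be sketched as follows. The prices are made-up figures for illustration, not taken from any actual scheme:

```python
def shared_equity_split(price: float, buyer_contribution: float):
    """The government or investor covers the gap between what the buyer
    can finance and the purchase price, taking an equivalent share
    of the property (and of any later sale proceeds)."""
    investor_contribution = price - buyer_contribution
    investor_share = investor_contribution / price
    buyer_share = 1 - investor_share
    return buyer_share, investor_share

# Illustrative: an $800,000 home where the buyer finances $600,000.
buyer, investor = shared_equity_split(800_000, 600_000)
print(f"buyer {buyer:.0%}, investor {investor:.0%}")  # buyer 75%, investor 25%
```

The buyer can later purchase additional shares from the investor, gradually moving toward full ownership, which is the “gradual buy-in” the UK schemes rely on.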
This approach could build on the Commonwealth’s proposed Help to Buy scheme, currently before the Senate, and existing state and territory shared equity programs. These may need refinement to better serve essential workers by, for example, adjusting income thresholds and eligibility criteria to ensure they qualify. These schemes also need to expand to cover all urban areas where housing affordability is most strained.
3. Leverage planning systems
Countries like the UK have leveraged their planning systems to deliver affordable housing for key workers. In England, planning authorities use mechanisms such as Section 106 agreements to ensure a portion of new developments is reserved for key worker housing as a condition of planning approval.
Australian states could adapt this model, setting targets within existing planning frameworks. For example, they could use Voluntary Planning Agreements to prioritise essential worker housing.
Yet essential worker housing should not displace housing for other people in urgent need. They include people who are homeless, low-income families, people with disabilities, the elderly, those at risk of domestic violence, veterans and youth leaving foster care.
4. Use public land for housing development
The use of surplus public land for essential worker housing has proven successful in several cities, including London, Amsterdam and San Francisco.
Earmarking land owned by the public sector, such as hospital or education sites, is a strategic way to deliver affordable housing near key public sector employers. It also allows staff to travel to work nearby using sustainable transport instead of cars.
Affordable housing has profound benefits
Without action, essential workers are likely to be forced into lower-quality, high-cost housing, shared accommodation, or long commutes from more affordable areas. Over time, these patterns of job-housing imbalances and urban sprawl are unsustainable. These issues are the focus of my current research, particularly in Western Sydney.
The New South Wales government has set up a parliamentary select committee to inquire into options for essential worker housing. It’s bringing much-needed attention to the housing crisis affecting key public sector roles.
Tackling these issues through targeted housing solutions has many benefits. It can help create more sustainable communities, reduce recruitment and retention difficulties for employers and ease the strain on infrastructure and services.
The key takeaway from the UK and other countries is the importance of long-term, sustainable solutions that do not shift the focus away from those most in need of housing. Australia has the opportunity to strike this balance. We need to ensure essential workers can afford to live near their workplaces while not sidelining everyone else in need of affordable housing.
Nicky Morrison does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Former One Direction band member and solo artist Liam Payne has been found dead outside a hotel in Buenos Aires, media reports have confirmed. Payne was just 31 years old – a loved friend and father.
Alongside his former One Direction band mates Niall Horan, Harry Styles, Louis Tomlinson and Zayn Malik, Payne had a huge influence on popular culture in his home country of the United Kingdom and internationally.
The group formed in 2010 on the British talent show X Factor and stayed together for about five years before officially splitting in 2016. Throughout this time, Payne remained a valuable member of the band and a clear talent in his own right.
Although each member auditioned separately, they were eventually hand-picked by Simon Cowell to form a group.
After the split (and a brief hiatus from music-making), Payne continued to release music periodically as both a songwriter and collaborator. He most recently released the single Teardrops in March, ahead of an anticipated second solo album.
News of Payne’s death has led to an outpouring of tributes. Like many young people thrust into stardom seemingly overnight, his life wasn’t without controversy. But the response to his death by fans and industry colleagues alike is proof of the impact he had.
The making of a pop supergroup
While One Direction may not have been together for as long as other globally successful acts, their influence far exceeded bands that have been together for decades. They released five studio records – and broke many more, including six Guinness World Records. And even though they didn’t make it to their 10th anniversary together, they had still sold some 70 million records by 2020.
In the years since the split, fans continued to gather, listen and celebrate – with the most recent anniversary (14 years) seeing fan-led events held in Australia and the rest of the world.
It’s easy to dismiss pop music and its influence, especially in the face of what feel like increasingly dire global circumstances. But pop, like many other forms of entertainment, provides a practical way for people to gain momentary pleasure and comfort.
It also provides connection with others – and relief from politics and other daily pressures. For example, one of One Direction’s biggest hits, What Makes You Beautiful, sought to empower young people who might otherwise be overwhelmed by negative messaging.
Within a year of their debut, the group was met with massive crowds of fans almost everywhere they went.
One Direction has been compared to The Beatles in terms of their influence on young people – and female and queer fans in particular.
The impact on fans when their idol dies
The loss of life, especially a young person’s life, is always a tragedy.
For some young fans, this might be the first person they “know” who has died. While it may not be the same as losing a family member or close friend, the feeling of loss is significant. Young fans will need support. And in 2024, many will find this support through social platforms and online forums.
I still remember the impact the deaths of stars such as Kurt Cobain and Jeff Buckley had on people like me who were teenagers in the 1990s. These were artists I admired and listened to – and whose art I relied on during times of pleasure and pain.
A similar pang was felt when artists such as George Michael, Aretha Franklin and David Bowie died, albeit later in my life and theirs.
The experience of losing a music idol is in many ways a universal one. People whose art we attach to our own life experiences become inseparable from our lives. And when they die, it can feel like those experiences are over too.
After news of Payne’s death broke, hundreds of fans took to the streets of Palermo in Buenos Aires, where Payne had been visiting. They held a vigil, cried and consoled one another in front of the Casa Sur hotel where Payne had been staying.
One fan, 25-year-old Yamila Zacarias, probably spoke for many when she said:
He meant a lot to me because the band came into my life at this time when you’re trying to be a part of something, and being a One Direction fan became that something for me.
Lifelong fandom and memories
There’s a stereotype of “fans” as hordes of screaming girls, which can really take away from the depth of fandom.
Anyone at any stage of life can be a fan of just about anything. And the best thing about fandom is that it can, and often does, allow lots of different types of people an outlet for connection throughout their lives.
Many fans have left comments on old music videos. YouTube/screenshot
The death of US actress Betty White in 2021, as sad as it was, brought people across generations and walks of life together. And not just those who knew her personally, but those who had connected with each other through their love of her work. It reminded me of my own family, including my Nan and Dad, now gone, and the laughs we’d share as we watched her.
As more details and tributes to Payne’s life and death emerge, the fans will have each other to lean on. If you yourself know someone who is a fan of Payne or One Direction, even reaching out to just acknowledge that person’s grief and experience is important. It says to them, “what you love is valid, and so are you”.
If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.
Liz Giuffre does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Clusters of severe thunderstorms are expected to strike Australia’s southern regions over Thursday and Friday.
The Bureau of Meteorology has issued severe weather warnings and forecasts related to these unusually widespread stormy conditions as they move through South Australia today and into Victoria.
As of October 17th, there’s a risk of severe thunderstorms for parts of central and southern Australia.
While we might not always think of thunderstorms as a threat, severe storms can be surprisingly damaging. The enormous Sydney thunderstorm of 1999 dropped an estimated 500,000 tonnes of hail, causing widespread damage to cars and roofs. It remained the most expensive natural disaster on record until the unprecedented 2022 floods across eastern Australia – which were themselves partly caused by severe thunderstorms acting alongside other weather systems.
When severe thunderstorms bring torrential rain, they can often trigger flash flooding. This is because extreme rain from thunderstorms usually falls over a relatively short time – less than an hour or two in many cases. Lightning can also pose a threat.
In recent years, severe thunderstorms have also shown they can damage the power grid. In 2016, huge rotating supercell storms brought intense winds and at least seven tornadoes to South Australia, toppling transmission towers and causing a statewide blackout. Smaller thunderstorms caused major outages in Victoria in February this year after taking down six towers.
But what makes a thunderstorm “severe”?
The ingredients for a storm
What triggers thunderstorms? Climate scientists and meteorologists often talk about the ingredients necessary for thunderstorms.
To make a normal thunderstorm, you need to have a lot of moisture in the air. Then you need vertical instability in the atmosphere, meaning relatively warm moist air near the surface and very cold air above. You also need a mechanism to lift warmer surface air up to a level where the atmospheric instability can be released.
For a severe thunderstorm, you need all those ingredients and usually one more: vertical wind shear. This means that wind speeds and direction differ with height. For example, you might have strong northerly winds down low, and strong southerly winds up higher.
Vertical wind shear can make a run-of-the-mill thunderstorm much more intense, in a range of ways. For instance, wind shear can help warm updrafts stay separate from cold downdrafts and rainfall, which can help make the storm last longer.
If a thunderstorm has large hail, damaging wind gusts or could trigger a tornado or flash flooding, this makes it a severe thunderstorm, according to Bureau of Meteorology classification.
You might have also heard of supercell storms. These are convective thunderstorms, characterised by strong, rotating updrafts that last for a long time.
Forecasters can predict the potential for severe thunderstorms several days out by looking for moisture-laden air and winds. But predicting exactly where and when they might pop up is extremely challenging.
Severe storms can bring lightning, hail, intense winds and rain. Pictured: a previous thunderstorm over Perth’s northern suburbs. cephotoclub/Shutterstock
What’s unusual about these storms?
The storms this week are unusually widespread, with thunderstorms possible from Kalbarri in central Western Australia down through Esperance, across into South Australia, into Victoria and up through New South Wales and southern Queensland.
These conditions are due to a large-scale low pressure system moving west to east.
As this large low pressure system moves east, it brings thunderstorms. This map shows the low pressure system on October 16th. Bureau of Meteorology, CC BY-NC-ND
Ahead of the arrival of this low pressure system, winds from the north are bringing down moisture and instability and priming the system for thunderstorms. When air near the low pressure system begins to rise, energy from the warm, moisture-laden and unstable air can be released. This includes energy release due to condensation of water vapour. These rising air currents can travel several kilometres up into the atmosphere, even reaching the top of the troposphere, 10–15km up.
Severe thunderstorms in southern Australia are more likely in spring and summer. That’s because there’s plenty of moisture available from the tropics and the warm oceans around Australia, while low pressure systems and cold fronts can still emerge from the cold oceans to our south.
Thunderstorms, tornadoes and fire
Severe thunderstorms can also pack a hidden punch. They can trigger tornadoes in extreme cases.
In August, severe thunderstorms hit northern Victoria and triggered a tornado, a destructive whirling column of air that damaged houses and farms in the high country.
This surprised many people. Australia is well known for tropical cyclones in its north – intense tropical storms coming in off the sea – but far less well known for tornadoes.
In fact, Australia does get tornadoes – an estimated 30–80 each year. In 2013, a total of 69 known tornadoes caused almost 150 injuries. Many of these tornadoes spin out of supercells.
In Australia’s hotter months, many fires burn around the country. Thunderstorms can make fires worse by bringing strong, warm northerly winds, often with rapid variations in speed and direction that can increase the rate of spread of a fire.
Firefighters and first responders dread these conditions. Australia’s most deadly bushfire was Black Saturday in 2009, which killed 173 people. One reason it was so dangerous was its suddenness. Intense northerly winds brought down powerlines and started fires, which were quickly whipped into intense firestorms, including thunderstorms generated in the fire plumes.
Will climate change bring more severe storms?
As the world heats up, more water is evaporating off warm sea surfaces and hanging in the air as water vapour. This means there’s more of this ingredient necessary to fuel severe thunderstorms and more intense rain from thunderstorms.
What we don’t know for certain yet is how prevailing air currents over Australia are changing. This could shift moisture to different regions, or affect other thunderstorm ingredients like vertical wind shear, instability, and lifting mechanisms. If circulation patterns do change, we could see severe storms develop in new areas, or different times of the year.
Andrew Brown receives funding from the ARC Centre of Excellence for 21st Century Weather.
Andrew Dowdy receives funding from University of Melbourne, including through the Centre of Excellence for Climate Extremes and the Melbourne Energy Institute.
Egypt recently deepened its involvement in the war-weary Horn of Africa by arming Somalia and deploying its troops in the embattled country. To Ethiopia’s growing alarm, Egypt is also set to join the multinational force supporting the Somali army against the jihadist threat by al-Shabaab. Egypt’s potentially destabilising presence in the region is seen as a direct consequence of Ethiopia’s port agreement with breakaway Somaliland, which Somalia took as an affront. Endalcachew Bayeh, a political scholar with a focus on the Horn of Africa, sets out the risks and the path to de-escalation.
What do we know about Egypt’s entry into Somalia and the theatre of conflict in the Horn?
Egypt’s arrival in the Horn of Africa can be traced back to Ethiopia’s quest for a dedicated port under its control. Ethiopia is the world’s largest landlocked country by population and has relied exclusively on the port of Djibouti since the outbreak of the Ethiopia-Eritrea war (1998-2000).
Ethiopia has been exploring alternative access points. This led to the announcement on 1 January 2024 that it had struck a port deal with Somaliland. Ethiopia agreed to recognise the breakaway republic in exchange for a naval base on Somaliland’s coast.
The announcement sparked a diplomatic rift with Somalia, which viewed the deal as a violation of its sovereignty and territorial integrity. Somalia still considers self-declared Somaliland part of its territory.
Amid the turmoil, Somalia courted Egypt as a regional patron to counter Ethiopia. This aligned well with Egypt’s increasing interest in finding a military partner along Ethiopia’s border.
Egypt is a longstanding rival of Ethiopia. Recently, it threatened to go to war over Ethiopia’s massive Grand Ethiopian Renaissance Dam, which it sees as a threat to its survival.
Egypt deployed military forces in Somalia following its defence deal with Mogadishu in August 2024. It also plans to deploy 5,000 soldiers as part of the African Union Support and Stabilisation Mission in Somalia. The mission is set to replace the African Union Transition Mission in Somalia, in which Ethiopia is a main player.
Ethiopia’s agreement to recognise Somaliland and the friction with Somalia have brought its old enemy, Egypt, to its doorstep.
How have Egypt-Ethiopia hostilities added to regional tensions?
Soon after Egypt’s deployment in Somalia, Ethiopia formalised its recognition of Somaliland. It also sent an ambassador to the capital, Hargeisa. This made it the first nation to officially acknowledge Somaliland’s independence. The two are also rushing to turn their memorandum of understanding into a binding bilateral treaty.
Somaliland ordered the closure of the Egyptian Cultural Library in Hargeisa.
Eritrea, for a time a key ally of Ethiopia’s Abiy Ahmed in the fight against the Tigray People’s Liberation Front, is now at odds with Addis Ababa. And, in response to the recent tensions in the region, Eritrea is strengthening its ties with Egypt and Somalia. A recent meeting of the three has created a united front against Ethiopia.
In Somalia, Ethiopia plays a stabilising role. Somalia now demands that Ethiopia end its involvement. That could open the way for militant groups and keep Somalia unstable. This is even more likely to happen if Egypt focuses on its competition with Ethiopia rather than on Somalia’s stability.
In addition, Somalis have longstanding territorial claims over parts of Ethiopia, Kenya and Djibouti. Instability can create fertile ground for groups like Al-Shabaab, which aims to include these territories in an Islamic state.
Ethiopia’s recognition of Somaliland and Egypt’s presence in Somalia come at a time of multiple regional crises. These include the strained Ethiopia-Eritrea relations, the Ethiopia-Sudan dispute over Al-Fashaga border region, and instability in Ethiopia.
This volatile environment increases the likelihood of proxy wars.
Key areas to watch are:
Sudan and Egypt: These two countries align on the Grand Ethiopian Renaissance Dam issue. Egypt has enhanced its security cooperation with Sudan through military support and joint exercises. Although Sudan is in turmoil, the Al-Fashaga dispute with Ethiopia remains a potential flashpoint. Egypt may take advantage of this dispute and its support for the Sudanese Armed Forces against the Rapid Support Forces to further its interests.
Instability in Ethiopia: In several regions, the government is engaged in active conflict with non-state forces. This instability creates fertile ground for Egypt to potentially support proxies against the Ethiopian government. Egypt and Somalia have already raised the possibility of using proxy forces.
Egypt’s main motivation for intervening in the region is to control the Nile’s source or hinder Ethiopia’s use of the water. As a result, Ethiopia perceives Egypt’s presence at its doorstep as a direct security threat. This increases tensions between Egypt, Somalia and Ethiopia.
Any further destabilisation of Ethiopia would disrupt the entire region, as it shares porous borders with almost all countries in the Horn.
What are the potential avenues for de-escalation?
A promising pathway for reducing tensions in Somalia and the broader region is for the two regional powers to reconsider their strategies and exercise restraint.
Ethiopia can access the sea through Somaliland without formal recognition. This could ease tensions and would not encourage separatist movements.
For Egypt, a more constructive approach would be to limit its direct involvement in the Horn of Africa. Instead, it should address its concerns about the Ethiopian mega-dam through the United Nations, the African Union and other platforms. Historically, its unilateral actions have often been sources of tensions rather than solutions in the region.
The African Union and the Intergovernmental Authority on Development must ensure that the regional states themselves address regional issues. States must make wise decisions now to calm tensions, as no state will be spared from the spillover effects.
Endalcachew Bayeh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Phil Tomlinson, Professor of Industrial Strategy, Co-Director Centre for Governance, Regulation and Industrial Strategy (CGR&IS), University of Bath
The UK government’s plan to create a new industrial strategy is a welcome attempt to steer Britain’s economy through the challenges of the 21st century. Amid a backdrop of global economic uncertainty, a clear focus on achieving growth is essential.
The plan is at an early stage. The new green paper marks the beginning of a consultation process designed to shape future government policy.
But creating an industrial strategy in the first place – to coordinate a wide range of economic policies – is commendable. For too long, the UK has been lagging behind other countries which have embraced greater government intervention in their economies.
And the idea of having that strategy overseen by an “industrial strategy council”, to offer a degree of independent oversight, is a good one. If set up properly, this council should encapsulate the idea of industrial strategy as a partnership between the state and business – a collaborative effort to discover new opportunities and develop new policies.
It is also pleasing to see the green paper hasn’t shied away from some of the big issues. There is appropriate emphasis on geography, and creating opportunities in “left behind places”. For too long, economic growth in Britain has been disproportionately concentrated in London and the south-east.
Empowering local leaders in other regions to shape industrial policies, tailored to their specific needs, is a step in the right direction.
The emphasis on addressing the UK’s clapped-out infrastructure is also wise. Pledges to invest in broadband, electricity supply, rail and roads should lay the groundwork for a more interconnected economy. There is evidence that improved connectivity could attract new investment and boost regional productivity in areas that have been economically stagnant for decades.
There are also promises to increase public investment in research and development in emerging industries such as AI and clean energy. The vision for a modern, hi-tech economy driven by innovation is much needed in a country which currently ranks 25th in the global robotics league table, the only G7 nation outside the top 20.
But there are also risks to such a technology-centred approach, which could easily be at odds with the goal of tackling regional inequality. Indeed, given new investment tends to flow to existing hi-tech regions, the divide between successful and left-behind places could widen.
The plan’s green focus is also timely. By prioritising clean energy and investment in sectors such as electric vehicles, the strategy aligns with goals for achieving net zero emissions by 2050.
Mission impossible?
However, other issues also need to be included in the government’s plans. There is no consideration of geopolitics in the green paper. Yet any effective UK industrial strategy has to account for the impact of China and the US, and their ongoing tensions.
Similarly – and strangely – Brexit is hardly mentioned. Despite post-Brexit disruption to trade with the EU continuing to act as a drag on investment and growth, the green paper merely skirts around the issue. Nor is there anything about how industries deeply reliant on EU supply chains and markets (such as car manufacturing) can thrive outside the European single market.
Workers in traditional manufacturing, and in sectors such as retail, hospitality and care, will also need to hear more about support and retraining. The government needs to be mindful of not increasing a sense of polarisation between those who benefit from a green hi-tech revolution, and those who don’t.
And there will need to be much more detail about funding. The Labour government is keen to attract investors – the green paper was published on the same day as a high-profile investment summit in London, which featured impressive international attendees enjoying fine food and high-calibre entertainment.
But heavy reliance on private sector investment raises questions about accountability. For, while public-private partnerships can be effective, there is always a risk that private sector interests may not align with the needs of everyone else.
Overall, the green paper is the starting point for a critical national conversation about the UK’s economic future. The road to tangible success will depend on translating ideas into concrete actions, dealing with inevitable trade-offs, and being brave enough to address some deep structural issues. If it does, the green paper could turn into a blueprint for a genuinely resilient and competitive country.
Phil Tomlinson receives funding from the Engineering and Physical Sciences Research Council (EPSRC) for Made Smarter Innovation: Centre for People-Led Digitalisation.
David Bailey receives funding from the Economic and Social Research Council’s UK in a Changing Europe Programme.
Michael A. Lewis currently receives funding from the Economic and Social Research Council (ESRC) and the Arts and Humanities Research Council (AHRC).
This chart shows how death rates have fallen since the 1970s, emphasising the consistently higher death rates of males. The principal finding is that dramatically falling death rates have plateaued since around 2010, especially for men aged 50 to 64. Yet the starkest fact portrayed is the much higher death rates of males than females, in each of the age groups shown.
While this chart shows the differences in death rates clearly, the arithmetic plots used have an inbuilt visual bias with respect to changes over time; they exaggerate the slowing down of the improvements in death rates and the narrowing of the gaps. The second chart uses a ‘logarithmic scale’, which corrects this bias. For this second chart it is the slopes that matter, not the gaps between the groups.
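The point about the two scales can be illustrated with a minimal sketch. The numbers below are invented (a steady 3% annual fall in a hypothetical death rate), not the chart’s actual data; they simply show why an arithmetic scale makes a constant proportional improvement look like a slowdown, while a logarithmic scale plots it as a straight line with equal slopes.

```python
import math

# Hypothetical death rates (per 1,000) falling by a constant 3% a year,
# i.e. a steady *proportional* improvement. These figures are made up.
years = list(range(1975, 2015, 10))
rates = [10.0 * (0.97 ** (y - 1975)) for y in years]

# On an arithmetic scale, the decade-on-decade gaps shrink, which makes
# the improvement look as if it is slowing down...
arith_gaps = [a - b for a, b in zip(rates, rates[1:])]

# ...but on a logarithmic scale the gaps (the slopes) are constant, so a
# steady proportional improvement appears as a straight line.
log_gaps = [math.log(a) - math.log(b) for a, b in zip(rates, rates[1:])]

print(arith_gaps)  # each gap smaller than the last
print(log_gaps)    # all gaps equal
```

This is why, for the second chart, it is the slopes rather than the vertical gaps that carry the information.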
Chart by Keith Rankin.
The ‘plateau effect’ still shows clearly. What it means is that it is no longer credible to say that “we are all living longer” (as many people urging us to save more for retirement say). Essentially, older working-age adults were dying at much the same rates in the late 2010s as in the early 2010s. For the 2020s there is a small Covid-19 effect. It seems unlikely that the declining age-group death rates of the millennial period will resume.
The data used shows some other things that are not easy to chart. First, the large gap between male and female death rates is closing (but remains large). Second, males aged between 15 and 35 had disturbingly higher death rates in the late 1980s ‘Rogernomics period’ compared to the early-1980s ‘Muldoon period’, though females aged 20-24 did have markedly rising death rates in the early 1980s. In recent years, the death rates of younger people have risen significantly, especially among females; though female death rates remain significantly lower than male death rates for all age groups. The biggest improvements in death rates in the millennial period were made by younger people, and by males aged 50 to 74. Those improvements slowed or reversed after 2015.
*******
Keith Rankin (keith at rankin dot nz), trained as an economic historian, is a retired lecturer in Economics and Statistics. He lives in Auckland, New Zealand.
Victor Ambros and Gary Ruvkun were awarded the 2024 Nobel prize in physiology or medicine for their discovery of microRNA, tiny biological molecules that tell the cells in our body what kind of cell to be by turning on and off certain genes.
In this episode of The Conversation Weekly podcast, we speak to Ambros about the discovery that led to his Nobel prize and find out what he’s researching now. And we hear about how a deeper understanding of microRNA is opening up new avenues for potential treatment of cancers and other diseases.
Today, Ambros is a professor of molecular medicine and the Silverman Chair in Natural Sciences at the University of Massachusetts Chan Medical School in the US. But the research that won him a Nobel prize was published more than 30 years ago in 1993, when he had just established his own research lab at Harvard University.
Ambros was trying to understand the way cells get the right instructions from DNA during their development. To do this, he was studying mutations in an experimental organism: a small worm called C. elegans.
We were studying some mutations that affected C. elegans’ development in interesting ways – but we were not looking for the involvement of any sort of unexpected kind of molecular mechanisms.
Ambros’s wife, Rosalind Lee, and another member of the lab team, Rhonda Feinbaum, had spent a couple of years trying to understand the genetic process behind the mutation in a labour-intensive search. What they eventually discovered was a microRNA, a new dimension to gene regulation – the process through which genes are turned on and off in certain cells. As Ambros put it:
You can say they’re really the heroes behind this, and our job – mine and Gary’s – is to stand in as representatives of the whole enterprise of science, which is so dependent upon teams, collaborations, brainstorming among multiple people, communications of ideas and crucial data … All this is part of the process that underlies successful science like this.
MicroRNA’s role in cancer
Thanks to the discoveries of Ambros and Ruvkun back in the 1990s, medical researchers all over the world are looking at how microRNA affects the development of human diseases. One such researcher is Justin Stebbing, a professor of biomedical sciences at Anglia Ruskin University in Cambridge, UK. He explained:
MicroRNAs, like many processes, can go wrong and they’ve been implicated in diseases as diverse as Alzheimer’s and Parkinson’s to cancer and kidney failure.
Stebbing said that in cancer, microRNA has been found to turn off tumour suppressor genes, effectively allowing cancers to spread. But microRNA can also be useful in understanding cancer, and in potential treatments:
We can work out the right treatments for people based on what we call a microRNA signature. We can understand prognosis, which means how severe people’s cancers are, but we can also try and harness them for treatments to make people better.
To find out more about the discovery of microRNA and what research is being done on it today, listen to the full episode of The Conversation Weekly podcast, which includes an introduction from Vivian Lam, associate health and biomedicine editor at The Conversation in the US.
This episode of The Conversation Weekly was produced by Katie Flood, Gemma Ware and Mend Mariwany. Sound design was by Michelle Macklem, and our theme music is by Neeta Sarl.
Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.
Victor Ambros’s laboratory’s research has been funded (since 1985) and is currently funded by the US National Institutes of Health. Justin Stebbing does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The winners of a competition which challenges academics to explain their research in just three minutes have been announced.
A total of 850 researchers from across the UK entered the tenth annual Vitae Three Minute Thesis (3MT®) competition, which was sponsored by The Conversation through its Universal Impact training and mentoring subsidiary.
These were narrowed down first to 65 competitors and then six finalists, before a judging panel and a public vote determined the winning three.
The overall judges’ award went to Jo Baker from Newcastle University for her presentation on children’s speech difficulties, which was perfectly illustrated through the use of an original cartoon.
Speech and language therapist Jo Baker impressed the judges.
Ulster University’s James McMullan captured the public’s imagination with his presentation on whether eating fish could be the secret to healthy ageing, winning the people’s choice award.
Universal Impact also had the chance to pick an editor’s champion. We chose Muhammad Muddasar at University of Limerick for his research looking at whether the heat we waste on a daily basis could be transformed into a new energy source.
The other finalists were Ferdinando Sereno at UCL, Natalie Weir at University of Derby and Charlie Gerlis from the University of the West of England.
Originally developed by the University of Queensland, the competition challenges doctoral researchers to communicate their research to a non-expert audience – in three minutes or less.
As a judge, I was blown away by the overall standard of the entries – this really was a masterclass in research communications.
All of the academics involved spoke passionately about their research, explaining how it could contribute to making the world a better place.
Each of these researchers deserved their place in the final and it took lengthy discussions before the panel was able to pick a winner.
This year’s final was broadcast live online with the winners announced on Friday, October 4.
The presentations were recorded and uploaded online ahead of a public vote.
‘It opens minds and opens doors’
At Universal Impact, we have been delighted to support this mission by joining the judging panel and mentoring the champions (who also receive a coveted trophy and small grant) to help them build on their success and take their research to an even wider audience.
Vitae, which organised the competition, is a non-profit organisation that supports the professional development of researchers.
Rachel Cox, head of membership and engagement at Vitae, said: “The Vitae Three Minute Thesis is a fantastic competition which provides a unique opportunity for doctoral researchers to think differently about how they communicate work that is meaningful to them to a wider audience.
“It opens minds and opens doors for the individuals involved, as it can be a pathway to a wide variety of future careers, as previous participants have shown.
“At Vitae, we are proud of the impact this competition has had over the past ten years, and we are excited to see what it can do over the next decade.
“We are also delighted that Universal Impact and The Conversation are supporting this year’s event.”
You can find out more about the competition and the work of Vitae here.
The Conversation Weekly podcast caught up with Victor Ambros from his lab at the UMass Chan Medical School to learn more about the Nobel-winning research and what comes next. Below are edited excerpts from the podcast.
How did you start thinking about this fundamental question at the heart of the discovery of microRNA, about how cells get the instructions to do what they do?
The paper that described this discovery was published in 1993. In the late 1980s, we were working in the field of developmental biology, studying C. elegans as a model organism for animal development. We were using genetic approaches, where mutations that caused developmental abnormalities were then followed up to try to understand what the gene was that was mutated and what the gene product was.
It was well understood that proteins could mediate changes in gene expression as cells differentiate and divide.
We were not looking for the involvement of any sort of unexpected kind of molecular mechanisms. The fact that the microRNA was the product of this gene that was regulating this other gene in this context was a complete surprise.
There was no reason to postulate that there should be such regulators of gene expression. This is one of those examples where the expectations are that you’re going to find out about more complexity and nuance about mechanisms that we already know about.
But sometimes surprises emerge, and in fact, surprises emerge perhaps surprisingly often.
These C. elegans worms, nematodes, is there something about them that allows you to work with their genetic material more easily? Why are they so key to this type of science?
C. elegans was developed as an experimental organism that people could use easily to, first, identify mutants and then study the development.
It only has about a thousand cells, and all those cells can be seen easily through a microscope in the living animal. But still it has all the various parts that are important to all animals: intestine, skin, muscles, a brain, sensory systems and complex behavior. So it’s quite an amazing system to study developmental processes and mechanisms really on the level of individual cells and what those cells do as they divide and differentiate during development.
You were looking at this lin-4 gene. What was your surprising discovery that led to this Nobel Prize?
In our lab, Rosalind Lee and Rhonda Feinbaum were working on this project for several years. This is a very labor intensive process, trying to track down a gene.
And all we had to go by was a mutation to guide us as we gradually homed in on the DNA sequence that contained the gene. The surprises started to emerge when we found that the pieces of DNA that were sufficient to confer the function of this gene and rescue a mutant were really small, only 800 base pairs.
And so that suggested, well, the gene is small, so the product of this gene is going to be pretty small. And then Rosalind worked to pare down the sequence more and to mutate potential protein coding sequences in that little piece of DNA. By a process of elimination, she finally showed that there was no protein that could be expressed from this gene.
And at the same time, we identified this very, very small transcript of only 22 nucleotides. So I would say there was probably a period of a week or two there where these realizations came to the fore and we knew we had something new.
You mentioned Rosalind, she’s your wife.
Yeah, we’ve been together since 1976. And we started to work together in the mid-’80s. And so we’re still working together today.
And she was the first author on that paper.
That’s right. It’s hard to express how wonderful it is to receive such validation of this work that we did together. That is just priceless.
Victor Ambros and Rosalind Lee toast the Nobel news on the day of the announcement. UMass Chan Medical School
Like it’s a Nobel Prize for her too?
Yes, every Nobel Prize has this obvious limitation of the number of people that they give it to. But, of course, behind that are the folks who worked in the lab – the teams that are actually behind the discoveries are surprisingly large sometimes. In this case, two people in my lab and several people in Gary Ruvkun’s lab.
In a way they’re really the heroes behind this. Our job – mine and Gary’s – is to stand in as representatives of this whole enterprise of science, which is so, so dependent upon teams, collaborations, brainstorming amongst multiple people, communications of ideas and crucial data, you know, all this is part of the process that underlies successful science.
That first week of the discoveries, did you anticipate at that point that this could be such a huge step for our understanding of genes?
Until other examples are found of something new, it’s very hard to know how peculiar that particular phenomenon might be.
We’re always mindful that evolution is amazingly innovative. And so it could have been that this particular small RNA base-pairing to this mRNA of lin-14 gene and turning off production of the protein from lin-14 messenger RNA, that could be a peculiar evolutionary innovation.
The second microRNA was identified in Gary Ruvkun’s lab in 1999, so it was a good six years before the second one was found, also in C. elegans. Really, the watershed discovery was when Ruvkun showed that let-7, the other microRNA, was actually conserved perfectly in sequence amongst all the bilaterian animals. So that meant that let-7 microRNA had been around for, what, 500 million years?
And so it was immediately obvious to the field that there had to be other microRNAs – this was not just a C. elegans thing. There must be others, and that quickly emerged to be the case.
You and Gary Ruvkun had been postdoctoral fellows at the same time at MIT, but by the time you made your respective discoveries, you’d both set up your own labs. Would you call them rival labs, in the same town?
No, I would certainly not call it rival labs. We were working together as postdocs basically on this problem of developmental timing in Bob Horvitz’s lab.
We just basically informally divided up the work. The understanding was, OK, Ambros lab will focus on lin-4 gene, and Ruvkun lab will focus on lin-14, and we anticipated that there would be a point that we would get together and share information about what we’ve learned and see if we could come to a synthesis.
That was the informal plan. It was not really a collaboration. It was certainly not a rivalry. The expectation was that we would divide up the work and then communicate when the time came. There was an expectation in this community of C. elegans researchers that you should share data freely.
Your lab still works on microRNA. What are you investigating? What questions do you still have?
One I find very interesting is a project where we collaborated with a clinician, a geneticist who studies intellectual disability. She had discovered that her patients, children with intellectual disabilities, in certain families carried a mutation that neither of their parents had – a spontaneous mutation – in the protein that is associated with microRNAs in humans called the Argonaute protein.
Each of our genomes contains four genes for Argonautes that are the partners of microRNAs. In fact, this is the effector protein that is guided by the microRNA to its target messenger RNAs. This Argonaute is what carries out the regulatory processes that happen once it finds its target.
These so-called Argonaute syndromes were discovered, where there are mutations in Argonautes, point mutations where only one amino acid changes to another amino acid. They have this very profound and extensive effect on the development of the individual.
And so, working with these geneticists, our lab and other labs took those mutations, which were essentially gifted to us by the patients. And then we put those mutations into our system, in our case into C. elegans‘ Argonaute.
I’m excited by the very organized, active partnership between the Argonaute Alliance of families with Argonaute syndromes and the basic scientists studying Argonaute.
How does this collaboration potentially help those patients?
What we’ve learned is that the mutant protein is sort of a rogue Argonaute. It’s basically screwing up the normal process that these four Argonautes usually do in the body. And so this rogue Argonaute, in principle, could be removed from the system by trying to employ some of the technology that folks are developing for gene knockout or RNA interference of genes.
This is promising, and I’m hopeful that the payoff for the patients will come in the years ahead.
Victor Ambros receives funding from the U.S. National Institutes of Health.
You’d be forgiven for thinking that young people are behind most knife crime in the UK. Media coverage often focuses on youth involvement, and the government’s plan to halve knife crime focuses specifically on young people and vulnerable teenagers.
Yet evidence shows that most knife-involved crime is committed in the home, between adults, in the form of intimate partner violence. Only around 18% of knife offences are carried out by 10- to 17-year-olds. These usually involve other young people.
Although young people’s share of knife crime is low, their involvement is a significant concern and has risen starkly in the last decade.
Carrying a knife out of the home, into the streets, or into school is a rare choice that most children never make. Estimates show that between one and four in 100 young people carry knives.
For those few who do, it is important to understand the complex factors behind why. This is what we, and many other academics, have been studying in our research.
Both researchers and young people themselves cite protection as a factor in knife carrying. Many young people are fearful of being victims of knife crime, and knife carrying may offer a sense of security and defence from potential threats.
This fear is not necessarily correlated to reality. Young people tend to overestimate the prevalence of weapon carrying among their peers. What’s more, those carrying knives for defence often end up having their own knife used against them.
Seeing images of knives
One reason that young people may have a fear of knife crime is because of how the threat is presented to them through images.
Media reports and anti-knife campaign materials often feature images of shocking weapons, such as zombie knives. Depictions of piles of seized weapons and vicious blades all paint a picture of a risky landscape.
You probably noticed that the photos illustrating this article do not include a picture of a knife. This is a deliberate choice. Our research has found that such knife imagery can evoke fear or excitement for some young people.
Their heightened emotional responses suggest that these young people are the most likely to be vulnerable to future knife carrying. Those who feel most unsafe in their communities are the most likely to respond negatively to graphic imagery.
Interestingly, the young people who participated in our research self-reported knife imagery as having little impact on them. But our study investigated their unconscious emotional response through an implicit association test. This approach is key in a research area vulnerable to self-presentation bias, where young people might attempt to hide their true feelings.
The test we used assessed response speeds to determine associations between images of knives and words relating to fear and excitement. Overall, response times were faster (showed more association) for fear-related words.
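The logic of scoring such a test can be sketched in a few lines. This is an illustrative toy with invented response times, not the scoring procedure or data from the study: it simply shows how faster responses when knife images are paired with fear words would be read as a stronger knife-fear association.

```python
# Invented response times (milliseconds) for two pairing conditions in a
# simplified implicit-association-style task. These numbers are made up
# for illustration only.
fear_trials_ms = [520, 540, 510, 530]    # knife image paired with fear word
excite_trials_ms = [610, 640, 600, 620]  # knife image paired with excitement word

def mean(xs):
    return sum(xs) / len(xs)

# Faster (smaller) times in the fear condition mean the fear pairing was
# easier, i.e. more strongly associated. A positive difference therefore
# indicates a stronger knife-fear than knife-excitement association.
association_ms = mean(excite_trials_ms) - mean(fear_trials_ms)
print(association_ms)  # prints 92.5
```

Real implicit association tests use more elaborate scoring (for example, standardised difference scores across many trials), but the direction of the comparison is the same.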
Other evidence suggests that anti-knife crime imagery and messaging can create exaggerated belief about the prevalence of knife carrying. This may increase, rather than reduce, the fear of victimisation, and further encourage people to carry knives.
Floods of knife images in a young person’s social and educational environment may normalise knife carrying. Nearly two-thirds of young people report experiencing secondary traumatic stress when viewing knife crime news on social media.
When knife imagery is used in intervention materials presented by someone in a position of authority (a teacher or police officer, for example), it can validate the fears even more.
In other words, the more we talk about knife crime, the scarier it can seem, and the more young people feel the need to protect themselves by carrying a weapon.
Labour’s plan to cut knife crime – including a ban on zombie knives that has just come into effect – should go a long way to reducing the availability of “status” weapons. It may also mean that images of these knives are less prevalent in the media, which, given our research findings, would likely have a positive effect.
But, as noted earlier, most young people are not at risk, and have had no exposure to knife crime. Knife carrying is not normal behaviour for most young people. Anti-knife messaging would serve young people better by avoiding the use of knife imagery, and instead focusing on how to keep safe by avoiding risky behaviour, and how to get help if a dangerous situation arises.
Dr Charlotte Coleman receives funding from N8 Policing Research Partnership.
Dr Charlotte Coleman is a member of the Youth Justice Board Academic Liaison Network
Dr Charlotte Coleman is an executive member of the Society for Evidence Based Policing.
Jess Scott-Lewis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Elizabeth Chappell, Researcher Faculty of Arts and Social Sciences, The Open University
The 2024 Nobel peace prize has been awarded to Nihon Hidankyo, a Japanese grassroots organisation created by survivors of the US atomic bombings of Hiroshima and Nagasaki in 1945. Nihon Hidankyo has provided thousands of witness accounts and public appeals by survivors, who are known as hibakusha, and has sent annual delegations to the UN.
Their work was commended by the Nobel committee, who decided to award the prize to Nihon Hidankyo “for its efforts to achieve a world free of nuclear weapons and for demonstrating that nuclear weapons must never be used again”.
Nihon Hidankyo’s co-chair, Toshiyuki Mimaki, said: “I never expected we would win the Nobel peace prize. Now we want to go further and appeal to the world to achieve lasting peace. We are old, but we never give up.”
There are an estimated 106,000 hibakusha still living in Japan, with many more alive around the world. There are also survivors – and their descendants – of the more than 2,000 nuclear tests that have taken place worldwide since 1945. Some of these people use the term hibakusha to describe themselves.
This was not the first time the prize had been awarded to a nominee for their efforts towards nuclear disarmament. And it probably won’t be the last.
In 1985, the prize was awarded to an organisation called the International Physicians for the Prevention of Nuclear War. And then, in 1995, the prize was won by Joseph Rotblat, the only scientist to have left the Manhattan Project – the US government’s research project to produce the first atomic bomb – on moral grounds.
Barack Obama was next in 2009, for his “vision of and work for a world without nuclear weapons”. His administration made efforts to renew the strategic arms reduction treaty with Russia, and Obama became the first US president to visit one of the atomic bombed cities when he made a special trip to Hiroshima in 2016.
The following year, the prize was won by the International Campaign to Abolish Nuclear Weapons (ICAN) for its “groundbreaking efforts to achieve a treaty-based prohibition of nuclear weapons”. This was a reference to the UN treaty on the prohibition of nuclear weapons, which since 2017 has prohibited states party to it from participating in any nuclear weapon activities.
Nihon Hidankyo may not be a household name, but two of its former co-chairs are quite well known internationally. Hiroshima-born Sunao Tsuboi was photographed in one of the few known images to be taken on the day of the bombing.
Tsuboi and fellow survivor Shigeaki Mori also spoke with Obama on his visit to the city. It is said that Obama’s visit was, in part, triggered by Mori’s research. He had spent 40 years searching for the identities of 12 US prisoners of war who had been killed in the bombing of Hiroshima.
Another of Nihon Hidankyo’s former co-chairs, Nagasaki-born Sumiteru Taniguchi, spent three-and-a-half years in hospital after the bombing of his city and never fully recovered from his wounds.
Taniguchi’s story became famous after the publication of his 1984 memoir, The Postman of Nagasaki. The book’s author, Peter Townsend, was a Royal Air Force pilot in the second world war and is known in the UK for his affair with Princess Margaret, sister of the late Queen. The memoir was made into a film in 2022.
The logic of nuclear deterrence
We are living at a time when the threat posed by nuclear weapons is growing. This was reflected by the committee, which, when awarding Nihon Hidankyo the prize, noted that the “taboo” against their use was “under pressure”.
Nuclear deterrence relies on the logic of the threat to inflict “unacceptable damage” on the enemy. But nuclear deterrence is not foolproof. What is unacceptable to one adversary may be acceptable to another, depending on the circumstances.
It’s worth remembering that the 1945 atomic bombings were not, as is commonly believed, the only reason the Japanese surrendered the following week and brought the war to an end. Various factions in the war council had been attempting to find ways to surrender for over a year, and the bombs offered Japan’s Emperor Hirohito a way to save face.
The bombs didn’t force the Japanese to surrender, they gave Hirohito the opportunity to surrender … News of the Nagasaki bomb came as they were having a meeting of the imperial war council about what to do about the Soviets coming into the war. It should be known that there was never any special imperial war council meeting after the Hiroshima bomb. That wasn’t considered weighty enough to make everyone drop what they were doing and head to the Imperial Palace.
The effects of radiation on the human body were little known in 1945, due to censorship both by the Japanese military and the US occupation that followed. As I was told in an interview with a hibakusha called Keiko Ogura, who was eight when the first bomb was dropped: “No one understood why people were still dying days, weeks, months and years after the attacks – they thought the atomic bomb was poison gas.”
We now know much more about the devastating consequences of radiation for humans, animals and the environment across generations. However, research is still not widely publicised, with ICAN taking the lead as an international forum where important new findings can be shared.
Let’s hope this year’s award will help inform the world once and for all of the nature of these weapons. As former US president, John F. Kennedy, said in a speech to the UN in 1961: “A nuclear disaster, spread by wind and water and fear, could well engulf the great and the small, the rich and the poor, the committed and the uncommitted alike.”
Next year will mark the 80th anniversary of the atomic bombings. This prize should help banish what Kennedy described as the “sword of Damocles” that still threatens life on earth.
Elizabeth Chappell does not work for or receive funding from any external organisation.
As people who research ageing like to quip: the best thing you can do to increase how long you live is to pick good parents. After all, it has long been recognised that longer-lived people tend to have longer-lived parents and grandparents, suggesting that genetics influence longevity.
Complicating the picture, however, is that we know that the sum of your lifestyle, specifically diet and exercise, also significantly influences your health into older age and how long you live. What contribution lifestyle versus genetics makes is an open question that a recent study in Nature has shed new light on.
Scientists have long known that reducing calorie intake can make animals live longer. In the 1930s, it was noted that rats fed reduced calories lived longer than rats that could eat as much as they wanted. Similarly, people who are more physically active tend to live longer. But linking single genes to longevity was, until recently, controversial.
While studying the lifespan of the tiny worm C. elegans at the University of California, San Francisco, Cynthia Kenyon found that small changes to the gene that controls the way cells detect and respond to nutrients around them doubled the worms’ lifespan. This raised new questions: if we know that genetics and lifestyle both affect how long you live, which one is more important? And how do they interact?
To try to tease out the effects of genetics versus lifestyle, the new study in Nature examined different models of caloric restriction in 960 mice. The researchers specifically looked at classical experimental models of caloric restriction (either 20% or 40% fewer calories than control mice), or intermittent fasting of one or two days without food (as intermittent fasting is popular in people looking to see the positive benefits of caloric restriction).
Because we now know that small genetic variations affect ageing, the researchers specifically used genetically diverse mice. This is important for two reasons. First, as laboratory studies on mice are normally performed on genetically very (very!) similar animals, using diverse mice allowed the researchers to tease out the effects that both diet and genetic variation would have on longevity.
Second, humans are highly diverse, meaning that studies on genetically near-identical mice don’t often translate into humanity’s high genetic diversity.
The headline finding was that genetics appeared to play a larger role in lifespan than any of the dietary restriction interventions. Long-lived types of mice were still longer lived despite dietary changes.
Diet counts, but genes count more
And while shorter-lived mice did show improvements as a result of dietary restrictions, they didn’t catch up to their longer-lived peers. This suggests that there’s truth to the “pick good parents” joke.
Caloric restriction models still increased lifespans across all the types of mice, with the 40% restriction group having improved average and maximum lifespans compared with the 20% group.
And the 20% group showed improvements in both group average and maximum length of lives compared with the control group. It’s just the effects of genetics were larger than the effect of the dietary interventions.
While all the caloric restriction models increased lifespan in the mice on average, in the most extreme model tested (the 40% restriction group), changes that could be seen as physical harms were observed. These included reduced immune function and losses in muscle mass, which outside a predator- and germ-free laboratory environment could affect health and longevity.
There are some important caveats in studies like this. First, it’s not known if these results apply to humans.
As with most caloric restriction research in mice, the restricted feeding groups were fed 20% or 40% less than a control group that ate as much as it wanted. In humans, that would be like assuming that eating every meal, every day, at a bottomless buffet is “normal”, and that anyone who does not eat from limitless trays of food is on “restricted feeding”. That’s not an exact parallel to how humans live and eat.
Second, although exercise wasn’t controlled in this study, most groups did similar amounts of running on their in-cage running wheels – except the 40% caloric restriction group, which ran significantly more.
The researchers suggested that this extra exercise in the 40% group was the mice constantly hunting for more food. But as this group did so much more exercise than the others, it could also mean that positive effects of increased exercise were also seen in this group alongside their caloric restriction.
So, while we can’t pick our parents or change the genes we inherit from them, it is interesting to know that specific genetic variations play a significant role in the maximum age we can aspire to.
The genetic cards we’re dealt set limits on how long we can expect to live. Just as importantly, however, this study suggests that lifestyle interventions such as diet and exercise that aim to improve lifespan should be effective regardless of the genes we have.
Bradley Elliott receives funding from the Physiological Society, the British Society for Research on Ageing, the Altitude Centre, and private philanthropic individuals, and has consulted for industry and government on longevity research. He is on the Board of Trustees of the British Society for Research on Ageing.
Source: The Conversation – UK – By Richard Massey, Professor of extragalactic astrophysics (dark matter and cosmology), Durham University
Illustration of the Extremely Large Telescope, currently under construction in Chile’s Atacama desert. ESO, CC BY
In recent decades, we’ve learnt huge amounts about the universe and its history. The rapidly developing technology of telescopes – both on Earth and in space – has been a key part of this process, and those that are due to start operating over the next two decades should push the boundaries of our understanding of cosmology much further.
All observatories have a list of science objectives before they switch on, but it is their unexpected discoveries that can have the biggest impact. Many surprise advances in cosmology were driven by new technology, and the next telescopes have powerful capabilities.
Still, there are gaps, such as a lack of upcoming space telescopes for ultraviolet and visible light astronomy. Politics and national interests have slowed scientific progress. Financial belts are tightening at even the most famous observatories.
This article is part of our series Cosmology in crisis? which uncovers the greatest problems facing cosmologists today – and discusses the implications of solving them.
The biggest new telescopes are being built in the mountains of Chile. The Extremely Large Telescope (ELT) will house a mirror the size of four tennis courts, under a huge dome in the Atacama desert.
Reflecting telescopes like ELT work by using a primary mirror to collect light from the night sky, then reflecting it off other mirrors to a camera. Larger mirrors collect more light and see fainter objects.
The Extremely Large Telescope under construction atop the Cerro Amazones peak in northern Chile.
Another ground-based telescope under construction in Chile is at the Vera C. Rubin Observatory. Rubin’s camera is the largest ever built: the size of a small car and weighing about three tonnes. At 3,200 megapixels, it will photograph the whole sky every three days to spot moving objects. Over the course of 10 years, these photographs will be combined to form a massive time-lapse video of the universe.
Astronomy used to be a physically demanding job, requiring travel to remote telescopes in dark sites – but many astronomers began working from home long before COVID. In the late 20th century, major ground observatories started to put in place technology to allow astronomers to control telescopes for observations at night, even when they were not there in person. Remote observing is now commonplace, carried out via the internet.
Expect the unexpected
The view of any telescope on the ground is limited, though, even if it’s on top of a mountain. Launching telescopes into space can get around these limitations.
The Hubble Space Telescope’s operational history began when the space shuttle lifted it above the atmosphere on April 25 1990. Hubble got the full 1960s sci-fi treatment: a rocket to launch it, gyroscopes to point it, and electronic cameras instead of photographic film. But one plan fell through: for Hubble to host a commuting astronaut-astronomer, working decidedly away from home.
The James Webb Space Telescope (Webb), launched on December 25 2021, now spends a third of its time looking at planets around other stars that weren’t even known about when it was designed.
The stated goal of an expensive telescope is usually just a sales pitch to space agencies, governments and (shhh…) taxpayers. The Webb telescope should achieve its original science goals, but astronomers have always known that seeing further, finer or in more colours can achieve so much more. The unexpected discoveries by telescopes are often more significant than the science objectives stated at the outset.
Taking the long view
For scientists, it’s a relief that telescopes go beyond their brief, because Hubble and Webb both took more than 25 years from napkin to launch. In that time, new scientific questions arise.
Building a large space telescope typically takes about two decades. The Chandra and XMM-Newton space telescopes took 23 years and 15 years to build, respectively. They were designed to observe X-rays coming from hot gas around black holes and galaxy clusters, and were launched very close together in 1999.
Similar timescales apply to the European Space Agency’s Hipparcos and Gaia space telescopes, which have mapped all the stars in the Milky Way. The Cobe and Planck missions to study the microwave-light afterglow of the Big Bang also took two decades. Precise dates depend on how you count, and a few exceptions have been “faster, better, cheaper”, but national space agencies are generally risk averse and slow when developing these projects.
The latest space telescopes are therefore millennials. They were designed at a time when astronomers had measured the universe’s newborn expansion following the Big Bang, and also its old-age, accelerating expansion. Their main goal now is to fill the gap – because, surprisingly, interpolations from early times to late times don’t meet in the middle.
The measured rates for the expansion of the universe are inconsistent, as are results for the clumpiness of matter in the cosmos. Both measurements create challenges for our theories of how the universe evolved.
Observing the middle age of the universe requires telescopes operating at long wavelengths, because light from distant galaxies is stretched by the time it reaches us. So, Webb has infrared zoom cameras, while the European Space Agency’s Euclid space telescope, launched in 2023, and Nasa’s Nancy Grace Roman telescope, which is set to launch in 2026, both have infrared wide-angle views.
Three buses come along at once
Most stars shine in ultraviolet and infrared colours that are blocked by the Earth’s atmosphere, as well as in the visible colours our eyes evolved to see.
Extra colours are useful. For example, we can weigh stars on the other side of our galaxy because massive stars are bright in infrared, while smaller ones are faint – and they stay that way throughout their lifetimes. Meanwhile, we know where stars are being born because only young stars emit ultraviolet light.
In addition, independent measurements of the same thing are vital for rigorous science. Infrared telescopes, for example, can work together and have already made surprising discoveries. But it’s not great for diversity that the Webb, Euclid and Roman space telescopes all see infrared colours.
Earthly politics gets in the way, too. Data from China’s Hubble-class space telescope, Xuntian, is unlikely to be shared internationally. And in protest at Russia’s invasion of Ukraine, in February 2022 Germany switched off its eRosita X-ray instrument that had been operating perfectly, in collaboration with Russia, a million miles from Earth.
Cheap commercial launches may save the day. Euclid was to have lifted off on a Russian Soyuz rocket from a European Space Agency spaceport in French Guiana. When Russia ended operations there in tit-for-tat reprisals, Euclid’s launch was successfully switched at the last minute to a SpaceX Falcon 9 rocket.
If large telescopes can also be folded inside shoebox-size “cubesat” satellites, the lower cost would make it viable for them to fail. Tolerating risk creates a virtuous circle that makes missions even cheaper.
But perhaps the most unusual telescope technology, which may bring the most unexpected discoveries, is gravitational wave detectors. Gravitational waves are not part of the electromagnetic spectrum, so we can’t see them. They are distortions, or “ripples”, in spacetime caused by some of the most violent and energetic processes in the universe. These might include a collision between two neutron stars (dense objects formed when massive stars run out of fuel), or a neutron star merging with a black hole.
Asked what the next generation of observatories will discover, I have no idea. And that’s a good thing. The best science experiments shouldn’t just tell us about the things we expect to find, but also about the unknown unknowns.
Richard Massey receives funding from the UK Space Agency to support Euclid, and leads UK involvement in the SuperBIT balloon-borne telescope.
Source: The Conversation – UK – By Michelle Bentley, Professor of International Relations, Royal Holloway University of London
The Apprentice – a new film dramatising Donald Trump’s business career during the 1970s and 80s – is the latest in a presidential election full of controversy.
The movie charts Trump’s (Sebastian Stan) professional rise from an awkward nobody to hotshot real-estate tycoon. Trump’s Pygmalion-like transformation is credited to his friendship with Roy Cohn (Jeremy Strong). Cohn was an infamous prosecutor who worked with Senator Joseph McCarthy during the Communist and Lavender (homosexual) scares, and as a political fixer for Richard Nixon.
The key storyline is that Trump becomes Cohn’s apprentice, learning underhanded ways of business and Machiavellian deal-making. Other figures said to have influenced Trump’s career, such as political adviser Roger Stone, get only cameos at best.
Trump does not look good. He is portrayed as vain, using amphetamines as diet pills and getting plastic surgery including liposuction and a scalp reduction. Trump rejects his alcoholic brother and later Cohn, who dies from AIDS in social disgrace.
Trump is also shown to rape his then-wife, Ivana (Maria Bakalova) – a scene which made headlines after the movie’s Cannes Film Festival premiere earlier this year. The rape claim was made during the couple’s divorce proceedings, although Ivana said afterwards that she did not consider the incident “rape” in a criminal sense.
Director Ali Abbasi says this depiction isn’t a take-down of the former president but a more nuanced exploration of Trump’s character. Indeed, there is sympathy for Trump – for example, by detailing the emotional pressure from his father.
The film explores how this experience fuelled Trump’s obsession with winning, which is cultivated by Cohn and his three rules of success: “attack, attack, attack”, “deny everything” and “never admit defeat”. The film seeks to get inside Trump’s mindset, not only as a businessperson, but unpicking what drove him in the White House, as well as the election he’s now fighting.
Some have criticised this approach for being too soft on Trump. A review in The Guardian called the film “obtuse and irrelevant”. A further concern is that presenting Trump as a “winner” could actually be seen to legitimise amoral business practices as successful, especially given that the six bankruptcies Trump’s businesses later filed are not clearly mentioned.
The Apprentice is also a deeper commentary on America. Another character comments that Cohn’s three rules also describe US foreign policy. The film raises big questions about the US, not least where Cohn repeatedly highlights what he identifies as the country’s virtues, and justifies his (sometimes illegal) actions as upholding these. The audience is left to consider what shapes America and its foreign policy – and what may be toxic about this.
Will the film influence the upcoming election?
The Apprentice’s screenwriter, Gabriel Sherman, insists the movie is not designed “to influence people’s minds”. Yet the film’s release so close to the polls means it is inevitably political.
The Apprentice is unlikely to radically shift the electoral needle. Trump’s negative portrayal may make some voters on the fence question his suitability for high office. But beyond this, the film will reinforce what people already thought.
Pro-Trumpers won’t like the movie, but this upset will likely just give oxygen to their support. Those against Trump will also be able to feel their opinion has been affirmed, even by those who would have wanted the film to take a harder line. Although it’s perhaps uncertain whether anyone who dislikes Trump will want to spend two hours watching even more of him than they already have in this election.
While the film likely won’t influence the final outcome, it is still a major marker in this election thanks to the huge controversy around it. Concern over its divisive portrait of Trump meant the movie took five years to reach production. Clint Eastwood turned down the option to direct due to the perceived business risk involved. Distribution also took time to secure – a situation Abbasi describes as a “boycott or censorship”.
Distribution problems were also exacerbated by legal threats. After Cannes (where the film received an eight-minute ovation), Trump’s legal team issued a cease-and-desist letter. Communications Director for the Trump election campaign, Steven Cheung, said the film was “garbage” and “pure fiction”, constituting election interference.
Strong resistance also came from billionaire and close Trump associate, Dan Snyder, who was involved in the film’s financing, thinking it would paint a positive picture of the presidential hopeful. Snyder later sought to block the film’s release after seeing a preview.
Controversy has only raised the movie’s profile. And while people will watch it for very different political reasons, some will buy a ticket purely because this film is now a standout event in one of the most contentious US elections in history.
Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.
Michelle Bentley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Kieran Maguire, Senior Teacher in Accountancy and member of Football Industries Group, University of Liverpool
When the Premier League broke away from the rest of English football in 1992, its 22 clubs generated £205 million in its debut season, and the average player earned £2,050 a week. Thirty years later, despite having two fewer clubs, the league’s revenue had increased by 2,850% to £6.1 billion and the average player earned £93,000 a week.
At the heart of this extraordinary growth is an American revolution. In the Premier League’s inaugural season, football was still in recovery from the horrors of the stadium disasters at Hillsborough and Heysel. Owners tended to be from the local area and with a business background. The only foreign owner was Sam Hammam at Wimbledon, a Lebanese millionaire who bought the club on a whim having reportedly been much more interested in tennis. The season ended with Manchester United (under Alex Ferguson) winning the English game’s top league for the first time in 26 years.
Now, if the bid for Everton by the Friedkin Group (TFG) is ratified, 11 of the 20 Premier League clubs will be controlled or part-owned by American investors. The US – long seen as football’s final frontier when it comes to the men’s game – suddenly can’t get enough of English “soccer”.
Four of the Premier League’s “big six” are American-owned – Manchester United, Liverpool, Arsenal and Chelsea – while a fifth, Manchester City, has a significant US minority shareholding. Aston Villa, Fulham, Bournemouth, Crystal Palace, West Ham and Ipswich Town also have varying degrees of American ownership.
And it’s not even just the glamour clubs at the top of the tree. American investment has also been significant lower down the football pyramid, led by the high-profile acquisition of then non-league Wrexham by Hollywood actors Ryan Reynolds and Rob McElhenney, and Birmingham City’s purchase by US investors including seven-time Super Bowl winner Tom Brady. American investment in football has reached places as geographically diverse as Carlisle and Crawley in England, and Aberdeen and Edinburgh in Scotland.
Manchester United was the first Premier League club to come under American ownership – after a row about a horse.
In 2005, United was owned by a variety of investors including Irish businessmen and racehorse owners John Magnier and J.P. McManus. Their erstwhile friend Ferguson, the United manager, thought he co-owned the champion racehorse Rock of Gibraltar with them – a stallion worth millions in stud rights. They disagreed – and their bitter dispute was such that Magnier and McManus decided to sell their shares in the football club.
The Miami-based Glazer family – already involved in sport as owners of NFL franchise the Tampa Bay Buccaneers – had already been buying up small tranches of shares in United, but the sudden availability of the Irish shares allowed Malcolm Glazer to acquire a controlling stake for £790 million (around £1.5 billion at today’s prices).
The fact Glazer did not actually have sufficient funds to pay for these shares was a solvable problem. In the some-might-say commercially naive world of top-flight English football before the Glazers arrived, Manchester United was a club without debt, paying its way without leveraging its position as one of the world’s most famous football clubs. Glazer saw the opportunity this presented and arranged a leveraged buy-out (LBO), whereby the football club borrowed more than £600 million secured on its own assets to, in effect, “buy itself” in 2005.
Despite the need to meet the high interest costs to fund the LBO, United continued winning trophies under Ferguson – including three Premier League titles in a row in 2007, 2008 and 2009, as well as a Champions League victory in 2008. Amid this success, the club felt that ticket prices were too low and set about increasing them, with matchday revenue increasing from £66 million in 2004/05 to over £101 million by 2007/08.
Commercial income was another area the Glazers were keen to increase. United set up offices in London and adopted a global approach to finding new official branding deals ranging from snacks to tractor and tyre suppliers – doubling revenues from this income source too.
But in this new, more aggressive world of “sweating the asset”, the debts lingered – and most United fans remained deeply suspicious of their American owners. (Following their father’s death in 2014, the club was co-owned by his six children, with brothers Avram and Joel Glazer becoming co-chairmen.)
Today, despite its partial listing on the New York Stock Exchange and the February 2024 sale of 27.7% of the club to British billionaire Sir Jim Ratcliffe for a reputed £1.25 billion, United still has borrowings of more than £546 million, having paid cumulative interest costs of £969 million since the takeover in 2005. But with the club now valued at US$6.55 billion (around £5bn), it represents a very smart investment for the Glazer family.
Indeed, while the prices being paid for football clubs across Europe have reached record levels, they are still seen as cheap investments compared with US sports’ leading franchises. Forbes’s annual list of the world’s most valuable sports teams has American football (NFL), baseball (MLB) and basketball (NBA) teams occupying the top ten positions, with only three Premier League clubs – Manchester United, Liverpool and Manchester City – in the top 50.
With NFL teams having an average franchise value of US$5.1 billion and NBA teams US$3.9 billion, many English football clubs still look like a bargain from the other side of the pond.
The risk of relegation
The latest to join this US bandwagon, TFG – a Texas-based portfolio of companies run by American businessman and film producer Dan Friedkin – is reported to have offered £400m to buy Everton, despite the club’s poor financial state.
“The Toffees” have been hit by the loss of sponsorships, as well as two sets of points deductions for breaching the Premier League’s financial rules, leading to revenue losses from lower league positions. While the new stadium being built at Liverpool’s Bramley-Moore dock has been yet another financial constraint, it will at least increase matchday income from the start of next season.
Everton’s new stadium at Bramley-Moore dock will open in time for the start of the 2025-26 season. Phil Silverman / Shutterstock
A wider reason for the relative bargain in valuations of European football clubs is the risk of relegation – something that is not part of the closed leagues of most US sports. While the threat of relegation (and promise of promotion) has always been an integral part of English and European football, the jeopardy this brings for supporters – and a club’s finances – does not exist in the NFL, NBA, Major League Soccer and similar competitions.
The Premier League, with its three relegation spots at the end of each season, has featured 51 different clubs since it launched in 1992. Only six clubs – Arsenal, Spurs, Chelsea, Manchester United, Liverpool and Everton – have been ever present, with Arsenal now approaching 100 years of consecutive top-flight football.
Other Premier League clubs have experienced the dramatic cost-benefit of relegation and promotion. Oldham Athletic, who were in the Premier League for its first two seasons, now languish in the fifth tier of the game, outside the English Football League (EFL). In contrast, Luton Town, who were in the fifth tier as recently as 2014, were promoted to the Premier League in 2023 – only to be relegated at the end of last season.
While it is difficult to compare football clubs with basketball and American football teams, the financial difference between having an open league, with relegation, and a closed league becomes apparent when you look at women’s football on both sides of the Atlantic.
Angel City, a women’s soccer team based in Los Angeles, only entered the National Women’s Soccer League (NWSL) in 2022 and is yet to win an NWSL trophy. But last month, the club was sold for US$250 million (£188m) to Disney’s CEO Bob Iger and TV journalist Willow Bay – the most expensive takeover in the history of women’s professional sport.
In comparison, Chelsea – seven-time winners of the English Women’s Super League and one of the most successful sides in Europe – valued its women’s team at £150 million (US$196m) earlier this summer. While there are a number of factors behind this price differential, the confidence that Angel City will always be a member of the big league of US soccer clubs – and share equally in its revenue – will have made its new owners very confident in the long-term soundness of their deal.
A further attraction for American investors is the potential to enter two markets – one mature (men’s football) and one effectively a start-up (the women’s game) – in a single purchase. In the US, the top men’s and women’s clubs are completely separate. But in Europe, most top-flight women’s teams are affiliated to men’s clubs – with the exception of eight-time Women’s Champions League winners Olympique Lyonnais Feminin, which split from the French men’s club when Korean-American businesswoman Michele Kang bought a majority stake in the women’s team in February 2024.
While interest in, and hence value of, the WSL is now growing fast, the women’s game in England is dwarfed by viewer ratings for the Premier League – the most watched sporting league in the world, viewed by an estimated 1.87 billion people every week across 189 countries.
These figures dwarf even the NFL which, while currently still the most valuable of all sporting leagues in terms of its broadcasting deals, must be looking at the growth of the Premier League with some jealousy. This may explain why some US franchise owners, such as Stan Kroenke, the Glazer family, Fenway Sports Group and Bill Foley, have subsequently purchased Premier League football clubs.
Ironically, for many spectators around the world, it is the intensity and competitiveness of most Premier League matches – brought on in part by the threat of relegation and prize of European qualification – that makes it so captivating. However, billionaire investors like guaranteed numbers and dislike risk – especially the degree of financial risk that exists in the Premier League and English Football League.
European not-so-Super League
In April 2021, 12 leading European clubs (six from England plus three each from Spain and Italy) announced the creation of the European Super League (ESL). This new mid-week competition was to be a high-revenue generating, closed competition with (eventually) 15 permanent teams and five annual additions qualifying from Europe. According to one of the driving forces behind the plan, Manchester United co-chairman Joel Glazer:
By bringing together the world’s greatest clubs and players to play each other throughout the season, the Super League will open a new chapter for European football, ensuring world-class competition and facilities, and increased financial support for the wider football pyramid.
The problem facing the Premier League’s “big six” clubs – and their ambitious owners – is there are currently only four slots available to play in the Champions League. So, their thinking went, why not take away the risk of not qualifying? However, the proposal was swiftly condemned by fans around Europe, together with football’s governing bodies and leagues – all of whom saw the ESL proposal as a threat to the quality and integrity of their domestic leagues. Following some large fan protests, including at Chelsea’s Stamford Bridge, Manchester City was the first club to withdraw – followed, within a couple of days, by the rest of the English clubs.
Under the terms of the ESL proposals, founding member clubs would have been guaranteed participation in the competition forever. Guaranteed participation means guaranteed revenues. The current financial gap between the “big six” and the other members of the Premier League, which in 2022/23 averaged £396 million, would have widened rapidly.
For example, these clubs would have been able to sell the broadcast rights for some of their ESL home fixtures direct to fans, instead of via a broadcaster. All of a sudden, that database of fans who have downloaded the official club app, or are on a mailing list, becomes far more valuable. These are the people most willing to watch their favourite team on a pay-per-view basis, further increasing revenues.
At the same time, a planned ESL wage cap would have stopped players taking all these increased revenues in the form of higher wages, allowing these clubs to become more profitable and their ownership even more lucrative.
American-owned Manchester United and Liverpool had previously tried to enhance the value of their investments during the Covid lockdowns via Project Big Picture – proposals to reduce the size of the Premier League and scrap one of the two domestic cup competitions, thus freeing up time for the bigger clubs to arrange more lucrative tours and European matches against high-profile opposition.
Most importantly, Project Big Picture would have resulted in changing the governance of the domestic game. Under its proposals, the “big six” clubs would have enjoyed enhanced voting rights, and therefore been able to significantly influence how the domestic game was governed.
Any attempt to increase the concentration of power raises concerns of lower competitive balance, whereby fewer teams are in the running to win the title and fewer games are meaningful. This is a problem facing some other major European football leagues including France’s Ligue 1, where interest among broadcasters has dwindled amid the perceived dominance of Paris St-Germain.
So while American-led attempts to change the structure of the Premier League have so far been foiled, it’s unlikely such ideas have gone away for good. The near-universal fear of fans – even those who welcome an injection of extra cash from a new billionaire owner – is that the spectacle of the league will only be diminished if such plans ever succeed.
And there is evidence from the women’s game that the US closed league format is coming under more pressure from football’s global forces. The NWSL recently announced it is removing the draft system that is designed (as with the NFL and NBA) to build in jeopardy and competitive balance when there is no risk of relegation.
Top US women’s football clubs are losing some of their leading players to other leagues, in part because European clubs are not bound by the same artificial rules of employment. In a truly global professional sport such as football, international competition will always tend to destabilise closed leagues.
Why do they keep buying these clubs?
Does this mean that American and other wealthy owners of Premier League clubs seeking to reduce their risks are ultimately fighting a losing battle? And if so, given the potential risks involved in owning a football club – both financial and even personal – why do they keep buying them?
The motivations are part-financial, part-technological and, as has always been the case with sports ownership, part-vanity.
The American economy has grown far faster than that of the EU or UK in recent years. Consequently, there are many beneficiaries of this growth with surplus cash, for whom football becomes an attractive proposition. Football clubs are more resilient to recessions than other businesses, holding their value better because they are effectively monopoly suppliers to fans whose brand loyalty exists in few other industries.
From 1993 to 2018, a period during which the UK economy more than doubled in size, the total value of Premier League clubs grew 30-fold. This brand loyalty helps make the biggest clubs more resilient to economic shocks than firms in other industries. While football, like many parts of the entertainment industry, was hit by lockdown during Covid, no clubs went out of business, despite the challenges of matches being played in empty stadiums.
Added to this, the exchange rates for US dollars have been very favourable until recently, making US investments in the UK and Europe cheaper for American investors.
So, while Manchester United fans would argue that the Glazer family have not been good for the club, United has been good for the Glazers. And Fenway Sports Group (FSG), who bought Liverpool for £300 million in 2010, have recouped almost all of that money in smaller share sales while remaining majority owners of Liverpool.
Despite this, the £2.5 billion price paid for Chelsea by the US Clearlake-Todd Boehly consortium in May 2022 took markets by surprise.
The sale – which came after the UK government froze the assets of the club’s Russian oligarch owner, Roman Abramovich, following the invasion of Ukraine – went through less than a year after Newcastle United had been sold by Sports Direct founder Mike Ashley to the Saudi Arabian Public Investment Fund for £305 million – approximately twice that club’s annual revenues. Yet Clearlake-Boehly were willing to pay over five times Chelsea’s annual revenues to acquire the club, even though it was in a precarious financial position.
Clearlake is a private equity group whose main aim is to make profits for their investors. But unlike most such investors, who tend to focus on cost-cutting, the Chelsea ownership came in with a high-spending strategy using new financial structuring ideas, such as offering longer player contracts to avoid falling foul of football’s profitability and sustainability rules (although this loophole has since been closed with Uefa, European football’s governing body, limiting contract lengths for financial regulation purposes to five years).
Chelsea’s location in one of the most expensive areas of London, combined with its on-field success under Abramovich, added to the attraction, of course. But there are other reasons why Clearlake, along with billionaire businessman Boehly, were willing to stump up so much for the club.
From Hollywood to the metaverse
While some British football fans may have viewed the Ted Lasso TV show as an enjoyable if slightly twee fictional account of American involvement in English soccer, it has enhanced the attraction of the sport in the US. So too Welcome To Wrexham – the fly-on-the-wall series covering the (to date) two promotions of Wales’s oldest football club under the unlikely Hollywood stewardship of Ryan Reynolds and Rob McElhenney.
The growth in US interest in English football is reflected in the record-breaking Premier League media rights deal in 2022, with NBC Sports reportedly paying $2.7 billion (£2.06bn) for its latest six-year deal.
But as well as football offering one of increasingly few “live shared TV experiences” that carry lucrative advertising slots, there may also be opportunities for more behind-the-scenes coverage of the Premier League – as has long been seen in US coverage of NBA games, for example, where players are interviewed in the locker room straight after games.
According to Manchester United’s latest annual report, the club now has a “global community of 1.1 billion fans and followers”. Such numbers mean its owners, and many others, are bullish about the potential of the metaverse in terms of offering a matchday experience that could be similar to attending a match, without physically travelling to Manchester.
Their neighbours Manchester City, part-owned by American private equity company Silver Lake, broke new (virtual) ground by signing a metaverse deal with Sony in 2022. Virtual reality could give fans around the world the feeling of attending a live match, sitting next to their friends and singing along with the rest of the crowd (for a pay-per-view fee).
Some investors are even confident that advancements in Abba-style avatar technology could one day allow fans to watch live 3D simulations of Premier League matches in stadiums all over the world. Having first-mover advantage by being in the elite club of owners who can make use of such technology could prove ever more rewarding.
More immediately, there are some indications that competitive matches involving England’s top men’s football teams could soon take place in US or other venues. Boehly, Chelsea’s co-owner, has already suggested adopting some US sports staples such as an All-Star match to further boost revenues. Indeed, back in 2008, the Premier League tentatively discussed a “39th game” taking place overseas, but that idea was quickly shelved.
The American owners of Birmingham City were keen to play this season’s EFL League One match against Wrexham in the US, but again this proposal did not get far. Liverpool’s chairman Tom Werner says he is determined to see matches take place overseas, and recent changes to world governing body Fifa’s rulebook could make it easier for this proposal to succeed.
The potential benefits of hosting games overseas include higher matchday revenues, increased brand awareness, and enhanced broadcast rights. While there is likely to be significant opposition from local fans, at least American owners know they would not face the same hostility about rising matchday prices in the US as they have encountered in England.
When the Argentinian legend Lionel Messi signed for new MLS franchise Inter Miami in 2023, season ticket prices nearly doubled as a result. And while there is vocal opposition to higher ticket prices in England, this has not translated into lower attendances for matches against high-calibre opposition – as evidenced by Aston Villa charging up to £97 for last week’s Champions League meeting with Bayern Munich.
Villa’s director of operations, Chris Heck, defended the prices by saying that difficult decisions had to be made if the club was to be competitive.
Manchester United’s matchday revenue per EPL season (£m)
For much of the 2010s, with broadcast revenues increasing rapidly, many Premier League owners saw little need to antagonise their loyal fan bases by putting up ticket prices. Indeed, Manchester United generated little more from matchday income in the 2021-22 season, as football emerged from the pandemic, than the club had in 2010-11 (see chart above).
However, this uneasy truce between fans and owners has ended. The relative flatlining of broadcast revenues since 2017, along with cost control rules that are starting to limit clubs’ ability to spend on player signings and wages, has changed clubs’ appetite for keeping ticket prices down. The result has been noticeable rises in individual ticket and season ticket prices at some clubs.
Yet season ticket holders and other local “legacy” fans generate little money compared with the more lucrative overseas and tourist fans. The latter may only watch their favourite team live once a season, but when they visit, they are far more likely not only to pay higher matchday prices, but to spend more on merchandise, catering and other offerings from the club.
Today’s breed of commercially aware, profit-seeking US Premier League owners – pioneered by the Glazer family, who saw that “sweating the asset” meant more than watching football players sprinting hard – understand there is a lot more value to come from English football teams. The clubs’ loyal local supporters may not like it, but English football’s American-led revolution is not done yet.
Kieran Maguire has taught courses and presented on football finance for the Professional Footballers Association, League Managers Association, FIFA and national football associations in Europe.
Christina Philippou is affiliated with the RAF FA, and Premier League education programs.
In the heart of Isalo National Park in central-southern Madagascar, at least 200km from the sea in any direction, is a remote valley with a mysterious past. This place, Teniky, can only be reached on foot, by hiking through a mountainous region dissected by steep canyons.
Part of the Teniky site has been known for well over 100 years, as we know from names and dates scratched on the rocks there. Various visitors in the 1950s and 1960s with an interest in archaeology described an amphitheatre-shaped location with man-made terraces, a rock shelter with neatly constructed sandstone walls, a chamber cut into the rock with pillars and benches, and a large number of niches cut in the steep cliffs. Recesses are still visible around some of the niches, suggesting that they could be closed off by a wooden or stone slab.
Among the suggested interpretations were that these structures had been made by shipwrecked Portuguese sailors, or Arabs, or even Phoenicians.
No similar rock-cut architecture is known anywhere else in Madagascar or on the east African coast, 400km away.
And until recently, no detailed archaeological studies had ever been carried out at Teniky.
Madagascar’s past is still the subject of considerable debate. Situated in the south-western Indian Ocean, it is one of the last big islands to have been settled by humans. Genetic studies have identified the people of Madagascar as having come mainly from Africa and from Southeast Asia. Archaeology suggests that the first settlers arrived about 1,500 to 1,000 years ago. The earliest settlements studied have been located along the coast, close to river estuaries.
Our archaeological study of Teniky, however, points to a new possibility: a former Persian presence in southern Madagascar about 1,000 years ago.
What we found at Teniky
Our study of high-resolution satellite images revealed the Teniky site was much larger than previously known. It showed there were more terraces and stone walls on a hill 2km to the west. This led us to take a closer look, hoping to get a better sense of who had lived there and when.
During field prospecting on this hill we discovered niches, cut in the walls of a rock shelter, that had not been described before.
Excavations at this rock shelter revealed more archaeological structures, including carved sandstone walls and a large stone basin.
Radiocarbon dating of charcoal samples from the site yielded dates from the late 10th to mid-12th centuries AD. Pieces of ceramic items of southeast Asian and Chinese origin found there have been dated by a specialist to the 11th to 14th centuries AD.
We also found sandstone quarries from which the stones used to build the walls at the rock shelters were extracted. And we found more stone basins on terraces.
The terraces cover a total area of about 30 hectares, indicating that Teniky must have been a fair-sized settlement. Water is available all year round in the valley below, where people might have been able to plant crops, fish for eels or even keep cattle.
Considering the dimensions, location and character of the rock-cut structures at Teniky, we think the niches and chambers served a ritual purpose.
Who were the people who lived at Teniky?
There is no other archaeological site like Teniky in Madagascar. So, the question arises as to what group of people settled there, far inland, and carved the niches and chambers in the cliff walls about 1,000 years ago. The presence of imported ceramics indicates that they took part in the Indian Ocean trade networks at the time but doesn’t tell us where they came from.
We think the answer may lie in the style of the rock-cut niches.
Rock architecture at Teniky, Madagascar. Courtesy Guido Schreurs.
They are similar to rock niches of the first millennium or earlier in Iran (formerly Persia). Archaeologists have interpreted those as belonging to Zoroastrian communities, which used them as part of their funeral rites.
Zoroastrianism was the dominant state religion of the Persian Sasanian Empire (224-651 AD). After the conquest of the Sasanian Empire by the Arabs in the mid-seventh century AD, Islam was imposed.
Zoroastrian funeral rites do not allow direct burial in the ground, so as not to pollute the earth. Instead, dead bodies are left in places of exposure not touching the ground. Once the flesh has decomposed or been removed by animals, the bone remains are dried and placed in bone receptacles (ossuaries).
We tentatively interpret the rock-cut architecture at Teniky as having been made by a community with Zoroastrian origins.
The larger rock-cut niches might have been the places where the bodies of the dead were exposed, and the smaller niches with recesses might have served as ossuaries, closed off by a slab to protect the bones from the rain and thus to prevent them from polluting the earth.
The stone basins at Teniky show stylistic similarities with those used in Zoroastrian ritual ceremonies to hold water or fire, both agents of ritual purity.
Zoroastrians abroad
There are few accounts of Madagascar written at the turn of the first and second millennia AD. Buzurg Ibn Shahriyar, a tenth-century Persian sailor and writer, collected stories from sailors in port towns on the Persian Gulf which suggest that Persian contacts with Madagascar may have existed then. The name Madagascar did not exist at that time but names like “Wak-wak” or “Qumr”/“Komr” may have referred to the island.
Historical documents, archaeological excavations and genetic studies indicate that Zoroastrians left Iran and settled in western India in the late eighth century AD.
Did they settle on the island of Madagascar too? If the rock-cut architecture and associated stone basins at Teniky are the work of a community with Zoroastrian origins, this would strongly point to a former Persian presence in southern Madagascar about 1,000 years ago.
Many questions remain. We hope future studies will answer some of them.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
The rapid growth in aquaculture means that billions of individual aquatic animals are now being farmed without basic information that could help ensure even minimal welfare standards. Our newly published study shows that these welfare risks are not uniform: Aquaculture is likely to have severe effects on welfare for some species, but negligible impacts on others.
Whenever humans manage animals on a large scale, welfare becomes a concern. As experts on aquatic animals and their welfare, we believe that taking proactive measures to shape the aquaculture industry’s growth will be critical for its long-term success.
A cuttlefish tackles a challenge originally designed for human children, demonstrating cephalopods’ complex cognitive processes.
Complex aquatic lives
In a wide-ranging review of the existing science, we identified seven risk factors in fish and other sea creatures that would be challenging or impractical to accommodate in captivity. They include 1) migratory behavior, 2) solitary social structures, 3) long life spans, 4) carnivorous feeding habits, 5) cannibalism, 6) living at depths of 165 feet (50 meters) or more, and 7) elaborate courtship or involved parental care.
We researched these characteristics for each of the more than 400 species currently farmed in aquaculture. Our analysis found that many species of fishes, reptiles and amphibians are likely to suffer in aquaculture because they won’t be able to engage in their natural behaviors in farmed conditions. The same is true for crustaceans such as lobsters and for cephalopods such as cuttlefish.
In contrast, aquatic plants and other invertebrates such as oysters would experience fewer differences between their life in the wild versus in a tank, pond or other aquaculture production system.
We also found that species most at risk are among the most expensive on the market but contribute the least to global production. By shifting toward species whose behaviors and life habits are more compatible with aquaculture, the industry could minimize animal welfare risk while also keeping prices down and production quantities high. In other words, protecting aquatic animal welfare is compatible with producing affordable, nutritious food.
Animal welfare in the water
Research shows that many aquatic animals are intelligent, emotional, curious, highly social and have strong preferences. Like land animals, they can suffer if their needs aren’t met.
Divers observe a feeding school of bumphead (also referred to as humphead) parrotfish on Australia’s Great Barrier Reef.
It would be very difficult and expensive to accommodate this species’ long life span, large range, complex foraging behavior and dynamic social relationships in the highly restrictive and monotonous environments of aquaculture.
We also found examples of invertebrate animals with similarly elaborate ways of life. One example is the red swamp crayfish (Procambarus clarkii), a comparatively small crustacean that builds elaborate tunnel and chamber systems underground. Females care attentively for their tiny offspring, fanning, cleaning and feeding juveniles for up to four months after they hatch.
In contrast, plant species farmed in aquaculture, such as seaweeds and water spinach (Ipomoea aquatica), are nutritious, protein-rich foods that can be raised without posing direct animal welfare concerns.
In 2021 alone, 56 species were farmed for the first time. By identifying species that may naturally adapt better to life in captivity, aquaculture producers and policymakers can steer their industry toward a more humane future.
This approach is already finding support in the U.S., where Washington and California have banned octopus farming. The states acted partly in response to research showing that octopuses are intelligent, curious, social animals that can solve problems and recognize individual people – qualities that are incompatible with being raised en masse for food.
More research is needed to understand the lives and behaviors of other sea creatures that are currently farmed or targeted for production in the future. Most of these species remain understudied and mysterious, which makes it hard to make informed decisions about whether they are suitable for farming.
Better data could contribute to aquaculture policy, while also boosting public appreciation for the diversity and intricacy of life on a planet that is 70% aquatic.
Becca Franks receives funding from TinyBeam Foundation and Open Philanthropy.
Chiawen Chiang does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A plastic microfiber found in the exhaled breath of a bottlenose dolphin is nearly 14 times smaller than a strand of hair and can be seen only with a microscope. Miranda Dziobak/College of Charleston, CC BY-SA
Our study found the microplastic particles exhaled by bottlenose dolphins (Tursiops truncatus) are similar in chemical composition to those identified in human lungs. Whether dolphins are exposed to more of these pollutants than people are is not yet known.
The ocean releases microplastics into the air through surface froth and wave action. Once the particles are released, wind can transport them to other locations. Steve Allen, CC BY-SA
In fact, bubble bursts caused by wave energy can release 100,000 metric tons of microplastics into the atmosphere each year. Since dolphins and other marine mammals breathe at the water’s surface, they may be especially vulnerable to exposure.
Where there are more people, there is usually more plastic. But for the tiny plastic particles floating in the air, this connection isn’t always true. Airborne microplastics are not limited to heavily populated areas; they pollute undeveloped regions, too.
Our research found microplastics in the breath of dolphins living in both urban and rural estuaries, but we don’t yet know whether there are major differences in amounts or types of plastic particles between the two habitats.
During these brief permitted health assessments, we held a petri dish or a customized spirometer – a device that measures lung function – above the dolphin’s blowhole to collect samples of the animals’ exhaled breath. Using a microscope in our colleague’s lab, we checked for tiny particles that looked like plastic, such as pieces with smooth surfaces, bright colors or a fibrous shape.
Since plastic melts when heated, we used a soldering needle to test whether these suspected pieces were plastic. To confirm they were indeed plastic, our colleague used a specialized method called Raman spectroscopy, which uses a laser to create a structural fingerprint that can be matched to a specific chemical.
Our study highlights how extensive plastic pollution is – and how other living things, including dolphins, are exposed. While the impacts of plastic inhalation on dolphins’ lungs are not yet known, people can help address the microplastic pollution problem by reducing plastic use and working to prevent more plastic from polluting the oceans.
Leslie Hart receives funding from the National Institute of Environmental Health Sciences of the National Institutes of Health, Sea Grant, and the National Science Foundation. Research reported in this article was supported by the National Institute of Environmental Health Sciences of the National Institutes of Health under Award Number R15ES034169 and the College of Charleston’s School of Health Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Leslie Hart is an epidemiological consultant for the National Marine Mammal Foundation; however, this study was not conducted as a consultant.
Bottlenose dolphin health assessments were conducted under Scientific Research Permit #26622 and #24359, issued by the National Oceanic and Atmospheric Administration’s (NOAA) National Marine Fisheries Service (NMFS). Research studies were reviewed and approved by Mote Marine Laboratory and NMFS Atlantic Institutional Animal Care and Use Committees (IACUC).
Miranda Dziobak does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
I do research on racist and xenophobic speech. I am also an American citizen, and have voted from overseas since 1996 (first in the U.K., and now in Canada).
The current law giving Americans overseas the right to vote in federal elections is the Uniformed and Overseas Citizens Absentee Voting Act, which was signed into law by Ronald Reagan, a Republican president.
The federal program to help American citizens vote while living overseas is overseen by the Department of Defense — which makes sense, given that a large number of them are members of the United States military. All of this should give pause to anyone who thinks that allowing overseas citizens to vote is some sort of left-wing conspiracy.
Complex process
Nor is it an easy matter to vote from overseas. Every state has its own process for verifying citizenship after the registration and request form reaches them, and each has its own rules that voters must follow in order for their ballot to be counted.
My own state, New Jersey, is relatively simple: I can email my registration/request form, receive my ballot by email, and email it back. But I must also remember to mail in the paper version of my ballot or my vote won’t count.
This is easy enough for me, from Canada or previously the U.K. But it’s much more difficult for American citizens living in places that lack reliable postal services, who often have to use expensive courier services to carry out their duty as citizens.
My husband’s state is New York. He is allowed to email his ballot request, but he must also mail a paper version of the request. And the ballot itself comes with an elaborate set of envelope templates that require precise folding — and must arrive by a strict deadline, no matter where they’re being mailed from.
He’s a former graphic designer, and comfortable performing this task. But imagine trying to do so while suffering from arthritis or vision problems — especially when the home-printed version has tiny text. In short, there is nothing easy about voting from abroad.
So why use inflammatory language to pretend it’s an easy matter to generate many thousands of fraudulent overseas votes? One explanation would be to sow doubt about the election results. Anything that can introduce uncertainty and slow down the counting process can be exploited in an effort that could allow Trump and his allies to falsely declare him the winner on Nov. 5.
Trump’s campaign has made no secret about its plan to follow this path.
Language that suggests American citizens abroad are not really American also fits into a larger pattern of stoking divisions — and of drawing ever tighter boundaries around who would be counted as “real” Americans. This is a classic fascist power move, one that leads to a sharply defined “us,” who are worthy of moral consideration, as opposed to “them,” who are not.
Disenfranchising citizens abroad
Importantly, the movement against overseas voters is not just confined to a social media post. There are lawsuits in several states designed to disenfranchise American citizens abroad. These are citizens who may have gone to enormous lengths to carry out their duties by asking for and sending in election ballots, often at substantial personal expense and faced with substantial barriers.
Trump and his allies are working hard to prevent Americans abroad from exercising their most basic rights of citizenship. When Trump uses language that accuses overseas voters of fraud and foreign interference, it suggests we’re not really Americans.
There’s a major problem in doing so. As mentioned, a large segment of American citizens abroad are members of the U.S. armed forces. Efforts to disenfranchise Americans abroad are also efforts to disenfranchise the military.
‘Figleaf’ language
That’s why Trump’s allegation on Truth Social that Democrats “want to dilute the TRUE vote of our beautiful military” makes no sense. This is especially true given it’s coming from someone who’s attacking the very law that allows members of the military to vote from abroad, including casting ballots for him if they’re so inclined.
This is what I call a figleaf — an additional bit of speech that provides just a bit of cover for saying something else that is much less acceptable. The allegation suggests, to someone who doesn’t understand overseas voting, that Trump somehow supports the military.
Trump’s “diluting the vote” rhetoric also plays into the deeply racist Great Replacement Theory. This theory holds that Democrats and other shadowy forces (often cast as Jewish) are plotting to replace white Americans with foreigners, in part as a way to secure electoral victory.
Overseas voting might seem like a niche issue. But overseas citizens could make all the difference in a close election. The attack on overseas voting is part of a much larger pattern of destructive suggestions from Trump about who is and is not a real American.
Source: The Conversation (Au and NZ) – By Janet Davies, Respiratory Allergy Stream Co-chair, National Allergy Centre of Excellence; Professor and Head, Allergy Research Group, Queensland University of Technology
Hay fever (or allergic rhinitis) is a long-term inflammatory condition that’s incredibly common. It affects about one-quarter of Australians.
Symptoms vary but can include sneezing, itchy eyes and a runny or blocked nose. Hay fever can also contribute to sinus and ear infections, snoring, poor sleep and asthma, as well as lower performance at school or work.
But many people didn’t have hay fever as a child, and only develop symptoms as a teenager or adult.
Here’s how a combination of genetics, hormones and the environment can lead to people developing hay fever later in life.
Remind me, what is hay fever?
Hay fever is caused by the nose, eyes and throat coming into contact with a substance to which a person is allergic, known as an allergen.
Common sources of outside allergens include airborne grass, weed or tree pollen, and mould spores. Pollen allergens can be carried indoors on clothes, and through open windows and doors.
Depending on where you live, you may be exposed to a range of pollen types across the pollen season, but grass pollen is the most common trigger of hay fever. In some regions the grass pollen season can extend from spring well into summer and autumn.
How does hay fever start?
Hay fever symptoms most commonly start in adolescence or young adulthood. One study found 7% of children aged six had hay fever, rising to 44% among adults aged 24.
Before anyone has hay fever symptoms, their immune system has already been “sensitised” to specific allergens, often allergens of grass pollen. Exposure to these allergens means their immune system has made a particular type of antibody (known as IgE) against them.
During repeated or prolonged exposure to an allergen source such as pollen, a person’s immune system may start to respond to another part of the same allergen, or another allergen within the pollen. Over time, these new allergic sensitisations can lead to development of hay fever and possibly other conditions, such as allergic asthma.
Why do some people only develop hay fever as an adult?
1. Environmental factors
Some people develop hay fever as an adult simply because they’ve had more time to become sensitised to specific allergens.
Migration or moving to a new location can also change someone’s risk of developing hay fever. This may be due to exposure to different pollens, climate and weather, green space and/or air quality factors.
A number of studies show people who have migrated from low- and middle-income countries to higher-income countries may be at a higher risk of developing hay fever. This may be due to local environmental conditions influencing expression of genes that regulate the immune system.
2. Hormonal factors
Hormonal changes at puberty may also help drive the onset of hay fever. This may relate to sex hormones, such as oestrogen and progesterone, affecting histamine levels, immune regulation, and the response of cells in the lining of the nose and lower airways.
3. Genetic factors
Our genes underpin our risk of hay fever, and whether this and other related allergic disease persists.
For instance, babies with the skin condition eczema (known as atopic dermatitis) have a three times greater risk of developing hay fever (and asthma) later in life.
Having a food allergy in childhood is also a risk factor for developing hay fever later in life. In the case of a peanut allergy, that risk is more than 2.5 times greater.
What are the best options for treatment?
Depending on where you live, avoiding allergen exposures can be difficult. But pollen count forecasts, if available, can be useful. These can help you decide whether it’s best to stay inside to reduce your pollen exposure, or to take preventative medications.
If you have mild, occasional hay fever symptoms, you can take non-drowsy antihistamines, which you can buy at the pharmacy.
However, for more severe or persistent symptoms, intranasal steroid sprays, or an intranasal spray containing a steroid with an antihistamine, are the most effective treatments. It is important to use these regularly and correctly.
Allergen immunotherapy, also known as desensitisation, is an effective treatment for people with severe hay fever symptoms. It can reduce the need for medication and allergen avoidance.
However, it involves a longer treatment course (about three years), usually with the supervision of an allergy or immunology specialist.
When should people see their doctor?
It is important to treat hay fever, because symptoms can significantly affect a person’s quality of life. A GP can:
recommend treatments for hay fever and can guide you to use them correctly
organise blood tests to confirm which allergen sensitisations (if any) are present, and whether these correlate with your symptoms
screen for asthma, which commonly occurs alongside hay fever and may require other treatments
arrange referrals to allergy or immunology specialists, if needed, for other tests, such as allergen skin prick testing, or to consider allergen immunotherapy if symptoms are severe.
Janet Davies receives funding from the ARC, NHMRC, Department of Health and Ageing, and MRFF. She has conducted research on diagnostics in collaboration with Abionic SA, Switzerland, supported by the National Foundation for Medical Research Innovation with co-contribution from Abionic. Her research has been supported by in-kind services or materials from Sullivan Nicolaides Pathology (Queensland), Abacus Dx (Australia), Stallergenes (France), Stallergenes Greer (Australia), Swisens (Switzerland), Kenelec (Australia), and ThermoFisher (Sweden), as well as cash or in-kind contributions from Partner Organisations for the NHMRC AusPollen Partnership Project GNT1116107, Australasian Society Clinical Immunology Allergy, Asthma Australia; Stallergenes Australia; Bureau Meteorology, Commonwealth Scientific Industrial Research Organisation, Federal Office of Climate and Meteorology Switzerland. QUT owns patents relevant to grass pollen allergy diagnosis (US PTO 14/311944 issued, AU2008/316301 issued) for which Janet Davies is an inventor. She is the Executive Lead, Repository and Discovery Pillar, and Co-Chair Respiratory Allergy Stream for the National Allergy Centre of Excellence.
Unrelated to this article, Joy Lee has received funding from the Centre of Research Excellence in Treatable Traits in Asthma, Sanofi, Fondazione Menarini and GSK. This funding support was solely used for presenting at educational meetings in asthma and travel grants to attend international meetings and conferences in asthma and allergic diseases. She has been on advisory boards for Tezepelumab (Astra Zeneca). She is affiliated with the National Allergy Centre of Excellence as the co-chair of the Respiratory Allergy Leadership Group.
As the latest global biodiversity summit gets underway in Colombia, finance for the conservation and restoration of nature is one of the key themes of negotiations.
Global wildlife populations have shrunk by an average of 73% in the past 50 years, according to the 2024 Living Planet report. Consequently, momentum is growing worldwide to deliver new nature markets, such as biodiversity credits, to unlock new sources of funding.
Basically, nature markets are systems of exchange that match demand for nature regeneration with a supply of nature-positive projects.
But this creates risks, as well as opportunities, for Indigenous peoples. Without due care for data sovereignty, Indigenous communities may lose out yet again.
Nature markets could enable Indigenous peoples to fulfill their duties of guardianship. But such markets could also forge a new form of colonialism, including enclosure and appropriation of habitats and species that Indigenous peoples have traditional connections to.
Efforts to prevent deforestation have at times displaced Indigenous people. Mario Tama/Getty Images
One neglected area is Indigenous data. This relates to traditional and cultural information, population data, oral histories and ancestral knowledge relating to the environment and natural resources.
If care is not taken with Indigenous data, there are serious risks of reproducing colonialist patterns of exploitation.
Data represents reality. Data helps decision makers to know whether their interventions are effective, even when they are far away from the ecosystems being protected or restored.
If data are accurate, authentic and timely, a funder does not need to set foot in a remote habitat to know whether its carbon stock or native species abundance is improving or declining.
Biodiversity credits represent one way to operationalise a nature market. They are basically a vehicle for data. The emerging methodologies are bundles of metrics and indicators that track biodiversity and ecological function.
Biodiversity credits use metrics and indicators that track ecological function. Renee Raroa, CC BY-SA
The data enable credit holders to make credible claims of biodiversity uplift, or avoided biodiversity loss, as a consequence of credit sales.
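To make the idea of a credit as a "vehicle for data" concrete, here is a minimal sketch of what one credit's data bundle might look like, including the governance fields that Indigenous data sovereignty concerns would point to. All field names and values are my own illustration; no existing credit standard is implied:

```python
from dataclasses import dataclass

@dataclass
class BiodiversityCredit:
    """Illustrative credit record: a bundle of ecological metrics plus
    governance fields. Hypothetical structure, not an existing standard."""
    site_id: str
    period: str            # reporting period, e.g. "2024-Q1"
    metrics: dict          # indicator name -> measured value
    data_steward: str      # who governs the underlying data
    consent_terms: str     # conditions the steward attaches to data use
    benefit_share: float   # fraction of sale value returned to the steward

credit = BiodiversityCredit(
    site_id="site-001",
    period="2024-Q1",
    metrics={"native_species_richness": 142, "canopy_cover_pct": 61.5},
    data_steward="local community (example)",
    consent_terms="use limited to verification; no resale of raw data",
    benefit_share=0.4,
)
print(credit.metrics["native_species_richness"])
```

The point of the sketch is that the ecological metrics and the consent and benefit-sharing terms travel together in one record, so a buyer's claim of "biodiversity uplift" cannot be separated from the conditions under which the data were shared.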
As a representation of ecological reality, data are at least one step removed from the habitats and species they represent. This opens up the potential for nature markets to rely on the exchange of verifiable data, without the need to commodify nature itself, and therefore impinge on the ownership rights of Indigenous communities.
However, data are not free from such considerations. To divert data into a system of market exchange raises a different but related set of concerns about ownership, benefit and sovereignty.
The rise of Indigenous data sovereignty
Indigenous data sovereignty is the right of Indigenous peoples to govern the collection, ownership and application of data about Indigenous communities, peoples, lands and resources. It relates to data produced by and about Indigenous peoples and the environments they have relationships with.
Nature and people are precious, so data that represent nature and people are imbued with that preciousness. As Māori practitioner Ngapera Riley has written:
Data is a taonga (treasure). It’s something that people gift us, and that we gift to others as we go about our daily lives.
In te ao Māori, data come in many forms. This includes whakataukī (proverbs), moteatea (chants), whaikorero (oratory), maramataka (calendar), whakapapa (genealogies), pūrākau (stories) and increasingly digital forms.
Consequently, we must take great care in how data are accessed, shared, stored and used. This is especially critical in a system of market exchange. The dominant markets of today are profit-driven, creating incentives for appropriation and exploitation.
Sovereignty means power
Indigenous peoples are conscious that, while there are risks in data and knowledge sharing, there are also opportunities. Indigenous data and knowledge is a living and evolving system, which can contribute to effective responses to environmental challenges, including the protection and regeneration of biodiversity.
The principles of Indigenous data governance emerged from deliberations about how to protect Indigenous sovereignty when sharing knowledge and data for academic research. These CARE principles hold that Indigenous data should be governed for collective benefit, authority to control, responsibility and ethics.
This is critically important in ecological research, which too often neglects duties relating to data about natural ecosystems and the people who live within them.
It is troubling that the recognition of Indigenous data sovereignty is largely lacking from the discussion of nature markets so far. Unless Indigenous data sovereignty is upheld, the legitimacy of nature markets will likely be irreversibly tarnished.
But Indigenous data sovereignty is more than a risk: it is a source of power. It is a right to self-determination, to choose how data are used and their value is distributed. By ensuring this right, nature markets might deliver on their promise of inclusive, sustainable prosperity.
David Hall is Policy Director for the Toha Network.
Mike Taitoko is a shareholder of Toha Foundry Ltd and a Trustee of Toha Network Ltd.
Nathalie Whitaker works for the Toha Network in various capacities, including shareholder of Toha Foundry and trustee of Toha Network Trust.
Renee Raroa is the Establishment Director of the East Coast Exchange, a venture in the Toha Network.
Tasman Turoa Gillies is Head of Operations for Takiwā, part of the Toha Network.
Islands have long intrigued explorers and scientists. These isolated environments serve as natural laboratories for understanding how species evolve and adapt.
Islands are also centres of species diversity. It has long been speculated that islands support exceptionally high amounts of global biodiversity, but the true extent was unknown until now.
In world-first research published in Nature today, my colleagues and I counted and mapped the diversity of plant life on Earth’s islands. We found 21% of the world’s total plant species are endemic to islands, meaning they occur nowhere else on the planet.
These findings are important. Island plants are at higher risk of extinction than those on mainlands. Detailed knowledge of plants species, and where they grow, is essential for monitoring and conserving them.
Mapping island floras worldwide
The study involved an international team of scientists. We developed an unprecedented database of vegetation information from more than 3,400 geographical regions worldwide, including about 2,000 islands.
The definition of an island is somewhat arbitrary. Conventionally, an island is a landmass entirely surrounded by water and smaller than a continent. This means Tasmania and New Guinea are islands, but mainland Australia – a continent in itself – is not. This is the definition we used.
We found 94,052 plant species, or 31% of the world’s total, are native to islands. Of these, 63,280 plant species, or 21%, only occur on islands.
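These headline shares can be sanity-checked with simple arithmetic, using the world total implied by the study's own figures (94,052 species being 31% of all plants):

```python
# World total implied by "94,052 species = 31% of the world's plants"
world_total = 94_052 / 0.31
print(round(world_total))  # roughly 303,000 species

native, endemic = 94_052, 63_280
print(f"native to islands:  {native / world_total:.0%}")   # 31%
print(f"endemic to islands: {endemic / world_total:.0%}")  # 21%

# The same numbers also mean about two-thirds of island natives
# grow nowhere else on the planet.
print(f"endemic share of island natives: {endemic / native:.0%}")  # 67%
```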
Endemic species were concentrated on large tropical islands such as Madagascar, New Guinea and Borneo. On Madagascar alone, 9,318 plant species – 83% of its total flora – grow there and nowhere else.
Fewer plant species overall were found at ocean archipelagos such as Hawaii, the Canary Islands and the Mascarenes (east of Madagascar, including La Reunion and Mauritius). But a large share of their species were still unique to these islands.
Two palms are endemic to Australia’s Lord Howe Island – Howea forsteriana and H. belmoreana. They are one of the best-researched examples of “sympatric speciation”, or in other words, species that evolve from a common ancestor at the same location.
This mode of evolution has long been hypothesised to exist. But examples are rare, and highly useful for evolutionary research.
The Norfolk Island Pine (Araucaria heterophylla) is, of course, named after the tiny island where it is found. This species, while endangered in the wild, is now widely planted along Australia’s beaches where it is instantly recognisable to us.
Islands are of great conservation concern
Islands cover just 5.3% of the world’s land area, but contribute disproportionately to global biodiversity.
Island plants are at much greater risk of extinction than species found in mainland areas, for reasons such as:
small population sizes
unique evolutionary traits that make them vulnerable to invasive species such as herbivores
specific habitat requirements
habitat degradation
threats from invasive plant and animal species
climate change.
Some 57% of the island-endemic species we assessed are considered critically endangered, endangered, vulnerable, or near-threatened, according to the International Union for Conservation of Nature.
Alarmingly, 176 plant species endemic to islands are already classified as extinct, accounting for 55% of all known extinct plant species globally. Among these is Hawaii’s vulcan palm (Brighamia insignis), which is now considered extinct in the wild. However, the species is popular as an ornamental plant and still survives in gardens.
Hawaii’s vulcan palm is extinct in the wild, but is popular as an ornamental plant. Shutterstock
Other species might be less lucky; extinction in the wild may mean being lost for ever.
So, assessing the conservation status of island floras is important. Under a globally agreed United Nations target, 30% of the world’s land and oceans should be protected by 2030. We calculated how much of the world’s island area is protected today. Disappointingly, only 6% of endemic plant species occur on islands that meet this target.
For instance, New Caledonia, Madagascar and New Guinea – known for their many endemic plant species – contain relatively low levels of protected areas.
Assessing the conservation status of island floras is important. Shutterstock
Protecting our island plants
Urgent action is needed to protect island biodiversity. This includes expanding protected areas, prioritising regions with high numbers of endemic species, and implementing habitat restoration projects.
Without such measures, the unique floral diversity of islands may continue to decline, with potentially severe consequences for global biodiversity.
Much more research is needed to determine the best conservation strategies for all these plant species. Accurate data is vital to guide future conservation strategies and safeguard against further loss.
Our study also serves as a stark reminder of the urgent need for targeted plant conservation efforts on islands. Many species teeter on the brink of extinction, and time is running out to preserve this irreplaceable natural heritage.
Julian Schrader does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
On land, we’re familiar with heatwaves and cold snaps. But the deep sea also experiences prolonged periods of hot and cold.
Marine heatwaves and cold spells can severely damage ocean ecosystems and habitats such as coral reefs. These extremes can also force species to move or die and cause sudden losses for fisheries.
In research published today in Nature, we show almost half of the heatwaves and cold snaps reaching the ocean’s twilight zone – between 200 and 1,000 metres – are driven by large eddy currents, swirling currents which transport warm or cold water.
As the oceans heat up, heatwaves linked to eddy currents are getting more intense – and so are cold snaps. These pose potential threats to the vast amount of life in the twilight zone, home to the world’s most abundant vertebrate and the largest migration on the planet.
Monitoring the deep sea is hard
About 90% of heat trapped by greenhouse gases has gone into the oceans. As a result, marine heatwaves are arriving more frequently – especially off Australia’s east coast, Tasmania, the northeast Pacific coast in the United States and in the North Atlantic.
Researchers have long relied on satellite measurements of temperatures at the ocean surface to detect these extreme ocean temperature events. Surface temperatures are directly influenced by the atmosphere. But it’s different at depth.
Satellites can’t measure temperatures under the surface, making the deep sea much harder to monitor.
Instead, we have a handful of long-term moorings – measurement buoys suspended at depth – across the world’s oceans. These are hugely valuable, as they continuously record temperatures and make it possible to detect extreme temperature changes.
In recent decades, there have been welcome advances in the form of Argo floats – robotic divers which dive 2,000 metres deep and resurface, sampling temperature and salinity as they go.
Data from these two sources coupled with traditional measurements from vessels made our research possible.
Heatwaves inside eddy currents
The data gave us two million high quality temperature readings or “profiles” across the world’s oceans, spanning three decades. We used this rich data to uncover the role of eddy currents.
Ocean eddies are huge loops of swirling current, sometimes hundreds of kilometres across and reaching down over 1,000 metres. They’re so large you can see them on satellite images.
These powerful currents can push warm surface water down deeper or lift deep cold water up, causing rapid temperature changes. Eddies can travel a long distance before dissipating, carrying bodies of colder or warmer water with them.
We discovered their role in triggering deep heatwaves and cold snaps by examining each temperature profile and cross-matching this with eddies present at the same time and location.
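As a rough sketch of that cross-matching step, the snippet below flags heatwave days in a synthetic temperature record using a simplified, flat-percentile version of the widely used definition (the real definition uses a day-of-year climatology), then checks how much of each event overlaps a hypothetical window of eddy presence. It illustrates the logic of the method, not the authors' actual pipeline:

```python
import numpy as np

def detect_heatwaves(temps, pct=90, min_days=5):
    """Flag heatwave events: runs of at least `min_days` consecutive
    days above the `pct`th percentile of the whole record.
    Simplified stand-in for the standard climatology-based definition."""
    thresh = np.percentile(temps, pct)
    above = np.append(temps > thresh, False)  # sentinel to close final run
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_days:
                events.append((start, i - 1))
            start = None
    return events

# Synthetic one-year daily series at depth, with one imposed warm spell
rng = np.random.default_rng(0)
temps = 10 + 0.3 * rng.standard_normal(365)
temps[150:160] += 2.0  # 10-day warm anomaly, e.g. a passing warm eddy

events = detect_heatwaves(temps)
eddy_days = set(range(148, 165))  # hypothetical eddy-overhead window
for s, e in events:
    overlap = len(set(range(s, e + 1)) & eddy_days) / (e - s + 1)
    print(f"heatwave days {s}-{e}, fraction inside eddy: {overlap:.0%}")
```

In the study's terms, an event whose days fall largely inside an eddy's footprint would be attributed to that eddy; done over two million profiles, this yields the "nearly half" attribution figure.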
This showed eddies played a major role in triggering marine heatwaves and cold spells in waters deeper than 100 metres – especially in the mid-latitude oceans north and south of the tropics.
The East Australian Current takes warm water southward down the east coast, triggering many eddies. More than 70% of deeper marine heatwaves in this area actually took place inside ocean eddies.
When eddies in this current spin anticlockwise, they tend to bring marine heatwaves, transporting warm water to the depths. But when they spin clockwise, they bring cold deep water up higher, bringing cold spells.
We found deep extreme temperature events linked to eddies are seen more often in major ocean boundary currents, such as the East Australian and Kuroshio currents in the Pacific and the Gulf Stream in the Atlantic. Deep marine heatwaves also occur in the Leeuwin Current off Western Australia. The stronger the eddy currents, the more likely they are to trigger extreme temperatures deeper down.
Eddy currents are the main driver for nearly half of all deep ocean heatwaves and cold spells. Other drivers include ocean temperature fronts from strong ocean currents and large-scale ocean waves.
When eddy currents spin one way, they can send heat to the depths. When they spin another, they can bring cold water towards the surface. olrat/Shutterstock
What does this mean for ocean life?
Day in, day out, heat trapped by greenhouse gases makes its way to the oceans.
You would expect marine heatwaves to increase, which they are. But cold snaps haven’t gone away. In fact, extremes of both heat and cold are getting more intense in the deeper ocean as the climate changes.
Our research suggests eddy currents are magnifying the warming rates of marine heatwaves and the cooling rates of cold spells. Warmer oceans overall are leading to stronger eddy currents, which in turn can trigger large temperature changes over a greater vertical distance.
Because we can detect ocean eddies with satellites, we can use this research to predict when deeper marine heatwaves and cold spells are likely. This will help find which ecosystems are likely to be hit by extreme heat or cold and assess what damage they do.
The ocean layer these extremes affect is called the twilight zone – between 200 and 1,000 metres deep. These depths are home to many important fish species and plankton. In fact, this zone has more fish biomass than the rest of the ocean combined. One small fish, the bristlemouth, is likely the most abundant vertebrate on Earth, potentially numbering in the quadrillions – thousands of trillions.
The mesopelagic twilight zone is rich in life. Clockwise from top: mesopelagic jellyfish, viperfish, lanternfish, larvacean, copepod and squid. Wikimedia/Drazen et al, CC BY-NC-ND
When night falls, vast numbers of fish, crustaceans and other creatures migrate towards the surface to feed in the largest animal migration on Earth. During the day, many open ocean fish head to the twilight to avoid sharks, whales and other surface predators.
Heat and cold brought by eddies aren’t the only threat to the twilight zone. Marine heatwaves can lead to low oxygen levels in the water and reduced nutrients. We will need to find out what threat these combined changes pose to life in the twilight zone.
Ming Feng receives funding from CSIRO, the Integrated Marine Observing System (IMOS), Western Australia State Government, and Fisheries Research and Development Corporation
The sight of a fireball streaking across the sky brings wonder and excitement to children and adults alike. It’s a reminder that Earth is part of a much larger and incredibly dynamic system.
Each year, roughly 17,000 of these fireballs not only enter Earth’s atmosphere, but survive the perilous journey to the surface. This gives scientists a valuable chance to study these rocky visitors from outer space.
Scientists know that while some of these meteorites come from the Moon and Mars, the majority come from asteroids. But two separate studies published in Nature today have gone a step further. The research was led by Miroslav Brož from Charles University in the Czech Republic, and Michaël Marsset from the European Southern Observatory in Chile.
The papers trace the origin of most meteorites to just a handful of asteroid breakup events – and possibly even individual asteroids. In turn, they build our understanding of the events that shaped the history of the Earth – and the entire solar system.
What is a meteorite?
Only when a fireball reaches Earth’s surface is it called a meteorite. Meteorites are commonly classified into three types: stony meteorites, iron meteorites, and stony-iron meteorites.
Stony meteorites come in two types.
The most common are the chondrites, which have round objects inside that appear to have formed as melt droplets. These comprise 85% of all meteorites found on Earth.
Most are known as “ordinary chondrites”. They are then divided into three broad classes – H, L and LL – based on the iron content of the meteorites and the distribution of iron and magnesium in the major minerals olivine and pyroxene. These silicate minerals are the mineral building blocks of our solar system and are common on Earth, being present in basalt.
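As an illustration of how the olivine test works, here is a toy classifier based on fayalite (Fa) content, the iron-bearing end-member of olivine. The Fa ranges are approximate, commonly cited values used here only for illustration; real classification also weighs metal content and other criteria:

```python
def classify_ordinary_chondrite(fa_mol_pct):
    """Rough H/L/LL assignment from olivine fayalite content (mol% Fa).
    Ranges are approximate textbook values, for illustration only."""
    if 16 <= fa_mol_pct <= 20:
        return "H"    # high total iron, more metal
    if 22 <= fa_mol_pct <= 26:
        return "L"    # low total iron
    if 26 < fa_mol_pct <= 33:
        return "LL"   # low total iron, low metal
    return "outside the simple H/L/LL ranges"

print(classify_ordinary_chondrite(18))  # H
print(classify_ordinary_chondrite(24))  # L
print(classify_ordinary_chondrite(29))  # LL
```

More iron locked into olivine as fayalite means less iron available as free metal, which is why a single silicate measurement can separate the three classes reasonably well.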
“Carbonaceous chondrites” are a distinct group. They contain high amounts of water in clay minerals, and organic materials such as amino acids. Chondrites have never been melted and are direct samples of the dust that originally formed the solar system.
The less common of the two types of stony meteorites are the so-called “achondrites”. These do not have the distinctive round particles of chondrites, because they experienced melting on planetary bodies.
Most asteroids reside in a dense belt between Mars and Jupiter. The asteroid belt itself consists of millions of asteroids swept around and marshalled by the gravitational force of Jupiter.
The interactions with Jupiter can perturb asteroid orbits and cause collisions. This results in debris, which can aggregate into rubble pile asteroids. These then take on lives of their own.
It is asteroids of this type that the recent Hayabusa and OSIRIS-REx missions visited and returned samples from. These missions established the connection between distinct asteroid types and the meteorites that fall to Earth.
S-class asteroids (akin to stony meteorites) are found on the inner regions of the belt, while C-class carbonaceous asteroids (akin to carbonaceous chondrites) are more commonly found in the outer regions of the belt.
But, as the two Nature studies show, we can now go further and relate a specific meteorite type to its specific source asteroid in the main belt.
Artist’s graphic of the asteroid belt between Mars and Jupiter. NASA/McREL
One family of asteroids
The two new studies place the sources of ordinary chondrite types into specific asteroid families – and most likely specific asteroids. This work requires painstaking back-tracking of meteoroid trajectories, observations of individual asteroids, and detailed modelling of the orbital evolution of parent bodies.
The study led by Miroslav Brož reports that ordinary chondrites originate from collisions between asteroids larger than 30 kilometres in diameter that occurred less than 30 million years ago.
The Koronis and Massalia asteroid families provide appropriate body sizes and are in a position that leads to material falling to Earth, based on detailed computer modelling. Of these families, asteroids Koronis and Karin are likely the dominant sources of H chondrites, while the Massalia and Flora families are by far the main sources of L- and LL-like meteorites respectively.
The study led by Michaël Marsset further documents the origin of L chondrite meteorites from Massalia.
It compiled spectroscopic data – that is, characteristic light intensities which can be fingerprints of different molecules – of asteroids in the belt between Mars and Jupiter. This showed that the composition of L chondrite meteorites on Earth is very similar to that of the Massalia family of asteroids.
The scientists then used computer modelling to show that an asteroid collision that occurred roughly 470 million years ago formed the Massalia family. Serendipitously, this collision also resulted in abundant fossil meteorites in Ordovician limestones in Sweden.
By determining the source asteroid bodies, these reports provide the foundations for missions to visit the asteroids responsible for the most common outer space visitors to Earth. In understanding these source asteroids, we can view the events that shaped our planetary system.
Trevor Ireland receives funding from the Australian Research Council for research into the samples returned by the Hayabusa and OSIRIS-REx missions. He is a past President of the Meteoritical Society, the international organisation responsible for classifying and cataloguing meteorites.
Oct. 16 marks World Food Day, a global initiative drawing attention to the “right to foods for a better life and a better future.” However, Canada’s food and agricultural policies are falling short of this objective.
Canada’s current agricultural policies are not serving the well-being of the public. Canada’s agricultural program payments and subsidies are not aligned with the government’s dietary guidelines and health goals.
Very few agriculture investments go to the production of fruits and vegetables, even though Canadians under-consume them. Instead, financial support overwhelmingly goes to feed crops, agricultural export crops and foods high in saturated fat. This is particularly troubling given the rise of diet- and lifestyle-related diseases in Canada, such as diabetes, obesity, coronary heart disease and high cholesterol.
The health-care costs of diet-related diseases from not meeting the dietary guidelines are at least two per cent of all health-care costs in Canada, with some estimates putting it as high as 19 per cent. Agricultural policy is not just about food; it influences health, the economy and the environment.
Climate change and agriculture
Trying to address greenhouse gas emissions without paying attention to agriculture is like heating your home with the doors and windows open. Agriculture is a big contributor to Canada’s greenhouse gas emissions.
As climate change intensifies, bringing more frequent and severe wildfires, droughts, floods and heat domes, agriculture is being affected. Instability in weather patterns threatens regional and global social stability and may require Canada to rethink the dominant role of international trade in shaping its current agricultural policies.
Despite these concerns, Canada is not investing strategically or sufficiently in agriculture. Even with $12.5 billion in annual agricultural supports, a surprising portion of Canadian farmers continue to struggle financially. According to the National Farmers Union:
“Over the last three decades, the agribusiness corporations that supply fertilizers, chemicals, machinery, fuels, technologies, services, credit, and other materials and services have captured 95 per cent of all farm revenues, leaving farmers just five per cent.”
In 2016, 66 per cent of all farms in Canada were in the revenue class of $10,000 to $249,999. On average, these farms had expenses exceeding their revenue by a large margin.
While Canada spends a large share of its budget on addressing the negative outcomes of how we produce and consume food, there remain greater opportunities for investing in preventive measures that promote a healthier, more sustainable food system. Canada’s 20th century agriculture policy regime is woefully insufficient for the challenges of the 21st century.
Food outlets and school cafeterias can play a role in reducing inefficiencies in the food system, like food waste, and improving sustainability by promoting healthy eating. To make this happen, schools need more resources and autonomy to counter misinformation about food and position Canadians for success by making healthy choices attractive.
To truly make an impact, local food movements must be part of a larger, co-ordinated effort supported by policies that align agricultural production with healthy diets.
A new approach to food policies that considers them from a holistic perspective, beyond GDP, and respects farmers while creating food systems based on the One Planet and One Health frameworks is needed.
It’s important to recognize that farmers are not just business operators; they are our neighbours, and are integral to our communities. Supporting them with better policies and giving everyone equitable access to nourishing and sustainable foods will ensure a healthier, more resilient future for all Canadians.
Canada needs to provide stronger support for family farms practising agroecologically sound production methods. Government programs that support greater production and purchasing of grains, fruits and vegetables for direct human consumption are also needed. These initiatives would reduce Canada’s reliance on imports of these critical foods.
In addition, federal and municipal governments should strengthen and broaden Canada’s bioregional food systems while also fostering the growth of small- and medium-sized food businesses. It’s also important to reduce the political and market power of oligopolies in Canada’s food system.
In fact, smaller-scale agroecological farmers operating in bioregional food systems are key. Achieving our broader societal goals means thinking of food through agriculture, human health and environmental sustainability lenses.
Canada needs a new vision of agriculture that connects health and environment goals with sustainable diets and prosperous family farming. This vision must prioritize nutritious diets, human and environmental health, and the overall well-being of society beyond profits, market share and food exports. Also it must be formed collectively by decision-makers, farmers, food processors, community groups and the public.
In Canada, governments, organizations and citizens must work together to create a food system vision for Canada, much like Food Secure Canada’s Resetting the Table process previously did.
Further collaboration among agriculture, environment and health professionals can arise from these efforts, as can be seen with Canada’s National School Food program, which is aligning local farmers and suppliers of local options to meet Canada’s Food Guide. This is also an opportunity for Canada’s Food Policy Advisory Council to gain greater influence in shaping policy.
Just as calls for health-care reform often focus on improving services, Canadians have the right to expect better outcomes from agricultural subsidies. By prioritizing economic, environmental and public health sustainability, Canada can ensure its agricultural policy is fit for its 21st-century food system.
Kathleen Kevany received funding from Protein Industries Canada. She is an advisor to Farm to Cafeteria Canada.
Howard Nye receives funding from the Social Sciences and Humanities Research Council of Canada. He is a board member and research lead for Canadians for Responsible Food Policy.
Mark Kent Mullinix receives funding from the Social Sciences and Humanities Research Council, Agriculture and Agri-Food Canada, the Government of British Columbia and various foundations.
Talan B. Iscan is a project lead and receives funding from MacEachen Institute for Public Policy and Governance at Dalhousie University. He is a board member with the Halifax Cycling Coalition, a non-profit.
Source: The Conversation (Au and NZ) – By John Hawkins, Senior Lecturer, Canberra School of Politics, Economics and Society, University of Canberra
Lucky Loser tells the story of Donald Trump’s less-than-stellar business career and how he was able to misrepresent it as a success.
It is written by New York Times investigative journalists Russ Buettner and Susanne Craig. Both have won Pulitzer Prizes for earlier analyses of Trump. Another badge of honour: Trump sued them – and lost.
They are by no means the first writers to expose the Potemkin village that is Trump’s business empire. A telling insider account came from Trump’s niece, psychologist Mary Trump, who revealed that the creator of Donald’s fortune was his father, Fred.
Lucky Loser: How Donald Trump Squandered His Father’s Fortune and Created the Illusion of Success – Russ Buettner and Susanne Craig (Bodley Head)
Setting things straight
However, at more than 500 pages, including more than 40 pages of notes on sources, this new book is the most comprehensive rendering. It is detailed, clearly written and has been well-reviewed in the financial press and by economic historian Brad de Long.
The authors aim to draw on financial statements and interviews to “set straight Donald Trump’s chaotic onslaught of untruths and misdirection”.
A large part of the Trump mythology is the lie that he is a self-made billionaire. In the presidential debate with Hillary Clinton, Trump sought to downplay the contribution of his father, saying “my father gave me a very small loan”. The book reveals his father’s contribution, in today’s money, was around half a billion US dollars.
Trump’s first piece of luck was being born the son of hard-working, cautious and competent residential property developer Fred Trump, the son of a German immigrant. His second was that Fred’s eldest son did not have the ruthless drive to become Fred’s successor, and Fred did not consider his daughters as potential successors. So despite some characteristics that were the antithesis of his father’s, Donald became his heir.
The book describes Fred’s career in some detail. The first hundred pages are mostly about him. Once Fred stepped back, Trump diversified his father’s company to form what the authors term “an eclectic conglomerate untethered from any core competency”.
Another piece of luck was being chosen to star in the reality television series The Apprentice, from which he made a lot of money, including from licensing deals, for the small amount of time he spent on it.
The producers of this series have a lot to answer for, as they wanted to present their star as the astute businessman they knew him not to be. As they said, it was “not a documentary”. But it enormously and misleadingly raised Trump’s profile.
Wins followed by losses
The authors describe how some of Trump’s ventures, such as the development of Trump Tower, went well as the Manhattan property market boomed. He also profited from some “greenmailing” (buying shares in a company with the stated or implied intention of taking it over and then selling the shares at a higher price), facilitated by exaggerated accounts in the media of his wealth.
But Trump used up much of the proceeds of his few successes covering his losses on a range of his other business ventures.
Among his notable failures was Trump University, where he paid A$37 million to settle lawsuits for fraud. Many other ventures – property projects, Scottish golf courses, Trump Ice bottled water and Trump Mortgage – never turned a profit. And the punters were not the only ones losing money in Trump casinos.
While he has fought to keep them secret, what has emerged from Trump’s tax returns is a series of huge losses.
A conundrum not really addressed in the book is why so many bankers were willing to lend to him.
The book concentrates on Trump’s career before the 2016 election, when the flawed US electoral system turned his almost 3 million vote loss on the popular vote into a win in the electoral college. As president, he disregarded conflicts of interest. As the authors note, parties wanting to influence the president could funnel money to him by booking blocks of rooms at his hotel.
After 81 million Americans voted to fire him in 2020, Trump’s businesses again performed poorly.
Trump’s current wealth is estimated by Forbes at A$5.7 billion (less than it was a decade ago). But about half of this is from his majority stake in Truth Social, promoted as a right-wing alternative to Twitter. (Now, it could be said, an even more right-wing forum than X.) It has tiny and falling revenues and makes large losses. If Trump loses the election, its value will probably soon be close to zero. It is regarded as a “meme stock”.
Buettner and Craig conclude Trump “would have been better off betting on the sharemarket than on himself”. Analysis cited in The Economist in 2018 concluded that had Trump just put the money from his father into a sharemarket index fund he would have had A$2.9 billion in 2018. Given subsequent rises in the US stockmarket that would have grown to around A$5.9 billion by now, more than most estimates of his wealth.
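The counterfactual cited above is simple compound arithmetic. As a rough sanity check, the article’s two figures (A$2.9 billion in 2018, roughly A$5.9 billion now) imply a sharemarket return of about 12–13% a year over six years; the rate in the sketch below is a back-of-envelope assumption chosen to connect the two figures, not a number from the book:

```python
# Minimal sketch: does ~12.6% a year for six years take A$2.9bn to ~A$5.9bn?
# The 0.126 annualised return is an assumed illustrative figure.
def compound(principal: float, annual_rate: float, years: int) -> float:
    """Grow `principal` at `annual_rate` compounded annually for `years` years."""
    return principal * (1 + annual_rate) ** years

value_2018 = 2.9e9                       # A$2.9 billion in 2018
implied_now = compound(value_2018, 0.126, 6)  # 2018 -> 2024
print(f"A${implied_now / 1e9:.1f} billion")   # ≈ A$5.9 billion
```

On those assumptions the two figures are mutually consistent, which is all the check is meant to show.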
Forbes reached a similar conclusion, as did de Long and US political commentator Professor Robert Reich. The self-described business genius destroyed rather than created value.
A poor tycoon and a poor president
This business record of mismanaging an inheritance is reflected in Trump’s economic performance as president. He inherited the world’s largest economy from Obama. By the end of his term it was more than 10% smaller than China’s economy in purchasing power parity terms. Historians rank him one of the worst-performing presidents on economic management (and much else). The public gave him the lowest approval ratings during his presidential term.
Trump has indeed been a “lucky loser”. But if this deeply flawed man is returned to the presidency, the world will be an unlucky loser.
John Hawkins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
When Canada legalized recreational cannabis use on Oct. 17, 2018, there were concerns about the potential impacts. Would it trigger greater cannabis use, boost economic growth or otherwise affect the country’s health, safety and finances?
Patients already using cannabis legally for medical purposes were especially concerned. They worried that recreational legalization might prompt physicians to stop authorizing cannabis treatments. Or that cannabis producers would abandon the small medical market to pursue the larger recreational one.
As someone who studies the business aspects of cannabis legalization, I wondered about these issues, too. It wasn’t clear how patients, producers or health-care providers would react to recreational legalization. Legal medical use itself had only become accessible a few years earlier.
Accessing medical cannabis
Canada began allowing medical use of cannabis in 1999. But it remained difficult to get until regulations changed during 2014-15.
The new rules allowed any physician to authorize patients to use cannabis. Those patients could then register to buy products online from licensed cannabis producers. Online orders could not exceed a 30-day supply.
(Some patients grew their own plants instead of buying cannabis products. My research hasn’t examined that.)
Under this new procedure, the number of patients registering to buy cannabis soared. They grew from 7,914 in June 2014 to 330,344 in June 2018, nearly one per cent of Canada’s population.
However, registration levels differed greatly between provinces. In June 2018, registrations represented almost three per cent of Alberta’s population, versus only 0.1 per cent of Québec’s.
Interestingly, less than half of registrants bought medical cannabis in any given month. Perhaps they simply didn’t need the full dose. Or maybe they found it too expensive, inconvenient or ineffective.
June 2018 was also when the federal government passed its new cannabis legislation. The law took effect in October 2018, when recreational sales of dried cannabis and cannabis oils began. After initial product shortages were overcome, recreational cannabis sales grew rapidly as more stores opened, even during the COVID-19 pandemic. Consumer choice expanded in December 2019 when edibles and vapes became available.
This is where my new study came in. I analyzed government data on patients’ use of Canada’s medical cannabis system between 2017 and 2022. This included how many patients registered, how often they placed orders, and how much cannabis they bought.
Evolving system usage
I found that as soon as parliament passed the new cannabis law, medical registrations began slowing down, despite recreational legalization still being four months away.
But the response differed noticeably between provinces. For example, registrations kept growing steadily in Québec but plummeted rapidly in Alberta. Other provinces were in between.
My data doesn’t say why those changes occurred. Perhaps Alberta, with its copious cannabis clinics, had many patients only mildly interested in using cannabis medically. Conversely, maybe Québec was still catching up with other provinces on medical use.
When recreational sales started in October 2018, patient registrations seemed unaffected. Their average purchase sizes didn’t change either. But they bought medical cannabis slightly less often.
This might have been due to retail convenience. At that time, medical producers and recreational stores were selling similar products: dried cannabis and cannabis oils. So, perhaps some patients started topping up their supplies occasionally at recreational stores but saw no reason to leave the online medical system completely.
When edibles and other processed products began selling in December 2019, registrations dropped further. But the patients who remained bought medical cannabis slightly more often and in increasingly larger quantities.
Product selections might explain this patient split. Perhaps producers with good edible products retained their customers and received larger orders from them. Conversely, maybe medical producers offering few edibles lost their patients to the recreational shops and their vast product assortments.
In summary, Canada’s medical cannabis system experienced big changes after recreational legalization. But it didn’t disappear.
Will other countries see similar outcomes if they allow recreational cannabis?
A changing world
In Europe, for example, the Netherlands is experimenting with recreational sales. Meanwhile, Germany has legalized recreational use but not retail sales. Will those countries experience medical cannabis changes like Canada did?
Other countries, like Australia and New Zealand, are somewhere in between. They’re seeing rapid growth in legal medical use and illegal recreational use, but haven’t legalized recreational sales. That’s roughly where Canada was 10 years ago.
Will Canada’s medical and recreational cannabis experiences make these other countries more interested in legalization, or less? Either way, I hope they can learn from our experiences as they chart their own cannabis paths.
Michael J. Armstrong does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
By the end of 2024, nearly 200 nations will have met at three conferences to address three problems: biodiversity loss, climate change and plastic pollution.
Colombia will host talks next week to assess global progress in protecting 30% of all land and water by 2030. Hot on its heels is COP29 in Azerbaijan. Here, countries will revisit the pledge they made last year in Dubai to “transition away” from the fossil fuels driving climate breakdown. And in December, South Korea could see the first global agreement to tackle plastic waste.
Don’t let these separate events fool you, though.
“Climate change, biodiversity loss and resource depletion are not isolated problems,” say biologist Liette Vasseur (Brock University), political scientist Anders Hayden (Dalhousie University) and ecologist Mike Jones (Swedish University of Agricultural Sciences).
“How hot is it going to get? This is one of the most important and difficult remaining questions about our changing climate,” say two scientists who study climate change, Seth Wynes and H. Damon Matthews at the University of Waterloo and Concordia University respectively.
The answer depends on how sensitive the climate is to greenhouse gases like CO₂ and how much humanity ultimately emits, the pair say. When Wynes and Matthews asked 211 authors of past reports by the Intergovernmental Panel on Climate Change, their average best guess was 2.7°C by 2100.
“We’ve already seen devastating consequences like more flooding, hotter heatwaves and larger wildfires, and we’re only at 1.3°C above pre-industrial levels — less than halfway to 2.7°C,” they say.
There is a third variable that is harder to predict but no less important: the capacity of forests, wetlands and the ocean to continue to offset warming by absorbing the carbon and heat our furnaces and factories have released.
This blue and green carbon pump stalled in 2023, the hottest year on record, amid heatwaves, droughts and fires. The possibility of nature’s carbon storage suddenly collapsing is not priced into the computer models that simulate and project the future climate.
However, the ecosystems that buffer human-made warming are clearly struggling. A new report from the World Wildlife Fund (WWF) showed that the average size of monitored populations of vertebrate wildlife (animals with spinal columns – mammals, birds, fish, reptiles and amphibians) has shrunk by 73% since 1970.
Wildlife could become so scarce that ecosystems like the Amazon rainforest degenerate, according to the report.
“More than 90% of tropical trees and shrubs depend on animals to disperse their seeds, for example,” says biodiversity scientist Alexander Lees (Manchester Metropolitan University).
The result could be less biodiverse and, importantly for the climate, less carbon-rich habitats.
Plastic in a polar bear’s gut
Threats to wildlife are numerous. One that is growing fast and still poorly understood is plastic.
Bottles, bags, toothbrushes: a rising tide of plastic detritus is choking and snaring wild animals. These larger items eventually degrade into microplastics, tiny fragments which now suffuse the air, soil and water.
“In short, microplastics are widespread, accumulating in the remotest parts of our planet. There is evidence of their toxic effects at every level of biological organisation, from tiny insects at the bottom of the food chain to apex predators,” says Karen Raubenheimer, a senior lecturer in plastic pollution at the University of Wollongong.
Plastic is generally made from fossil fuels, the main agent of climate change. Activists and experts have seized on a similar demand to address both problems: turn off the taps.
In fact, the diagnosis of Costas Velis, an expert in ocean litter at the University of Leeds, sounds similar to what climate scientists say about unrestricted fossil fuel burning:
“Every year without production caps makes the necessary cut to plastic production in future steeper – and our need to use other measures to address the problem greater.”
A production cap hasn’t made it into the negotiating text for a plastic treaty (yet). And while governments pledged to transition away from coal, oil and gas last year, a new report on the world’s energy use shows fossil fuel use declining more slowly than in earlier forecasts – and much more slowly than would be necessary to halt warming at internationally agreed limits. The effort to protect a third of earth’s surface has barely begun.
Each summit is concerned with ameliorating the effects of modern societies on nature. Some experts argue for a more radical interpretation.
“Even if 30% of Earth was protected, how effectively would it halt biodiversity loss?” ask political ecologists Bram Büscher (Wageningen University) and Rosaleen Duffy (University of Sheffield).
“The proliferation of protected areas has happened at the same time as the extinction crisis has intensified. Perhaps, without these efforts, things could have been even worse for nature,” they say.
“But an equally valid argument would be that area-based conservation has blinded many to the causes of Earth’s diminishing biodiversity: an expanding economic system that squeezes ecosystems by turning ever more habitat into urban sprawl or farmland, polluting the air and water with ever more toxins and heating the atmosphere with ever more greenhouse gas.”