Category: Academic Reportage

  • Survivors’ voices 80 years after Hiroshima and Nagasaki sound a warning and a call to action

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Masako Toki, Senior Education Project Manager and Research Associate, Nonproliferation Education Program, Middlebury

    Supporters of nuclear disarmament, including Hibakusha, demonstrate in Oslo, Norway, in 2024. Hideo Asano, CC BY-ND

    Eighty years ago, in August 1945, the cities of Hiroshima and Nagasaki were incinerated by the first and only use of nuclear weapons in war. By the end of that year, approximately 140,000 people had died in Hiroshima and 74,000 in Nagasaki.

    Those who survived – known as Hibakusha – have carried their suffering as living testimony to the catastrophic humanitarian consequences of nuclear war, with one key wish: that no one else will suffer as they have.

    Now, in 2025, as the world marks 80 years of remembrance since those bombings, the voices of the Hibakusha offer not only memory, but also moral clarity in an age of growing peril.

    As someone who focuses on nuclear disarmament and has heard Hibakusha testimonies in my native Japanese, I have enthusiastically promoted disarmament education grounded in their voices and experience. I believe their message is more vital than ever at a time of rising nuclear risk. Nuclear threats have reemerged in global discourse, breaking long-standing taboos against even talking about their use. From Russia and Europe to the Middle East and East Asia, the possibility of nuclear escalation is no longer unthinkable.

    Amid a landscape of rubble, a partially destroyed building stands, with the skeleton of a metal dome atop a tower.
    The Hiroshima Prefectural Industrial Promotion Hall was one of the few buildings not totally demolished in the Aug. 6, 1945, U.S. atomic bombing of Japan.
    Universal History Archive/Universal Images Group via Getty Images

    Japan’s deepening reliance on deterrence

    Ironically, increasing nuclear threats are contributing to further reliance on nuclear deterrence, the strategy of preventing attack by threatening nuclear retaliation, rather than renewed efforts toward nuclear disarmament, which seeks to eliminate nuclear weapons entirely.

    Nowhere is this contradiction more visible than in Japan. While the Hibakusha have long stood as global advocates for nuclear abolition, Japan’s approach to national security has placed growing emphasis on the role of nuclear deterrence.

    In the face of regional threats, the Japanese government has strengthened its dependence on U.S. nuclear protection – even as the survivors of Hiroshima and Nagasaki warn not only of the dangers of relying on nuclear weapons for security, but also of the profound moral failure such reliance represents.

    Masako Wada, a survivor of the 1945 atomic bomb attack on Nagasaki, speaks about the risk of nuclear weapons in the 21st century.

    Listen to Hibakusha voices

    For eight decades, the Hibakusha have shared their stories to prevent future tragedy – not to assign blame, but to awaken conscience and spark action.

    Masako Wada, assistant secretary general of Nihon Hidankyo, a nationwide organization of atomic bomb survivors working for the abolition of nuclear weapons, was just under 2 years old when the atomic bomb was dropped on Nagasaki. Her home, 1.8 miles from the blast center, was shielded by surrounding mountains, sparing her from burns or injury. Though too young to remember the bombing herself, she grew up hearing about it from her mother and grandfather, who witnessed the devastation firsthand.

    In July 2025 at a nuclear risk reduction conference in Chicago, Wada told the attendees:

    “The risk of using nuclear weapons has never been higher than it is now. … Nuclear deterrence, which intimidates other countries by possessing nuclear weapons, cannot save humanity.”

    In a piece she wrote for Arms Control Today that same month, she further implored:

    “The Hibakusha are the ones who know the humanitarian consequences of the use of nuclear weapons. We will continue to convey that reality. Please listen to us, please empathize with us. Find out what you can do and take action together with us. Nuclear weapons cannot coexist with human beings. They were created by humans; let us assume the responsibility to abolish them with the wisdom of public conscience.”

    This plea – rooted in lived experience and moral responsibility – was recognized globally when the 2024 Nobel Peace Prize was awarded to Nihon Hidankyo. The award honored not only the survivors’ suffering, but their decades-long commitment to preventing future use of nuclear weapons through education, activism and testimony.

    A concrete building with no windows and a metal skeleton of a dome atop a tower stand against a blue sky.
    The Hiroshima Peace Memorial stands as it has since 1945, partially destroyed by the atomic bomb blast and serving as a reminder of the 140,000 people who died in the attack and its aftermath.
    Masako Toki, CC BY-ND

    A dwindling number

    But time is running out. Most Hibakusha were children or young adults in 1945. Today, their average age is over 86. In March 2025, the number of officially recognized Hibakusha fell below 100,000, according to Japan’s Ministry of Health.

    As Terumi Tanaka, a Nagasaki survivor and longtime leader of Nihon Hidankyo, said at the Nobel Peace Prize ceremony:

    “Ten years from now, there may only be a handful of us able to give testimony as firsthand survivors. From now on, I hope that the next generation will find ways to build on our efforts and develop the movement even further.”

    Terumi Tanaka, a survivor of the 1945 atomic bomb attack on Nagasaki, delivers the 2024 Nobel Peace Prize lecture.

    The role of empathy in disarmament education

    Empathy is not a luxury in disarmament education – it is a necessity. Without it, nuclear weapons remain abstract. With it, they become personal, real and morally unacceptable.

    That’s why disarmament education begins with human stories. The Hibakusha testimonies illuminate not only the physical destruction caused by nuclear weapons, but also the long-term trauma, discrimination and intergenerational pain that follow. They remind us that nuclear policy is not just a matter of strategy – it is a question of human survival. Nuclear weapons are the only weapons ever created with the power to annihilate all of humanity – and that makes disarmament not just a political issue, but a moral imperative.

    Yet opportunities for young people to learn about nuclear risks, or hear from the Hibakusha directly, are extremely limited. In most countries, these issues are absent from school and university classrooms. This lack of education feeds ignorance and, in turn, complacency – allowing the flawed logic of deterrence to remain unchallenged.

    Disarmament education that puts empathy and ethics at its center, along with survivors’ voices, can empower the next generation not only with knowledge, but with moral strength to choose their path.

    A person stands at a lectern in front of a screen with photos and text reading 'As long as I could see, all the roof tiles had been blown to one side. The green of the mountain that surround the city was gone. They were brown mountains now.'
    Masako Wada, a Hibakusha who survived the U.S. bombing of Nagasaki in August 1945, speaks at a church in California in 2019, spreading the message of the horror of the attack and its aftermath, and urging people to promote nuclear disarmament.
    Masako Toki, CC BY-ND

    From remembrance to responsibility

    Commemorating 80 years since the atomic bombings of Hiroshima and Nagasaki is not about history alone. It is about the future. It is about what people choose to remember – and what people choose to do with that memory.

    The Hibakusha have never sought revenge. Their message is clear: This can happen again. But it doesn’t have to.

    The Hibakusha’s journey shows that human beings are not destined to remain divided, nor are they doomed to repeat cycles of destruction. In the face of unimaginable loss, many Hibakusha chose not to dwell on anger or seek retribution, but instead to speak out for the good of all humanity. Their activism has been marked not by bitterness, but by an unwavering commitment to peace, empathy and the prevention of future suffering. Rather than directing their pain toward blame, they have transformed it into a powerful appeal to conscience and global solidarity. Their concern has never been only for Japan – but for the future of the entire human race.

    That moral clarity, grounded in lived experience, remains profoundly instructive. In a world increasingly filled with conflict and fear, I believe there is much to learn from the Hibakusha. Their testimony is not just a warning – it is a guide.

    I try to listen, and urge others, as well, to truly listen to what they have to say. I seek the company of people who also refuse complacency, question the legitimacy of nuclear deterrence, and work for a future where human dignity, not mutual destruction, defines human security.

    The Conversation

    Masako Toki does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Survivors’ voices 80 years after Hiroshima and Nagasaki sound a warning and a call to action – https://theconversation.com/survivors-voices-80-years-after-hiroshima-and-nagasaki-sound-a-warning-and-a-call-to-action-262174

  • The case that saved the press – and why Trump wants it gone


    Source: The Conversation – USA – By Stephanie A. (Sam) Martin, Frank and Bethine Church Endowed Chair of Public Affairs, Boise State University

    Donald Trump wants to restrict journalists’ ability to publish or broadcast critical stories. Mesh cube, iStock/Getty Images Plus

    President Donald Trump is again attacking the American press – this time not with fiery rally speeches or by calling the media “the enemy of the people,” but through the courts.

    Beginning in the heat of the November 2024 election and continuing into July 2025, Trump has filed defamation lawsuits against “60 Minutes” broadcaster CBS News and The Wall Street Journal. He has also sued the Des Moines Register for publishing a poll just before the 2024 election that Trump alleges exaggerated support for Democratic candidate Kamala Harris and thus constituted election interference and fraud.

    These are in addition to other lawsuits Trump filed against the news media during his first term and during his years out of office between 2021 and 2025.

    At the heart of Trump’s complaints is a familiar refrain: The media is not only biased, but dishonest, corrupt and dangerous.

    The president isn’t just upset about reporting on him that he thinks is unfair. He wants to redefine what counts as libel and make it easier for public officials to sue for damages. A libel suit is a civil tort claim seeking damages when a person believes something false has been printed or broadcast about them and so harmed their reputation.

    Redefining libel in this way would require overturning the Supreme Court’s 1964 ruling in New York Times Co. v. Sullivan, one of the most important First Amendment rulings in American constitutional history.

    Trump made overturning Sullivan a talking point during his first campaign for president; his lawsuits now put that threat into action. And they raise the question: What happened in Sullivan, and why does it still matter?

    President Donald Trump discusses U.S. libel laws on Jan. 10, 2018, calling them a ‘sham’ and a ‘disgrace’ during comments to reporters at the White House.

    What Sullivan was about

    As chair of a public policy institute devoted to strengthening deliberative democracy, I have written two books about the media and the presidency, and another about media ethics. My research traces how news institutions shape civic life and why healthy democracies rely on free expression.

    In 1960, The New York Times published a full-page advertisement titled “Heed Their Rising Voices.” The ad, which included an appeal for readers to send money in support of Martin Luther King Jr. and the movement against Jim Crow, described brutal and unjust treatment of Black students and protesters in Montgomery, Alabama. It also emphasized episodes of police violence against peaceful demonstrations.

    The ad was not entirely accurate in its description of the behavior of either protesters or the police.

    It claimed, for instance, that activists had sung “My Country ’Tis of Thee” on the steps of the state capitol during a rally, when they actually had sung the national anthem. It said that “truckloads of police armed with shotguns and tear-gas” had “ringed” a college campus, when the police had only been deployed nearby. And it asserted that King had been arrested seven times in Alabama, when the real number was four.

    Though the ad did not identify any individual public officials by name, it disparaged the behavior of Montgomery police.

    That’s where L.B. Sullivan came in.

    As Montgomery’s police commissioner, he oversaw the police department. Sullivan claimed that because the ad maligned the conduct of law enforcement, it had implicitly defamed him. In 1960 in Alabama, a primary defense against libel was truth. But since there were mistakes in the ad, a truth defense could not be raised. Sullivan sued for damages, and an Alabama jury awarded him US$500,000, equivalent to $5,450,000 in 2025.
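The inflation adjustment quoted above can be sanity-checked with a simple consumer price index ratio. The CPI values below are approximate annual-average figures assumed for illustration; they are not from the article.

```python
# Rough sanity check of the inflation-adjusted jury award, using
# approximate U.S. CPI-U levels (assumed values, not from the article).
CPI_1960 = 29.6   # approximate annual-average CPI-U for 1960
CPI_2025 = 322.0  # approximate CPI-U level for 2025

award_1960 = 500_000
award_2025 = award_1960 * CPI_2025 / CPI_1960

print(f"${award_1960:,} in 1960 ≈ ${award_2025:,.0f} in 2025 dollars")
```

The ratio works out to roughly 10.9, consistent with the $5,450,000 figure quoted in the text.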

    The message to the press was clear: criticize Southern officials and risk being sued out of existence.

    In fact, the Sullivan lawsuit was not an isolated incident, but part of a broader strategy. In addition to Sullivan, four other Montgomery officials filed suits against the Times.

    In Birmingham, public officials filed seven libel lawsuits over Times reporter Harrison Salisbury’s trenchant reporting about racism in that city. The lawsuits helped push the Times to the edge of bankruptcy. Salisbury was even indicted for seditious libel and faced up to 21 years in prison.

    Alabama officials also sued CBS, The Associated Press, the Saturday Evening Post and Ladies’ Home Journal – all for reporting on civil rights and the South’s brutal response.

    Four men in suits standing together and smiling.
    Montgomery, Ala., Police Commissioner L.B. Sullivan, second left, and his attorneys celebrate his $500,000 libel suit victory in a county court on Nov. 3, 1960.
    Bettmann/Getty Images

    The Supreme Court decision

    The jury’s verdict in favor of Sullivan was unanimously overturned by the Supreme Court in 1964.

    Writing for the court, Justice William Brennan held that public officials cannot prevail in defamation lawsuits merely by showing that statements are false. Instead, they must prove such statements are made with “actual malice.” Actual malice means a reporter or press outlet knew their story was false or else acted with reckless disregard for the truth.

    The decision set a high bar.

    Before the ruling, the First Amendment’s protections for speech and the press didn’t offer much help to the press in libel cases.

    After it, public officials who wanted to sue the press would have to prove “actual malice” – real, purposeful untruths that caused harm. Honest mistakes weren’t enough to prevail in such lawsuits. The court held that errors are inevitable in public debate and that protecting those mistakes is essential to keeping debate open and free.

    Nonviolent protest and the press

    In essence, the court ruling blocked government officials from suing for libel with ulterior motives.

    King and other civil rights leaders relied on a strategy of nonviolent protest to expose injustice through public, visible actions.

    When protesters were arrested, beaten or hosed in the streets, their goal was not chaos – it was clarity. They wanted the nation to see what Southern oppression looked like. For that, they needed press coverage.

    If Sullivan’s lawsuit had succeeded, it could have bullied the press away from covering civil rights altogether. The Supreme Court recognized this danger.

    Public officials treated differently

    Another key element of the court’s reasoning was its distinction between public officials and private citizens.

    Elected leaders, the court said, can use mass media to defend themselves in ways ordinary people cannot.

    “The public official certainly has equal if not greater access than most private citizens to media of communication,” Justice Brennan wrote in the Sullivan ruling.

    Trump is a perfect example of this dynamic. He masterfully uses social media, rallies, televised interviews and impromptu remarks to push back. He doesn’t need the courts.

    Giving public officials the power to sue over news stories they dislike could well create a chilling effect on the media that undermines government accountability and distorts public discourse.

    “The theory of our Constitution is that every citizen may speak his mind and every newspaper express its view on matters of public concern and may not be barred from speaking or publishing because those in control of government think that what is said or written is unwise,” Brennan wrote.

    “In a democratic society, one who assumes to act for the citizens in an executive, legislative, or judicial capacity must expect that his official acts will be commented upon and criticized.”

    Why Sullivan still matters

    The Sullivan ruling is more than a legal doctrine. It is a shared agreement about the kind of democracy Americans aspire to. It affirms a press duty to hold power to account, and a public right to hear facts and information that those in power want to suppress.

    The ruling protects the right to criticize those in power and affirms that the press is not a nuisance, but an essential part of a functioning democracy. It ensures that political leaders cannot insulate themselves from scrutiny by silencing their critics through intimidation or litigation.

    Trump’s lawsuits seek to undo these press protections. He presents himself as the victim of a dishonest press and hopes to use the legal system to punish those he perceives to be his detractors.

    The decision in the Sullivan case reminds Americans that democracy doesn’t depend on leaders who feel comfortable. It depends on a public that is free to speak.

    The Conversation

    Stephanie A. (Sam) Martin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The case that saved the press – and why Trump wants it gone – https://theconversation.com/the-case-that-saved-the-press-and-why-trump-wants-it-gone-261821

  • 2 spacecraft flew exactly in line to imitate a solar eclipse, capture a stunning image and test new tech


    Source: The Conversation – USA – By Christopher Palma, Teaching Professor of Astronomy & Astrophysics, Penn State

    The solar corona, as viewed by Proba-3’s ASPIICS coronagraph. ESA/Proba-3/ASPIICS/WOW algorithm, CC BY-SA

    During a solar eclipse, astronomers who study heliophysics are able to study the Sun’s corona – its outer atmosphere – in ways they are unable to do at any other time.

    The Sun’s disk is so bright that it overwhelms the faint light from the corona, making the corona invisible to most of the instruments astronomers use. The exception is when the Moon blocks the Sun, casting a shadow on the Earth during an eclipse. But as an astronomer, I know eclipses are rare, they last only a few minutes, and they are visible only along narrow paths across the Earth. So, researchers have to work hard to get their equipment to the right place to capture these short, infrequent events.

    In their quest to learn more about the Sun, scientists at the European Space Agency have built and launched a new probe designed specifically to create artificial eclipses.

    Meet Proba-3

    This probe, called Proba-3, works just like a real solar eclipse. One spacecraft, which is roughly circular when viewed from the front, orbits closer to the Sun, and its job is to block the bright parts of the Sun, acting as the Moon would in a real eclipse. It casts a shadow on a second probe that has a camera capable of photographing the resulting artificial eclipse.

    An illustration of two spacecraft, one which is spherical and moves in front of the Sun, another that is box-shaped facing the Sun.
    The two spacecraft of Proba-3 fly in precise formation about 492 feet (150 meters) apart.
    ESA-P. Carril, CC BY-NC-ND

    Having two separate spacecraft flying independently but in such a way that one casts a shadow on the other is a challenging task. But future missions depend on scientists figuring out how to make this precision choreography technology work, and so Proba-3 is a test.

    This technology is helping to pave the way for future missions that could include satellites that dock with and deorbit dead satellites or powerful telescopes with instruments located far from their main mirrors.

    The side benefit is that researchers get to practice by taking important scientific photos of the Sun’s corona, allowing them to learn more about the Sun at the same time.

    An immense challenge

    The two satellites launched in 2024 and entered orbits that approach Earth as close as 372 miles (600 kilometers) – that’s about 50% farther from Earth than the International Space Station – and reach more than 37,282 miles (60,000 km) at their most distant point, about one-sixth of the way to the Moon.

    During this orbit, the satellites move at speeds between 5,400 miles per hour (8,690 kilometers per hour) and 79,200 mph (127,460 kph). At their slowest, they’re still moving fast enough to go from New York City to Philadelphia in one minute.

    While flying at that speed, they can control themselves automatically, without a human guiding them, and fly 492 feet (150 meters) apart – a separation longer than the length of a football field – while still keeping their locations aligned to about one millimeter.
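The unit conversions quoted in this section can be checked with the standard mile-to-kilometer and foot-to-meter factors. The mission figures below are the ones quoted in the article itself, not taken from an official ESA source.

```python
# Check the article's unit conversions. The conversion factors are exact
# by definition; the mission figures are quoted from the article.
MI_TO_KM = 1.609344
FT_TO_M = 0.3048

print(f"372 mi     ≈ {372 * MI_TO_KM:,.0f} km")       # closest approach to Earth
print(f"37,282 mi  ≈ {37_282 * MI_TO_KM:,.0f} km")    # most distant point
print(f"5,400 mph  ≈ {5_400 * MI_TO_KM:,.0f} km/h")   # slowest orbital speed
print(f"79,200 mph ≈ {79_200 * MI_TO_KM:,.0f} km/h")  # fastest orbital speed
print(f"492 ft     ≈ {492 * FT_TO_M:.0f} m")          # spacecraft separation

# Distance covered in one minute at the slowest speed, in miles --
# roughly the New York City to Philadelphia distance:
print(f"{5_400 / 60:.0f} miles per minute")
```

All of the quoted pairs agree to within rounding.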

    They needed to maintain that precise flying pattern for hours in order to take a picture of the Sun’s corona, and they did it in June 2025.

    The Proba-3 mission is also studying space weather by observing high-energy particles that the Sun ejects out into space, sometimes in the direction of the Earth. Space weather causes the aurora, also known as the northern lights, on Earth.

    While the aurora is beautiful, solar storms can also harm Earth-orbiting satellites. The hope is that Proba-3 will help scientists continue learning about the Sun and better predict dangerous space weather events in time to protect sensitive satellites.

    The Conversation

    Christopher Palma does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. 2 spacecraft flew exactly in line to imitate a solar eclipse, capture a stunning image and test new tech – https://theconversation.com/2-spacecraft-flew-exactly-in-line-to-imitate-a-solar-eclipse-capture-a-stunning-image-and-test-new-tech-259362

  • Meet ‘lite intermediate black holes,’ the supermassive black hole’s smaller, much more mysterious cousin


    Source: The Conversation – USA – By Bill Smith, Ph.D. Candidate in Physics & Astronomy, Vanderbilt University

    Merging black holes generate gravitational waves, which astronomers can track. SXS, CC BY-ND

    Black holes are massive, strange and incredibly powerful astronomical objects. Scientists know that supermassive black holes reside in the centers of most galaxies.

    And they understand how certain stars, once they reach the end of their lives, collapse to form the comparatively smaller stellar mass black holes. Understanding how stellar mass black holes could grow into supermassive black holes helps astronomers learn about how the universe grows and evolves.

    But there’s an open question in black hole research: What about black holes with masses in between? These black holes, in the range of a few hundred to a few hundred thousand times the mass of the Sun, are much harder to find than their stellar and supermassive peers.

    We’re a team of astronomers who are searching for these in-between black holes, called intermediate-mass black holes. In a new paper, two of us (Krystal and Karan) teamed up with a group of researchers, including postdoctoral researcher Anjali Yelikar, to look at ripples in space-time to spot a few of these elusive black holes merging.

    Take me out to the (gravitational wave) ball game

    To gain an intuitive idea of how scientists detect stellar mass black holes, imagine you are at a baseball game where you’re sitting directly behind a big concrete column and can’t see the diamond. Even worse, the crowd is deafeningly loud, so it is nearly impossible to hear the game, either.

    But you’re a scientist, so you take out a high-quality microphone and your computer and write a computer algorithm that can take audio data and separate the crowd’s noise from the “thunk” of a bat hitting a ball.

    You start recording, and, with enough practice and updates to your hardware and software, you can begin following the game, getting a sense of when a ball is hit, what direction it goes, when it hits a glove, where runners’ feet pound into the dirt and more.

    Admittedly, this is a challenging way to watch a baseball game. But unlike baseball, when observing the universe, sometimes the challenging way is all we have.

    This principle of recording sound and using computer algorithms to isolate certain sound waves to determine what they are and where they are coming from is similar to how astronomers like us study gravitational waves. Gravitational waves are ripples in space-time that allow us to observe objects such as black holes.

    Now imagine implementing a different sound algorithm, testing it over several innings of the game and finding a particular hit that no legal combination of bats and balls could have produced. Imagine the data suggested that the ball was bigger and heavier than a legal baseball could be. If our paper were about a baseball game instead of gravitational waves, that’s what we would have found.
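In signal-processing terms, the “listening” in this analogy is matched filtering: sliding a known template waveform along noisy data and looking for a correlation peak. The sketch below is a minimal illustration with NumPy; the template shape, noise level and injection point are all invented for the example, and real LIGO searches are vastly more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up "chirp" template: a short oscillation whose frequency rises,
# standing in very loosely for a merger waveform (purely illustrative).
t = np.linspace(0, 1, 400)
template = np.sin(2 * np.pi * (5 + 10 * t) * t) * np.exp(-3 * (1 - t))

# Bury the template in a longer stretch of loud random noise.
signal = rng.normal(0.0, 1.0, 4000)
true_start = 2500
signal[true_start:true_start + template.size] += 2.0 * template

# Matched filtering: correlate the template against the data at every
# offset. The correlation peaks where the data best matches the template.
corr = np.correlate(signal, template, mode="valid")
recovered_start = int(np.argmax(corr))

print(f"template injected at {true_start}, recovered near {recovered_start}")
```

Even though the template is invisible to the eye in the raw noisy data, the correlation peak lands at (or within a few samples of) the injection point.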

    Listening for gravitational waves

    While the baseball recording setup is designed specifically to hear the sounds of a baseball game, scientists use a specialized observatory called the Laser Interferometer Gravitational-Wave Observatory, or LIGO, to observe the “sound” of two black holes merging out in the universe.

    An L-shaped facility with two long arms extending out from a central building.
    The LIGO detector in Hanford, Wash., uses lasers to measure the minuscule stretching of space caused by a gravitational wave.
    LIGO Laboratory

    Scientists look for the gravitational waves that we can measure using LIGO, which has one of the most mind-bogglingly advanced laser and optics systems ever created.

    In each event, two “parent” black holes merge into a single, more massive black hole. Using LIGO data, scientists can figure out how far away the merger happened, which direction in the sky it occurred, how massive the parent and resultant black holes are, and other key details.

    Most of the parent black holes in merger events originally form from stars that have reached the end of their lives – these are stellar mass black holes.

    An illustration of a black hole with gas swirling around it, coming from a large cloud around a star on the right.
    This artist’s impression shows a binary system containing a stellar mass black hole called IGR J17091-3624. The strong gravity of the black hole, on the left, is pulling gas away from a companion star on the right.
    NASA/CXC/M.Weiss, CC BY-NC

    The black hole mass gap

    Not every dying star can create a stellar mass black hole. The ones that do are usually between about 20 and 100 times the mass of the Sun. But due to complicated nuclear physics, really massive stars explode differently and don’t leave behind any remnant, black hole or otherwise.

    These processes create what we refer to as the “mass gap” in black holes. A smaller black hole likely formed from a dying star. But we know that a black hole more massive than about 60 times the mass of the Sun, while not a supermassive black hole, is still too big to have formed directly from a dying star.

    The exact cutoff for the mass gap is still somewhat uncertain, and many astrophysicists are working on more precise measurements. However, we are confident that the mass gap exists and that we are in the ballpark of the boundary.

    We call black holes in this gap lite intermediate mass black holes or lite IMBHs, because they are the least massive black holes that we expect to exist from sources other than stars. They are no longer considered stellar mass black holes.
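As a rough illustration of the mass ranges described above, one can sketch a toy classifier. The boundary values are approximate and, as the article notes, still uncertain; they are illustrative assumptions, not settled numbers.

```python
def classify_black_hole(mass_in_suns: float) -> str:
    """Toy classification by mass (in solar masses), using the rough and
    still-uncertain boundaries described in the article. Illustrative only."""
    if mass_in_suns < 60:
        return "stellar mass"       # plausibly formed directly from a dying star
    elif mass_in_suns < 100_000:
        return "intermediate mass"  # the low end of this range is the 'lite IMBH' zone
    else:
        return "supermassive"

for m in (30, 150, 5_000, 1_000_000):
    print(m, "->", classify_black_hole(m))
```

Any black hole falling in the second band cannot have formed directly from a single dying star, which is what makes detections there interesting.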

    Calling them “intermediate” also doesn’t quite capture why they are special. They are special because they are much harder to find, astronomers still aren’t sure what astronomical events might create them, and they fill a gap in astronomers’ knowledge of how the universe grows and evolves.

    Evidence for IMBHs

    In our research, we analyzed 11 black hole merger candidates from LIGO’s third observing run. These candidates were possible gravitational wave signals that looked promising but still needed further analysis to be conclusively confirmed.

    The data suggested that for those 11 we analyzed, their final post-merger black hole may have been in the lite IMBH range. We found five post-merger black holes that our analysis was 90% confident were lite IMBHs.

    Even more critically, we found that one of the events had a parent black hole that was in the mass gap range, and two had parent black holes above the mass gap range. Since we know these black holes can’t come from stars directly, this finding suggests that the universe has some other way of creating black holes this massive.

    A parent black hole this massive may already be the product of two other black holes that merged in the past, so observing more IMBHs can help us understand how often black holes are able to “find” each other and merge out in the universe.

    LIGO is in the end stages of its fourth observing run. Since this work used data from the third observing run, we are excited to apply our analysis to this new dataset. We expect to continue to search for lite IMBHs, and with this new data we will improve our understanding of how to more confidently “hear” these signals from more massive black holes above all the noise.

    We hope this work not only strengthens the case for lite IMBHs in general but helps shed more light on how they are formed.

    The Conversation

    Bill Smith receives funding from an NSF Research Trainee Grant called EMIT.

    Karan Jani is a member of the LIGO Scientific Collaboration.

    Krystal Ruiz-Rocha receives funding from an NSF research grant called EMIT.

    ref. Meet ‘lite intermediate black holes,’ the supermassive black hole’s smaller, much more mysterious cousin – https://theconversation.com/meet-lite-intermediate-black-holes-the-supermassive-black-holes-smaller-much-more-mysterious-cousin-259976

  • Plantation tourism, memory and the uneasy economics of heritage in the American South

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Betsy Pudliner, Associate Professor of Hospitality and Technology Innovation, University of Wisconsin-Stout

    The American South – and the nation more broadly – continues to wrestle with how to remember its most painful chapters. Tourism is one of the arenas where that struggle is most visible.

    This tension came into sharp relief in May 2025, when the largest antebellum mansion in the region – the 19th-century estate at Nottoway Plantation in Louisiana – burned to the ground. While some historians, community members and tourism advocates mourned the loss of a landmark site, many activists and others critical of how the site memorialized slavery celebrated its destruction.

    Soon after the fire, Nottoway’s owner indicated an interest in rebuilding. And within weeks, a new restaurant had opened on a different part of the site. That speed underscores how quickly memory, history and economics can collide – and how tourism sits at the center of that tension.

    As a professor who studies tourism, I know that the impulse to monetize history isn’t new. Six months after the First Battle of Manassas in 1861, the site was already developing as a tourist attraction. People have been traveling to historic sites, buying souvenirs and leaving their mark on the landscape for centuries. That tradition continues, and evolves, today.

    Wealth, slavery and the battle over memory

    Nottoway is one of more than 300 such plantation sites across the country, which together generate billions of dollars in revenue each year. This type of tourism forces communities and visitors alike to ask a difficult question: What parts of the past do Americans preserve, and for whom?

    A local news segment about the Nottoway fire.

    Nottoway, completed in 1859, was built by 155 enslaved people. Blending Greek Revival and Italianate styles, it stood as a monument to wealth built on forced labor and racial exploitation. Over the decades, it passed through different owners, survived the Civil War and was eventually restored and converted into a resort and wedding venue. Critics have long argued that this commercial reinvention downplayed the lives and labor of enslaved people, neglecting the site’s foundations in brutality.

    Beyond its symbolism, Nottoway has long been recognized as a cornerstone of Iberville Parish’s tourism economy. Research shows that sites like Nottoway can anchor regional economies by encouraging longer stays and local spending, which in turn stimulate nearby businesses through the multiplier effect.

    Yet Nottoway’s sociocultural significance was far more complex – as shown by the celebrations that followed the fire. For many, Nottoway was a site of trauma and erasure. With its white columns and manicured lawns, it projected a romanticism that relied on selective memory. For example, as of June 2025, the Nottoway website’s “History” page made no mention of slavery.

    In other words, the fire didn’t just destroy a building. It disrupted a layered ecosystem of economic livelihood, memory and contested meaning.

    Tourism and the power of the past

    To understand why people visit places like Nottoway, it helps to turn to the four main categories of travel motivation: physical, cultural, interpersonal and status. Plantation venues typically draw cultural tourists seeking heritage, history and architecture.

    They also draw those engaged in what scholars call “dark tourism”: traveling to places associated with tragedy and death. While dark tourism may imply voyeurism, many such visits are deeply reflective. These travelers seek to confront hard truths and process collective memory. But if interpretation is selective – focusing on opulence while minimizing suffering – tourism then becomes a force of historical distortion.

    Some tourists choose plantations for a sense of romance, others for education, and still others for reckoning. These motivations complicate how such places should be preserved, interpreted or transformed.

    Over the past decade, innovative sites like the Whitney Plantation have gained national attention for centering the lives and stories of the enslaved, rather than the architecture or planter families. Opened to the public in 2014, Whitney reframed the traditional plantation tour by prioritizing historical truth over nostalgia – featuring first-person slave narratives, memorials and educational programming focused on slavery’s brutality.

    A CBS News report on Whitney Plantation.

    This approach reflects a growing segment of travelers seeking deeper engagement with difficult histories. As Whitney draws visitors for its honesty and restorative framing, it raises a key question: Is the future of plantation tourism splitting into two tracks – one rooted in reflection, the other in romanticism?

    Many Americans still picture the antebellum South through the lens of popular culture – a romanticized vision shaped by novels and films like “Gone with the Wind,” with its iconic Tara plantation. This “Tara effect” continues to influence how plantations are portrayed and remembered, often emphasizing beauty and grandeur while downplaying the brutality of slavery.

    That’s why sites like the Donato House in Louisiana are important. Built and owned by Martin Donato, a formerly enslaved man who later became a landowner – and, complicating the narrative, also a slaveholder – this modest home offers a counterpoint to the opulence of estates like Nottoway.

    Still in the hands of Donato’s descendants and slowly developing as a tourist site, the Donato House reflects the layered and often uncomfortable truths that challenge simple historical categories. Sites like this remind us that tourism plays a vital role in educating society about the complexity of our past. Heritage travel isn’t just about iconic landmarks; it’s about broadening our perspective, confronting historical bias and helping visitors to engage with the fuller, often uncomfortable, truths behind the stories we tell.

    Controlling the narrative: Who tells the story?

    What is chosen to be preserved – or let go of – shapes not only our memory of the past but our vision for the future.

    When the last generation with firsthand experience of a historical moment is gone, their stories remain in fragments – photos, recordings such as those in the National Archives, or family lore. Some memories are factual, others softened or sharpened with time. That’s the nature of memory: It changes with us.

    My late father, a high school history teacher, often reminded his students and his children to study the full spectrum of history: the good, the bad and the profoundly uncomfortable. He believed one must dive deep into its complexity to better understand human behavior and motivation.

    He was right. Tourism has always echoed the layered realities of the human experience. Now, as Americans reckon with what was lost at Nottoway, we’re left with the question: “What story will be told – and who will get to tell it?”

    The Conversation

    Betsy Pudliner is affiliated with ICHRIE.

    ref. Plantation tourism, memory and the uneasy economics of heritage in the American South – https://theconversation.com/plantation-tourism-memory-and-the-uneasy-economics-of-heritage-in-the-american-south-258558

  • Fixing Michigan’s teacher shortage isn’t just about getting more recruits

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Gail Richmond, Professor of Education, Michigan State University

    Finding good candidates to fill that teacher’s chair is no easy task. Brian van der Brug/Los Angeles Times via Getty Images

    Nearly 500 of Michigan’s 705 school districts reported teaching vacancies in the fall of 2023. That’s up from 262 districts at the beginning of the 2012 school year.

    The number of vacancies is likely an undercount, because this number does not include substitutes or unqualified teachers who may have been hired to fill gaps.

    Local news reports and job boards suggest that at least some Michigan districts are still struggling to fill open positions for the fall of 2025.

    The teacher shortage is a nationwide problem, but it is especially acute in Michigan, where both the rate of teachers leaving the profession and the overall teacher shortage exceed the national average. The shortage is particularly severe in urban and rural communities, which have the most underresourced schools, and in specialization areas such as science, mathematics and special education.

    For more than two decades, my work at Michigan State University has centered on designing and leading effective teacher preparation programs. My research focuses on ways to attract people to teaching and keep them in the profession by helping them grow into effective classroom leaders.

    Low pay and lack of support

    Teacher shortages are the result of a combination of factors, especially low salaries, heavy workloads and a lack of ongoing professional support.

    A report released last year, for example, found that Michigan teachers and teachers nationwide make about 20% less than those in other careers that also require a college education.

    From my experience working with teachers and district leadership across the state, I know that beginning teachers – especially those in districts that have severe shortages – are often given the most challenging teaching loads. And in some districts, teachers have been forced to work without any planning time in their daily schedule.

    The shortage was made much worse by the COVID-19 pandemic, which led many educators to leave the profession. Another factor is retirement: Many teachers in Michigan and nationwide were hired during the 1960s and early ’70s, when school enrollments surged, and over the past decade they have been retiring in large numbers.

    Creating pathways to certification

    One recent strategy to address the teacher shortage in Michigan has been to create nontraditional routes to teacher certification.

    The idea is to prepare educators more quickly and inexpensively. A variety of backers – from the Michigan Department of Education and state-level grant programs such as the Future Proud Michigan Educator program to private foundations and businesses – have helped these programs along financially.

    Even some school districts, including the Detroit Public Schools Community District, have adopted this strategy in order to certify teachers and fill vacant positions.

    A modern-looking multi-story building made from glass and red cladding materials
    Cass Technical High School is a magnet school in midtown Detroit.
    Wikimedia Commons, CC BY-ND

    Other similar programs are the product of partnerships between Michigan’s intermediate school districts, community colleges and four-year colleges and universities. One example is Grand Valley State University’s Western Michigan Teacher Collaborative, which targets interested students of college age. Another is MSU’s Community Teacher Initiative, designed to attract students into teaching while they are still in high school.

    Perhaps even more visible are national programs such as Teachers of Tomorrow and Teach for America. Candidates in such programs often work as full-time teachers while completing teacher training coursework with minimal oversight or support.

    ‘Stuffing the pipeline’ is not the solution

    But simply “stuffing the pipeline” with new recruits is not enough to solve the teacher-shortage problem in Michigan.

    The loss of teachers is significantly higher among individuals in nontraditional training programs and among teachers of color. It starts while they are preparing to be certified and continues for several years after certification.

    The primary reasons for the higher attrition rates include a lack of awareness of the complexity of schools and schooling, the lack of effective mentoring during the certification period, and the absence of instructional and other professional guidance in the early years of teaching.

    How to repair the leaky faucet

    So how can teachers be encouraged to stay in the profession?

    Here are a few of the things scholars have learned to improve outcomes in traditional and nontraditional preparation programs:

    Temper expectations. Teaching is a critically important career, but leading individuals to believe that they can repair the damage done by a complex set of socioeconomic issues – including multigenerational poverty and lack of access to healthy and affordable food, housing, drinking water and health care – puts beginning teachers on a short road to early burnout and departure.

    Give student teachers strong mentors. Working in schools helps student teachers deepen their knowledge not only of teaching but also of how schools, families and communities work together. But these experiences are useful only if they are overseen and guided by an experienced and caring educator and supported by the organization’s leadership.

    Recognize the limits of online learning. Online teacher preparation programs are convenient and have their place but don’t provide student teachers with real-world experience and opportunities for guided discussion about what they see, hear and feel when working with students.

    Respect the process of “becoming.” Professional support should not end when a new teacher is officially certified. Teachers, like other professionals such as nurses, doctors and lawyers, need time to develop skills throughout their careers.

    Providing this support sends a powerful message: that teachers are valued members of the community. Knowing that helps them stay in their jobs.

    The Conversation

    Gail Richmond does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Fixing Michigan’s teacher shortage isn’t just about getting more recruits – https://theconversation.com/fixing-michigans-teacher-shortage-isnt-just-about-getting-more-recruits-252606

  • PBS accounts for nearly half of first graders’ most frequently watched educational TV and video programs

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Rebecca Dore, Director of Research of the Crane Center for Early Childhood Research and Policy, The Ohio State University

    Rep. Robert Garcia, a California Democrat, speaks during a House hearing in March 2025, months before Congress rescinded two years of public media funding. Nathan Posner/Anadolu via Getty Images


    At U.S. President Donald Trump’s request, Congress voted in July 2025 to claw back US$1.1 billion it had previously approved for the Corporation for Public Broadcasting. That measure, which passed in the House and the Senate by very narrow margins, will cut off all federal tax dollars that would have otherwise flowed to PBS and its affiliated TV stations for the next two fiscal years.

    The public media network has played a crucial role in producing educational TV programs, especially for children, for nearly 60 years. It has been getting 15% of its budget in recent years from the federal government. Many of its affiliate stations are far more reliant on Washington than that – leading to a flurry of announcements regarding planned program cuts.

    “Sesame Street” is still in production, joined by newer TV shows like “Wild Kratts” and “Daniel Tiger’s Neighborhood.” PBS KIDS, in addition to producing popular age-appropriate programs, has a website and multiple apps with games and activities that provide other opportunities for learning.

    Local PBS affiliate stations offer educational programming and other resources for schools, families and communities.

    I’m a child development researcher studying how kids engage with digital media and how educational programming and other kinds of content help them learn. I also have two children under 5, so I’m now immersed in children’s media both at work and at home.

    What kids watch

    In a study about the kinds of media kids consume, published in the Journal of Applied Developmental Psychology in June 2025, my colleagues and I surveyed the parents and other caregivers of 346 first graders. The participants listed the TV shows, videos, apps and games the kids used the most.

    Our research team then used a systematic coding process to look at how much children access educational programming in their favorite media – whether it’s through their favorite TV shows, web videos or video games.

    We found that only 12% of this content could be described as educational. That amount varied widely: For some children, according to the adults we surveyed, educational media made up three to five of their top sources; others listed no educational media at all.

    We also looked into who is taking advantage of educational media.

    Our team found no differences in kids’ educational media use according to how many years of education their parents had. That finding suggests that kids of all backgrounds are equally likely to consume it.

    A tween boy plays a videogame with two screens.
    The vast majority of the media that kids consume has little educational value.
    Neilson Barnard/Getty Images

    The role of PBS

    This peer-reviewed study didn’t break down our results by specific media outlets. But in light of the cessation of federal funding, I wanted to find out how much of the educational content that children watch comes from PBS.

    By revisiting our data with this objective in mind, I learned that PBS accounted for 45% of the educational TV or videos parents said their kids watched most often. This makes PBS the top source for children’s educational programming by far. Nickelodeon/Nick Jr. was in second place with 14%, and YouTube, at 9%, came in third.

    PBS accounted for a smaller portion, just 6%, of all educational apps and games. I believe that could be because a few non-PBS apps, like Prodigy and i-Ready, which can be introduced in school, dominate this category.

    ‘Daniel Tiger’s Neighborhood,’ a cartoon, will seem familiar to anyone who grew up watching ‘Mr. Rogers’ Neighborhood.‘

    An uncertain future

    Independent production companies that collaborate with PBS consult experts in child development and children’s media, and they conduct research throughout the production process – often in partnership with PBS KIDS – to see how children respond and learn.

    This rigorous production process can include observing children watching the show, conducting focus groups and surveying parents about their experiences. Producing this kind of thoughtfully crafted educational media takes a lot of time and money, but the process ensures that the programming is both fun for children and effective at helping them learn.

    What the end of federal funding will mean for PBS’ educational programming for kids is still unclear. But to me, it seems inevitable that my children – and everyone else’s kids – will have fewer research-informed and freely accessible options for years to come.

    At the same time, there will likely be no shortage of flashy and shallow content marketed to kids that offers little of value for their learning.

    The Conversation

    Rebecca Dore has conducted previous consulting work for PBS KIDS and engages with a PBS KIDS staff member who is a member of the advisory board for one of Dore’s current federally funded grants.

    ref. PBS accounts for nearly half of first graders’ most frequently watched educational TV and video programs – https://theconversation.com/pbs-accounts-for-nearly-half-of-first-graders-most-frequently-watched-educational-tv-and-video-programs-261996

  • National parks are key conservation areas for wildlife and natural resources

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Sarah Diaz, Associate Professor of Recreation and Sport Management, Coastal Carolina University

    A researcher collects water samples in Everglades National Park in Florida to monitor ecosystem health. AP Photo/Rebecca Blackwell

    The United States’ national parks have an inherent contradiction. The federal law that created the National Park Service says the agency – and the parks – must “conserve the scenery and the natural and historic objects and the wildlife … unimpaired for the enjoyment of future generations.”

    That means both protecting fragile wild places and making sure people can visit them. Much of the public focus on the parks is about recreation and enjoyment, but the parks are extremely important places for research and conservation efforts.

    These places contain a wide range of sensitive and striking environments: volcanoes, glaciers, sand dunes, marshlands, ocean ecosystems, forests and deserts. And these areas face a broad variety of conservation challenges, including the effects of climate change, the perils of popularity driving crowds to some places, and the Trump administration’s reductions to park service staff and funding.

    As scholars of recreation who study the national parks and teach a course on them, we have seen the park service make the parks far more than recreational destinations. They are living laboratories where researchers – park service personnel and others – study nature across wide-ranging ecosystems and apply what they learn to inform public and private conservation efforts around the country.

    A group of wolves on a snowy landscape.
    Gray wolves, long native to the Yellowstone area, were reintroduced to the national park in the mid-1990s and have helped the entire ecosystem flourish since.
    National Park Service via AP

    Returning wolves to Yellowstone

    One of the best known outcomes of conservation research in park service history is still playing out in the nation’s first national park, Yellowstone.

    Gray wolves once roamed the forests and mountains, but government-sanctioned eradication campaigns, launched in the late 1800s and early 1900s to protect livestock, had hunted them to near extinction in the lower 48 states by the mid-20th century. In 1974, the federal government declared that gray wolves needed the protections of the Endangered Species Act.

    Research in the park found that the ecosystem required wolves as apex predators to maintain a healthy balance in nature.

    In the mid-1990s, an effort began to reintroduce gray wolves to Yellowstone National Park. The project brought 41 wolves from Canada to the park. The wolves reproduced and became the basis of a Yellowstone-based population that has numbered as many as 120 and in December 2024 was estimated at 108.

    The return of wolves has not only drawn visitors hoping to see these beautiful and powerful predators; it has also triggered what scholars call a “trophic cascade,” in which wolves reduce elk numbers, allowing willow and aspen trees to survive to maturity and restore dense groves of vegetation across the park.

    Increased vegetation in turn led to beaver population increases as well as ecosystem changes brought by their water management and engineering skills. Songbirds also came back, now that they could find shade and shelter in trees near water and food sources.

    A bear climbs a tree.
    Since the establishment of Great Smoky Mountains National Park in 1934, black bear populations have rebounded in the park.
    Great Smoky Mountains National Park via AP

    Black bear protection in the Great Smoky Mountains

    Great Smoky Mountains National Park is the most biologically diverse park in the country, with over 19,000 species documented and another 80,000 to 100,000 species believed to be present. However, the forests of the Appalachian Mountains were nearly completely clear-cut in the late 1800s and early 20th century, during the early era of the logging industry in the region.

    Because their habitat was destroyed, and because they were hunted, black bears were nearly eradicated. By 1934, when Great Smoky Mountains National Park was designated, there were only an estimated 100 bears left in the region. Under the park’s protection, the population rebounded to an estimated 1,900 bears in and around the park in 2025.

    Much like the gray wolves in Yellowstone, bears are essential to the health of this ecosystem by preying on other animals, scavenging carcasses and dispersing seeds.

    Water preservation in the Everglades

    The Everglades are a vast subtropical ecosystem located in southern Florida. They provide drinking water and irrigation to millions of people across the state, help control storm flooding and are home to dozens of federally threatened and endangered species such as the Florida panther and American alligator.

    When Everglades National Park was created in 1947, it was the first time a U.S. national park had been established to protect a natural resource for more than just its scenic value.

    As agriculture and surrounding urban development continue to pollute this natural resource, park professionals and partner organizations have focused on improving habitat restoration, both for the wildlife and for humans’ water quality.

    A large tawny cat springs across an area of gravel and grass.
    A Florida panther, rescued as a kitten, is released into the wild in the Everglades in 2013.
    AP Photo/J Pat Carter

    Inspiring future generations

    To us, perhaps the most important work in the national parks involves young people. Research shows that visiting, exploring and understanding the parks and their ecosystems can foster deep connections with natural spaces and encourage younger generations to take up the mantle of stewardship of the parks and the environment as a whole.

    With their help, the parks – and the landscapes, resources and beauty they protect – can be preserved for the benefit of nature and humans, in the parks and far beyond their boundaries.

    The Conversation

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. National parks are key conservation areas for wildlife and natural resources – https://theconversation.com/national-parks-are-key-conservation-areas-for-wildlife-and-natural-resources-261644

  • If everyone in the world turned on the lights at the same time, what would happen?

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Harold Wallace, Curator, Electricity Collections, National Museum of American History, Smithsonian Institution

    This combined satellite image shows how Earth’s city lights would look if it were night around the entire planet at once. White areas of light show cities with larger populations. NASA/Goddard Space Flight Center

    Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


    If everyone in the world turned on the lights at the same time, what would happen? – Clara


    The biggest effect of everyone turning on the lights at the same time would be a surge in demand for electricity, the form of energy most people worldwide use to power their lights.

    Electricity is a form of energy that is made using many different fuels. Power plants are electricity factories that generate electricity from sources including coal, natural gas, uranium, water, wind and sunlight. Then they feed it into a network of transmission and distribution wires called the power grid, which delivers the electricity to homes and businesses.

    To keep the grid stable, electricity must be supplied on demand. When someone turns on a light, they draw power from the grid. A generator must immediately feed an equal amount of power into the grid. If the system gets out of balance, even for a few seconds, a blackout can happen.

    System operators use sensors and sophisticated computers to track electricity demand so they can adjust power production up or down as needed. Total power demand, which is called load, varies a lot from hour to hour and season to season. To see why, think of how much electricity your home uses during the day compared with the middle of the night, or during a summer heat wave compared with a cool fall day.

    Charts showing 2019 U.S. electricity consumption nationwide, with seasonal and weekly patterns.
    These images show patterns of electricity use. Through the year (large graph), people use more electricity for summer cooling and winter heating than in spring and fall. Weekly, consumption drops on weekends, when many businesses are closed.
    U.S. Energy Information Administration, Hourly Electric Grid Monitor

    Meeting a demand spike

    If everyone turned on their lights all at once around the world, they would create a huge, sudden demand for electricity. Power plants would have to ramp up generation very quickly to avoid a system crash. But these plants respond to changing demand in different ways.

    Coal and nuclear plants can provide lots of electricity at almost any time, but if they’re shut off for maintenance or they malfunction, they can take many hours to bring back online. They also respond slowly to load changes.

    Power plants that burn natural gas can respond more quickly to changing load, so they typically are the tool of choice to cover periods when the most electricity is needed, such as hot, sunny summer afternoons.

    Renewable electricity sources such as solar, wind and water power produce less pollution but are not as easily controlled. That’s because the wind doesn’t always blow at the same speed, nor is every day equally sunny in most places.

    Grid managers use large batteries to smooth out power flow as demand rises and falls. But it’s not yet possible to store enough electricity in batteries to run an entire town or city. The batteries would be too expensive and would drain too quickly.

    Some hydropower operators can pump water into lakes during periods of low demand, then release that water to generate electricity when demand is high by running it through machines called turbines.

    Fortunately, if everyone turned on their lights at once, two things would work to prevent a total system crash. First, there is no single worldwide power grid. Most countries have their own grids, or multiple regional grids.

    Neighboring grids, such as those in the United States and Canada, are typically connected so that countries can move electricity across their borders. But they can disconnect quickly, so even if the power went out in some areas, it’s unlikely that all the grids would crash at once.

    Second, over the past 20 years, light bulbs called LEDs have replaced many older electric lights. LEDs operate differently from earlier light bulb designs and produce much more light from each unit of electricity, so they require much less power from the grid.

    According to the U.S. Department of Energy, using LED bulbs saves the average household about US$225 yearly. As of 2020, nearly half of all U.S. homes used LEDs for most or all of their lighting needs.
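    To get a feel for the scale involved, here is a rough back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption – world population, a typical LED bulb’s wattage and the average global electric load – not data from this article:

```python
# Rough sketch: how big would the surge be if everyone switched on
# one light at the same moment? All figures below are assumptions.

WORLD_POPULATION = 8e9        # people (assumption)
LED_BULB_WATTS = 10           # a typical LED bulb (assumption)
GLOBAL_AVG_LOAD_W = 3e12      # ~3 TW average global electric load (assumption)

# Extra demand from one bulb per person, all at once.
surge_watts = WORLD_POPULATION * LED_BULB_WATTS

# Compare the surge to the load the world's grids already carry.
relative_increase = surge_watts / GLOBAL_AVG_LOAD_W

print(f"Extra demand: {surge_watts / 1e9:.0f} GW")        # → Extra demand: 80 GW
print(f"Relative increase: {relative_increase:.1%}")      # → Relative increase: 2.7%
```

    Under these assumptions the spike is tens of gigawatts – enormous in absolute terms, but only a few percent of what grids already supply, which is consistent with the article’s conclusion that the increase in consumption would be modest.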

    LEDs, or light-emitting diodes, are semiconductor devices that generate light with almost no heat.

    More glare, fewer stars

    Beyond powering lights, it’s also important to think about where all that light would go. A big spike in lighting would dramatically increase sky glow – the hazy brightness that hangs over towns and cities at night.

    Sky glow happens when light reflects off haze and dust particles in the air, creating a diffuse glow that washes out the night sky. Light is very difficult to control: For example, it can reflect off bright surfaces, such as car windows and concrete.

    Lighting is often overused at night. Think of empty office buildings where lights burn around the clock, or street lights that shine upward instead of down on streets and sidewalks where illumination is needed.

    A Joshua tree silhouetted against a starry night sky, with orange glow from artificial lights on the horizon.
    Night sky in California’s Joshua Tree National Park, with light pollution from artificial lights in the Coachella Valley.
    NPS/Lian Law

    Even well-designed lighting systems can add to the problem, making cities and highways visible from space and the stars invisible from the ground. This light pollution can harm human health by interfering with our bodies’ natural sleep and waking cycles. It can also disorient insects, birds, sea turtles and other wildlife.

    If people worldwide all turned on their lights at once, we’d see a modest increase in power consumption, but a lot more sky glow and no stars in the night sky. That’s not a very enticing view.


    Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

    And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

    The Conversation

    Harold Wallace is a member of the Illuminating Engineering Society.

    ref. If everyone in the world turned on the lights at the same time, what would happen? – https://theconversation.com/if-everyone-in-the-world-turned-on-the-lights-at-the-same-time-what-would-happen-256175

  • Research: Endemic anoa and babirusa show surprising resilience on small islands

    Source: ForeignAffairs4

    Source: The Conversation – Indonesia – By Sabhrina Gita Aninta, Postdoctoral research fellow, University of Copenhagen

    ● Small-island populations are thriving despite their small numbers.

    ● Small islands can be natural refugia for endangered megafauna.

    ● Protecting ecosystems on small islands is crucial for national conservation plans.


    Animal populations on small islands are often thought to be unlikely to survive in the long term. Continued exploitation of small islands—such as mining in Raja Ampat, West Papua—poses a serious threat to local wildlife.

    Governments often overlook biodiversity in these island ecosystems. Some small Indonesian islands are even listed for sale on websites like Private Island.

    However, our new study in the Proceedings of the National Academy of Sciences found that endemic large mammals on small islands may, in fact, be thriving.

    This finding is based on the genomic sequencing of two endemic species from the Wallacea region: the anoa (a dwarf buffalo) and the babirusa (a pig with upward-curving tusks that resemble antlers).

    Although their populations are small and genetically less diverse, anoa and babirusa appear to thrive better on smaller islands than on larger ones. This could help them survive in the long term—contrary to previous assumptions.

    In other words, small islands can serve as natural refuges for their native biodiversity—provided their ecosystems remain undisturbed. Thus, protecting these ecosystems is essential for their survival.

    Resilient small population of large mammals

    In theory, large-bodied mammals on small islands are prone to extinction due to limited mating opportunities. Restricted movement can lead to inbreeding, which reduces genetic diversity and jeopardises long-term health.

    However, that may not be the full story. Through genomic analysis, we explored the population history of the anoa and babirusa to uncover what has happened over the past few hundred generations.

    We sequenced the whole genomes of 67 anoa and 46 babirusa from across the Wallacea islands, including the large island of Sulawesi (in the north and southeast regions) and nearby smaller islands such as Buton and Togean.

    We found that anoa and babirusa on Buton and Togean had lower genome-wide diversity and higher levels of inbreeding. Surprisingly, however, these populations were more efficient at purging harmful mutations compared to those on the larger island.

    This suggests that small-island populations, having been isolated for long periods, have undergone natural genetic filtering—leaving individuals that are genetically “safe” and capable of thriving.

    In contrast, populations on the larger Sulawesi Island carry a higher “genetic load.” This is likely a consequence of external, human disturbances such as forest degradation, mining, hunting, and poaching, which have fragmented their habitats and populations. As a result, these groups may be more genetically compromised.

    According to the Kunming-Montreal Global Biodiversity Framework (2022), maintaining a sufficient effective population size (Ne) is crucial for long-term species survival. To avoid the risk of extinction, an Ne of at least 500—or roughly 5,000 individuals in total census size—is recommended.

    Illustration of time to extinction based on effective population size (Ne), reproduced from the Kunming-Montreal Global Biodiversity Framework metadata page for Target A4: https://www.gbf-indicators.org/metadata/headline/A-4. The smaller the Ne, the faster genetic diversity is lost. The left panel shows that not all individuals in the census population (Nc) contribute genetically.
    CC BY-SA
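    The relationship between the two population measures in the framework can be sketched in a few lines. The 0.1 ratio below is simply what the framework's own numbers imply (Ne of at least 500 corresponding to a census of roughly 5,000); actual Ne/Nc ratios vary widely between species.

    ```python
    # Hedged sketch of the Ne-vs-census relationship described above.
    # The 0.1 ratio is implied by the framework's figures (Ne >= 500
    # ~ census of ~5,000); it is illustrative, not species-specific.

    NE_THRESHOLD = 500        # recommended minimum effective population size
    NE_TO_CENSUS_RATIO = 0.1  # assumed Ne/Nc ratio (illustrative)

    def census_needed(ne_target: float,
                      ratio: float = NE_TO_CENSUS_RATIO) -> float:
        """Census size implied by a target effective population size."""
        return ne_target / ratio

    print(census_needed(NE_THRESHOLD))  # -> 5000.0
    ```

    The point of the figure stands out in the code: because only a fraction of counted animals contribute genetically, the census target must be much larger than the effective-population target.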

    Interestingly, our findings show that even small populations can remain viable over the long term—so long as they are protected from intense external pressures such as habitat loss, hunting, or disease outbreaks.

    Therefore, before conducting any animal translocation to boost genetic diversity, it’s critical to carefully assess the ecological and genetic context of each population.






    Small islands as refugia

    Our study shows that mammals on small islands can be genetically resilient, even with small population sizes.

    Unfortunately, small-island habitats are often overlooked in national development plans.

    While conservation of small islands is legally regulated, the reality on the ground is starkly different. Indonesia’s outermost islands have frequently been allocated for resource exploitation—often without adequate protection of their ecosystems.

    Wallacea is just one example among many island groups that act as a natural laboratory for evolution. These islands have nurtured unique species for millions of years—species that are irreplaceable once lost.

    As an archipelagic nation, Indonesia must prioritise biodiversity conservation by putting greater focus on habitat protection on small islands.

    These islands can serve as natural refuges for endemic species—offering a more cost-effective and ecologically sound alternative to artificial captive breeding programmes.

    The Conversation

    This research is a collaboration between researchers from Queen Mary University of London (QMUL), Ludwig Maximilian University of Munich (LMU) in Germany, and Universitas Indonesia, with the support of joint funding from NERC-Ristekdikti. Sabhrina Gita Aninta was funded by QMUL for the doctoral study that resulted in this research.

    ref. Research: Endemic anoa and babirusa show surprising resilience on small islands – https://theconversation.com/research-endemic-anoa-and-babirusa-show-surprising-resilience-on-small-islands-261063

  • Drones, disinformation and guns-for-hire are reshaping conflict in Africa: new book tracks the trends

    Source: ForeignAffairs4

    Source: The Conversation – Africa (2) – By Alessandro Arduino, Affiliate Lecturer, King’s College London

    Alessandro Arduino has researched Africa’s security affairs with a particular focus on the use of private military companies and other guns-for-hire across the continent. In his latest book, Money for Mayhem, Arduino examines how military privatisation intersects with international power dynamics. Drawing on fieldwork, interviews and firsthand data, he tracks actors from Russia, China and the Middle East to explore how they profit from instability across Africa.

    What war trends did you identify in your book?

    In Money for Mayhem, I chart the rise of mercenaries, private military companies and hackers-for-hire, alongside emerging technologies like armed drones.

    Nowhere has this trend taken hold more readily than in Africa. The continent is flush with natural resources that offer lucrative gains, but is hobbled by weak post-coup states desperate for foreign support. It has also been fractured by power vacuums and by ineffective or weak regional and continental institutions, which enable militant networks.

    As a result, mercenaries and contractors have returned to the central stage in Africa. They were once the not-so-hidden hand in post-colonial civil wars, such as in Angola in the 1970s and Sierra Leone in the mid-1990s where highly trained mercenaries profited from the conflict.

    Today, guns for hire wield profound geopolitical influence.

    What did you find out about the key players?

    Take Russia’s Wagner Group. It continues to be active from Libya to Sudan. The group is known for deploying paramilitary forces, conducting disinformation campaigns and supporting powerful political figures from Mali to the Central African Republic. Following its leader’s death in 2023, the Wagner Group shifted its operations. Rebranded as the Africa Corps, the group serves as a key instrument of Moscow’s influence on the continent.

    Then there are Turkish private military outfits operating from Tripoli to Mogadishu. Turkey’s private military companies are fast becoming a key instrument in President Recep Erdogan’s foreign policy. What sets these companies apart is their ability to pair boots on the ground with Turkey’s battle-proven armed drones. This fusion of a rentable army and an off-the-shelf air force could become a powerful export, serving Ankara’s political and economic ambitions in Africa.

    Then there are the Chinese private security companies, protecting Chinese investments and citizens in Africa. Their rise mirrors Beijing’s deepening footprint, where it is pouring billions into infrastructure and mining projects. In volatile nations like the Democratic Republic of Congo, Sudan and South Sudan, weak and unreliable local security forces have created a vacuum that’s being filled by Chinese security contractors.

    Through the ages, the mercenary’s paradox has endured: despised yet indispensable. Their business thrives on perpetual chaos. Every ceasefire threatens their livelihood.

    This dynamic was evident after Muammar Gaddafi’s fall in 2011 in Libya. Both the Government of National Accord in Tripoli and the rival Libyan National Army in the east turned to international mercenaries such as the Wagner Group and fighters from sub-Saharan Africa. This heavy dependence on foreign fighters obstructs national reconciliation.

    The Wagner tale is instructive. Once a Kremlin proxy in resource-rich Africa, the group amassed its own power. It was dismantled when it outlived its usefulness. The dispatch of Russian generals to negotiate Wagner’s fate in 2023 from Libya to Niger was a lesson in power: the puppeteer remains firmly in control.

    Russia’s foreign and defence ministries moved swiftly to reassure Middle Eastern and African partners that operations would continue uninterrupted after the death of Wagner’s leader. This signalled that unofficial Russian forces would maintain their presence on the ground.

    What is happening that’s new?

    The revolution in modern warfare is evident across Africa. Mercenaries, armed drones and AI-driven disinformation campaigns are redefining conflict. Today’s battlefields are evolving at such a dizzying pace that even seasoned military experts are routinely caught flatfooted.

    The speed of change is unprecedented.

    Drones, once the province of great powers, have become commonplace. Inexpensive, lethal, versatile and ever more autonomous, they patrol the skies daily, ushering in a remote-warfare era that upends ethical, strategic and tactical norms.

    The cost of a suicide drone, for instance, typically runs into a few thousand US dollars. A battle tank averages US$3–4 million. Three such drones and a skilled pilot can destroy a single tank, dramatically shifting the cost-benefit equation on the modern battlefield.
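    The cost asymmetry in that paragraph can be made concrete with some quick arithmetic. The tank figure is the midpoint of the article's US$3–4 million range; the per-drone price is an assumed round number within the article's "few thousand dollars" description.

    ```python
    # Hedged sketch of the cost asymmetry described above; unit prices
    # are the article's rough figures, not market data.

    TANK_COST = 3_500_000  # midpoint of the US$3-4 million range
    DRONE_COST = 5_000     # assumed price of one suicide drone
    DRONES_PER_TANK = 3    # drones expended per tank, per the article

    attack_cost = DRONES_PER_TANK * DRONE_COST
    ratio = TANK_COST / attack_cost
    print(f"Destroying a ${TANK_COST:,} tank costs ~${attack_cost:,} "
          f"in drones: roughly a {ratio:.0f}-to-1 cost asymmetry")
    ```

    Under these assumptions the attacker spends on the order of a few hundred times less than the value destroyed, which is the shift in the cost-benefit equation the author describes.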

    Africa was an early proving ground: drones shaped the Libyan civil war. Since 2019, multiple incidents of precision air strikes conducted by unknown aircraft have occurred in apparent violation of a United Nations arms embargo.

    In early 2025, drones served as an off-the-shelf air force in the bombing of Port Sudan. Explosions rocked the vital humanitarian gateway in Sudan’s ongoing civil war between the Sudanese Armed Forces and the paramilitary Rapid Support Forces.

    Sudan’s army pinned these strikes on the Rapid Support Forces, highlighting the paramilitary group’s deadly embrace of drone warfare. Lacking a formal air force, drones offer the Rapid Support Forces a low-cost, high-lethality shortcut that delivers devastating blows while cloaking its operators in plausible deniability.

    How else is the warfare landscape changing?

    War is now being waged on other fronts as well.

    Africa’s youthful population consumes information primarily via social media. This provides fertile ground for propaganda, disinformation and misinformation – amplified by artificial intelligence (AI) at minimal cost.

    Deepfakes have burst onto the scene as a dire cybersecurity threat. AI-driven disinformation at an industrial scale is already a reality, magnifying hate speech and targeting the message to intended audiences with precision and at very low cost.

    For example, TikTok’s own recommendation engine has already come under fire from African human rights groups for amplifying toxic rhetoric.

    Already, false narratives thrive in Africa all on their own. AI’s true danger lies in its ability to turbocharge disinformation.

    Governments recognise that defending the homeland no longer means guarding cables and servers alone. It also means safeguarding the integrity of information itself.

    What needs to be done?

    Based on my findings, I argue that the fractures today are tomorrow’s global crises. War has irrevocably changed, and its next phase is already upon us.

    Marshalling global vigilance is a categorical imperative – or the world risks ceding control over violence. Building international consensus on already available enforcement mechanisms to regulate non-state armed actors is needed. There is also a need to strengthen global intelligence sharing to track the movements and influence of mercenaries across conflict zones.

    The Conversation

    Alessandro Arduino is an Associate Fellow at the Royal United Services Institute (RUSI)

    ref. Drones, disinformation and guns-for-hire are reshaping conflict in Africa: new book tracks the trends – https://theconversation.com/drones-disinformation-and-guns-for-hire-are-reshaping-conflict-in-africa-new-book-tracks-the-trends-262256

  • Ubuntu matters: rural South Africans believe community care should go hand-in-hand with development

    Source: ForeignAffairs4

    Source: The Conversation – Africa – By Simphiwe Gongqa, PhD candidate, Rhodes University

    The failure of many development initiatives has led some scholars, especially those associated with the post-development and decolonial schools of thought, to call for alternatives to development.

    The idea of development is a very influential way of explaining inequalities between different parts of the world. Most people think of some parts of the world as ‘developed’ and others as ‘developing’ and believe that those in the ‘developing’ world need to follow in the footsteps of those ahead of them on a universal path to development.

    However, critics of development reject this way of thinking. They believe that development damages the environment and is a form of cultural imperialism and that people should rather look to Indigenous concepts and practices to find alternative ways to live a good life. The African concept of Ubuntu is often mentioned.

    This term can be explained with reference to the isiZulu saying ‘umuntu ngumuntu ngabantu’ which means ‘a person is a person through other people’. It entails an ethics of care, compassion and cooperation.

    Concepts like Ubuntu are often contrasted with the idea of development. Advocates of alternatives believe that people in the Global South can draw on these concepts, rather than the idea of development, in order to improve their lives.

    We both study development and are interested in how communities in Africa understand development, including the question of whether or not people in Africa are pursuing alternatives to development.

    Based on our work, we contributed a chapter to a recent book which explores the question of alternatives to development in the Global South. Our contribution to this book looks specifically at the question of how South Africans understand development and Ubuntu and whether they see Ubuntu as a possible alternative to development.

    We spoke to people living in four marginalised communities in KwaZulu-Natal and the Eastern Cape. Such communities would be regarded by mainstream development thinkers as in need of development. These communities were also chosen because the people living there would be likely to have some understanding of the concept of Ubuntu as residents are isiZulu or isiXhosa speakers, two of the sociolinguistic groups commonly associated with the idea of Ubuntu.

    We found that people in these communities value both development and Ubuntu and see the two concepts as related to each other, but not necessarily in the way that either development or post-development theorists imagine. This study supports our previous research suggesting that people continue to value development.

    Respondents’ views on development and Ubuntu

    There were some differences in the way in which the communities spoke about development and Ubuntu. The KwaZulu-Natal communities placed emphasis on infrastructure, education and health, when asked to define how they understand development.

    Typical responses of KwaZulu-Natal residents to the question ‘What is development?’ included:

    • We want development … in order to have roads, [government housing], clinics and farming initiatives.

    • When we say that a place is developed, we see schools, libraries, roads, churches and clinics.

    • Things like water, houses [government housing], electricity, and sewerage systems.

    • There should be libraries, schools, houses [government housing], water, electricity, sewerage systems and hospitals.

    In the Eastern Cape, where only rural respondents were interviewed, residents mentioned infrastructure (roads, houses and schools) less often than those in KwaZulu-Natal and placed greater emphasis on income-generation opportunities, employment opportunities and support for farming. Some of the responses are given below:

    • Development means the creation of jobs to me.

    • Development means building. For example, building creches in the village, planting crops and creating jobs.

    • Development is growth. For example, rearing chickens and other animals for you to grow financially.

    When defining Ubuntu, respondents emphasised care, compassion, cooperation, helpfulness, mutual respect, harmony, consideration, dignity and a willingness to share.

    Here are some of the typical responses given when people were asked to define Ubuntu:

    It is being able to live with one another, you see. A person is a person because of other people kind of thing, and you must get along with all people and there shouldn’t be a person that you hate. You must be able to help another person in need if you can and there must be harmony with everyone.
    Ubuntu is about unity and empathy and love, yes. If we speak of Ubuntu, we speak of thinking for each other, and helping each other.

    When asked about the relationship between Ubuntu and development, most respondents suggested that Ubuntu and development can and should work together.

    Respondents commonly argued that development could best be advanced if people showed Ubuntu, which was presented as an ethic of care and cooperation. Consider the following comment:

    [Development and Ubuntu] go hand in hand because when I have something, I have to pull up a person that I see who is struggling and place them at an equal footing with me or maybe higher than me. I don’t look down on them because they are struggling, and I shouldn’t watch them walk to town everyday whilst I have a car that can help them because they are disadvantaged. If I have food, and a fellow person is hungry; I must give them food for free, yes, that is Ubuntu.

    The strong sense from our interviews is that people want development (understood as the provision of basic services and the general improvement of their lives) and they want it to be brought about in a way that is characterised by an ethics of Ubuntu (understood as an ethic of care and cooperation).

    Advocates of alternatives need to be cautious

    Our research suggests that at least some Global South communities engage with concepts like Ubuntu and development in ways that do not support claims that people should abandon development and live according to Indigenous concepts and practices to have a better life. Rather than viewing Ubuntu as an alternative to development, the people we interviewed suggest that development and Ubuntu are complementary.

    When seeking to articulate alternatives, it is important to be attentive to what people mean by development and Ubuntu so that activists and scholars from different communities can work together to build better lives for all.

    We acknowledge the role of Nhlanhla Mkhutle who conducted the KwaZulu-Natal fieldwork for this study and who co-authored the chapter upon which this article is based.

    The Conversation

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Ubuntu matters: rural South Africans believe community care should go hand-in-hand with development – https://theconversation.com/ubuntu-matters-rural-south-africans-believe-community-care-should-go-hand-in-hand-with-development-259422

  • The treaty meant to control nuclear risks is under strain 80 years after the US bombings of Hiroshima and Nagasaki

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Stephen Herzog, Professor of the Practice, Middlebury Institute of International Studies at Monterey, Middlebury

    The city of Hiroshima was destroyed when the United States dropped atomic bomb “Little Boy” on Aug. 6, 1945. Hulton Archive/Getty Images

    Eighty years ago – on Aug. 6 and 9, 1945 – the U.S. military dropped atomic bombs on Hiroshima and Nagasaki, Japan, thrusting humanity into a terrifying new age. In mere moments, tens of thousands of people perished in deaths whose descriptions often defy comprehension.

    The blasts, fires and lingering radiation effects caused such tragedies that even today no one knows exactly how many people died. Estimates place the death toll at up to 140,000 in Hiroshima and over 70,000 in Nagasaki, but the true human costs may never be fully known.

    The moral shock of the U.S. attacks reverberated far beyond Japan, searing itself into the conscience of global leaders and the public. It sparked a movement I and others continue to study: the efforts of the international community to ensure that such horrors are never repeated.

    Two people in protective clothing, helmets and masks stand near blue barrels outside a building.
    Inspectors from the International Atomic Energy Agency visit an Iraqi nuclear facility in 2003, seeking to ensure that the country did not use peaceful nuclear materials to develop weapons.
    Ramzi Haidar/AFP via Getty Images

    Racing toward the brink

    The memories of Hiroshima and Nagasaki cast a long shadow over global efforts to contain nuclear arms. The 1968 Treaty on the Non-Proliferation of Nuclear Weapons, more commonly known as the Nuclear Nonproliferation Treaty, was a powerful, if imperfect, effort to prevent future nuclear catastrophe. Its creation reflected not just morality, but also the practical fears and self-interests of nations.

    As the years passed, views of the bombings as justified acts began to shift. Harrowing firsthand accounts from Hibakusha – the survivors – reached wide audiences. One survivor, Setsuko Thurlow, described the sight of other victims:

    “It was like a procession of ghosts. I say ‘ghosts’ because they simply did not look like human beings. Their hair was rising upwards, and they were covered with blood and dirt, and they were burned and blackened and swollen. Their skin and flesh were hanging, and parts of the bodies were missing. Some were carrying their own eyeballs.”

    Nuclear dangers increased further with the advent of hydrogen bombs, or thermonuclear weapons, capable of destruction far greater than the atomic bombs dropped on Hiroshima and Nagasaki. What had once seemed a decisive end to a global war now looked like the onset of an era wherein no city or civilization would truly be safe.

    These shifting perceptions shaped how nations viewed the nuclear age. In the decades following World War II, nuclear technology rapidly spread. By the early 1960s, the United States and the Soviet Union aimed thousands of nuclear warheads at one another.

    Meanwhile, there were concerns that countries in East Asia, Europe and the Middle East would acquire the bomb. U.S. President John F. Kennedy even warned that “15 or 20 or 25 nations” might be able to develop nuclear weapons during the 1970s, resulting in the “greatest possible danger” to humanity – the prospect of its extinction. This warning, like much of the early nonproliferation rhetoric, drew its urgency from the legacies of Hiroshima and Nagasaki.

    Perhaps the starkest indication of the gravity of the stakes emerged during the Cuban missile crisis of October 1962. For 13 days, the world teetered on the edge of nuclear annihilation until the Soviet Union withdrew its missiles from Cuba in exchange for the secret withdrawal of U.S. missiles from Turkey. During those long days, U.S. and Soviet leaders – and external observers – witnessed how quickly the risks of global destruction could escalate.

    Two ships steam side by side with an aircraft flying overhead.
    A Soviet freighter, center, is escorted out of Cuban waters by a U.S. Navy plane and the destroyer USS Barry during the 1962 Cuban missile crisis.
    Underwood Archives/Getty Images

    Crafting the grand bargain

    In the wake of such “close calls” – moments where nuclear war was narrowly averted due to individual judgment or sheer luck – diplomacy accelerated.

    Negotiations on a treaty to control nuclear proliferation continued at meetings of the Eighteen Nation Disarmament Committee in Geneva from 1965 to 1968. While the enduring horrors of Hiroshima and Nagasaki helped to drive the momentum, national interests largely shaped the talks.

    There were three groups of negotiating parties. The United States was joined by its NATO allies Britain, Canada, Italy and France – which only observed. The Soviet Union led a communist bloc containing Bulgaria, Czechoslovakia, Poland and Romania. And there were nonaligned countries: Brazil; Burma, now known as Myanmar; Ethiopia; India; Mexico; Nigeria; Sweden, which only joined NATO in 2024; and the United Arab Republic, now known as Egypt.

    For the superpowers, a treaty to limit the spread of the bomb was as much a strategic opportunity as a moral imperative. Keeping the so-called “nuclear club” small would not only stabilize international tensions, but it would also cement Washington’s and Moscow’s global leadership and prestige.

    U.S. leaders and their Soviet counterparts therefore sought to promote nonproliferation abroad. Perhaps just as important as ensuring nuclear forbearance among their adversaries was preventing a cascade of nuclear proliferation among allies that could embolden their friends and spiral out of control.

    Standing apart from these Cold War blocs were the nonaligned countries. Many of them approached the atomic age through a humanitarian and moral lens. They demanded meaningful action toward nuclear disarmament to ensure that no other city would suffer the tragic fate of Hiroshima and Nagasaki.

    The nonaligned countries refused to accept a two-tiered treaty merely codifying inequality between nuclear “haves” and “have-nots.” In exchange for agreeing to forgo the bomb, they demanded two crucial commitments that shaped the resulting treaty into what historians often describe as a “grand bargain.”

    The nonaligned countries agreed in the treaty to permit the era’s existing nuclear powers – Britain, China, France, the Soviet Union (later Russia) and the United States – to temporarily maintain their arsenals while committing to future disarmament. But in exchange, they were promised peaceful nuclear technology for energy, medicine and development. And to reduce the risks of anyone turning peaceful nuclear materials into weapons, the treaty empowered the International Atomic Energy Agency to conduct inspections around the world.

    People sit at a large table and sign documents.
    U.S. President Lyndon B. Johnson, right, looks on as Secretary of State Dean Rusk signs the Nuclear Non-Proliferation Treaty on July 1, 1968.
    Corbis via Getty Images

    Legacies and limits

    The treaty entered into force in 1970 and, with 191 member nations, is today among the world’s most universal accords. Yet, from the outset, its provisions faced limits. Nuclear-armed India, Israel and Pakistan have always rejected the treaty, and North Korea later withdrew to develop its own nuclear weapons.

    In response to evolving challenges, such as the discovery of Iraq’s clandestine nuclear weapons program in the early 1990s, International Atomic Energy Agency safeguard efforts grew more stringent. Many countries agreed to accept nuclear facility inspections on shorter notice and involving more intrusive tools as part of the initiative to detect and deter the development of the world’s most powerful weapons. And the countries of the world extended the treaty indefinitely in 1995, reaffirming their commitment to nonproliferation.

    The treaty represents a complex compromise between morality and pragmatism, between the painful memories of Hiroshima and Nagasaki and hard-edged geopolitics. Despite its many imperfections and its de facto promotion of nuclear inequality, the treaty is credited with limiting nuclear proliferation to just nine countries today. It has done so through civilian nuclear energy incentives and inspections that give countries confidence that their rivals are not building the bomb. Countries also put pressure on each other to obey the rules, such as when the international community condemned, sanctioned and isolated North Korea after it withdrew from the treaty and tested a nuclear weapon.

    But the treaty continues to face serious challenges. Critics argue that its disarmament provisions remain vague and unfulfilled, with some scholars contending that nonnuclear countries should exit the treaty to encourage the great powers to disarm. Nuclear-armed countries continue to modernize – and in some cases, expand – their arsenals, eroding trust in the grand bargain.

    Armed soldiers walk next to a barbed-wire fence.
    Tensions between India and Pakistan can often carry veiled, or even explicit, threats of nuclear action.
    Mukesh Gupta/AFP via Getty Images

    The behavior of individual countries also points to strains on the treaty. Russia’s persistent nuclear threats during its war on Ukraine show how deeply possessors may still rely on these weapons as tools of coercive foreign policy. North Korea continues to wield its nuclear arsenal in ways that undermine international security. Iran might consider proliferation to deter future Israeli and U.S. strikes on its nuclear facilities.

    Still, I would argue that declaring the treaty to be dead is simply premature. Critics have predicted its demise since the treaty’s inception in 1968. While many countries have growing frustrations with the existing system of nonproliferation, most still see more benefit in staying in the treaty than in walking away from it.

    The treaty may be embattled, but it remains intact. Worryingly, the world today appears far removed from the vision of avoiding nuclear catastrophe that Hiroshima and Nagasaki helped awaken. As nuclear dangers intensify and disarmament stalls, moral clarity risks fading into ritual remembrance.

    I believe that for the sake of humanity’s future, the tragedies of the atomic bombings must remain a stark and unmistakable warning, not a precedent. Ultimately, the Nuclear Nonproliferation Treaty’s continued relevance depends on whether nations still believe that shared security begins with shared restraint.

    The Conversation

    Stephen Herzog does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The treaty meant to control nuclear risks is under strain 80 years after the US bombings of Hiroshima and Nagasaki – https://theconversation.com/the-treaty-meant-to-control-nuclear-risks-is-under-strain-80-years-after-the-us-bombings-of-hiroshima-and-nagasaki-262164

  • From printing presses to Facebook feeds: What yesterday’s witch hunts have in common with today’s misinformation crisis

    Source: ForeignAffairs4

    Source: The Conversation – USA (3) – By Julie Walsh, Whitehead Associate Professor of Critical Thought and Associate Professor of Philosophy, Wellesley College

    An illustration from ‘The History of Witches and Wizards,’ published in 1720, depicting witches offering wax dolls to the devil. Wellcome Collection/Wikimedia Commons

    Between 1400 and 1780, an estimated 100,000 people, mostly women, were prosecuted for witchcraft in Europe. About half that number were executed – killings motivated by a constellation of beliefs about women, truth, evil and magic.

    But the witch hunts could not have had the reach they did without the media machinery that made them possible: an industry of printed manuals that taught readers how to find and exterminate witches.

    I regularly teach a class on philosophy and witchcraft, where we discuss the religious, social, economic and philosophical contexts of early modern witch hunts in Europe and colonial America. I also teach and research the ethics of digital technologies.

    These fields aren’t as different as they seem. The parallels between the spread of false information in the witch-hunting era and in today’s online information ecosystem are striking – and instructive.

    Birth of a publishing empire

    The printing press, invented around 1440, revolutionized how information spread – helping to create the era’s equivalent of a viral conspiracy theory.

    By 1486, two Dominican friars had published the “Malleus Maleficarum,” or “Hammer of Witches.” The book has three central claims that came to dominate the witch hunts.

    A yellowed title page from a manuscript, with print in black and red ink.
    A 1669 edition of ‘Malleus Maleficarum.’
    Wellcome Collection/Wikimedia Commons, CC BY-SA

    First, it describes women as morally weak and therefore more likely to be witches. Second, it tightly links witchcraft with sexuality. The authors claim that women are sexually insatiable – part of what leads them to witchcraft. Third, witchcraft involves a pact with the devil, who tempts would-be witches through pleasures such as orgies and sexual favors. After establishing these “facts,” the authors conclude with instructions for interrogating, torturing and punishing witches.

    The book was a hit. It had more than two dozen editions and was translated into multiple languages. While “Malleus Maleficarum” was not the only text of its kind, its influence was enormous.

    Prior to 1500, witch hunts in Europe were rare. But after the “Malleus Maleficarum,” they picked up steam. Indeed, new printings of the book correlate with surges in witch-hunting in Central Europe. The book’s success wasn’t just about content; it was about credibility. Pope Innocent VIII had recently affirmed the existence of witches and conferred authority on inquisitors to persecute them, giving the book further authority.

    Ideas about witches from earlier texts and folklore – such as the claim that witches could use spells to make penises vanish – were recycled and repackaged in the “Malleus Maleficarum,” which in turn served as a “source” for future works. It was often quoted in later manuals and woven into civic law.

    The popularity and influence of the book helped crystallize a new domain of expertise: demonologist, an expert on the nefarious activities of witches. As demonologists repeated one another’s spurious claims, an echo chamber of “evidence” was born. The identity of the witch was thus formalized: dangerous and decisively female.

    Skeptics fight back

    Not everyone bought into the witch hysteria. As early as 1563, dissenting voices emerged – though, notably, most didn’t argue that witches weren’t real. Instead, they questioned the methods used to identify and prosecute them.

    A faded painting of a bald man with a mustache, wearing a white ruff, heavy necklace, and red robe.
    Essayist Michel de Montaigne, painted around 1578 by an unknown artist.
    Conde Museum/Wikimedia Commons

    Dutch physician Johann Weyer argued that women accused of witchcraft were suffering from melancholia – what we might now call mental illness – and needed medical treatment, not execution. In 1580, French philosopher Michel de Montaigne visited imprisoned witches and concluded they needed “hellebore rather than hemlock”: medicine rather than poison.

    These skeptics also identified something more insidious: the moral responsibility of people spreading the stories. In 1677, English chaplain, physician and philosopher John Webster wrote a scathing critique, claiming that most demonologists’ texts were straightforward copy-and-paste jobs in which the authors repeated one another’s lies. The demonologists offered no original analysis, no evidence and no witnesses – failing to meet the standards of good scholarship.

    The cost of this failure was enormous. As Montaigne wrote, “The witches of my neighborhood are in mortal danger every time some new author comes along and attests to the reality of their visions.”

    Demonologists benefited from the social and political status associated with the popularity of their books. The financial benefit was, for the most part, enjoyed by the printers and booksellers – what today we refer to as publishers.

    Witch hunts petered out throughout the 1700s across Europe. Doubt about the standards of evidence, and increased awareness that accused “witches” may have been suffering from delusion, were factors in the end of the persecution. The skeptics’ voices were heard.

    Psychology of viral lies

    Early modern skeptics understood something we’re still grappling with today: Certain people are more vulnerable to believing extraordinary claims. They identified “melancholics,” people predisposed to anxiety and fantastical thinking, as particularly susceptible.

    Nicolas Malebranche, a 17th-century French philosopher, believed that our imaginations have enormous power to convince us of things that are not true – especially fear of invisible, malevolent forces. He noted that “extravagant tales of witchcraft are taken as authentic histories,” increasing people’s credulity. The more stories, and the more they were told, the greater the influence on the imagination. The repetition served as false confirmation.

    “If they were to cease punishing (women accused of witchcraft) and treat them as mad people,” Malebranche wrote, “in a little while they would no longer be sorcerers.”

    A printed book page labelled 'Witches Apprehended, Examined and Executed,' with a drawing of people submerging a woman in a river.
    The title page of a treatise on witchcraft from 1613.
    Wellcome Collection/Wikimedia Commons, CC BY-SA

    Today’s researchers have identified similar patterns in how misinformation and disinformation – false information intended to confuse or manipulate people – spread online. We’re more likely to believe stories that feel familiar, stories that connect to content we’ve previously seen. Likes, shares and retweets become proxies for truth. Emotional content designed to shock or outrage spreads far and fast.

    Social media channels are particularly fertile ground. Companies’ algorithms are designed to maximize engagement, so a post that receives likes, shares and comments will be shown to more people. The more viewers, the higher the likelihood of more engagement, and so on – creating a cycle of confirmation bias.

    Speed of a keystroke

    Early modern skeptics reserved their harshest criticism not for those who believed in witches but for those who spread the stories. Yet they were curiously silent on the ultimate arbiters and financial beneficiaries of what got printed and circulated: the publishers.

    Today, 54% of American adults get at least some news from social media platforms. These platforms, like the printing presses of old, don’t just distribute information. They shape what we believe through algorithms that prioritize engagement over accuracy: The more a story is repeated, the more priority it gets.

    The witch hunts offer a sobering reminder that delusion and misinformation are recurring features of human society, especially during times of technological change and social upheaval. As we navigate our own information revolution, those early skeptics’ questions remain urgent: Who bears responsibility when false information leads to real harm? How do we protect the most vulnerable from exploitation by those who profit from confusion and fear?

    In an age when anyone can be a publisher, and extravagant tales spread at the speed of a keystroke, understanding how previous societies dealt with similar challenges isn’t just academic – it’s essential.

    The Conversation

    Julie Walsh receives funding from the National Science Foundation.

    ref. From printing presses to Facebook feeds: What yesterday’s witch hunts have in common with today’s misinformation crisis – https://theconversation.com/from-printing-presses-to-facebook-feeds-what-yesterdays-witch-hunts-have-in-common-with-todays-misinformation-crisis-260995

  • The World Court just ruled countries can be held liable for climate change damage – what does that mean for the US?

    Source: ForeignAffairs4

    Source: The Conversation – USA (2) – By Lauren Gifford, Faculty, Ecosystem Science & Sustainability; Director, Soil Carbon Solutions Center, Colorado State University

    Ralph Regenvanu, climate change minister of Vanuatu, speaks outside the International Court of Justice in The Hague on July 23, 2025. John Thys/AFP via Getty Images

    The International Court of Justice issued a landmark advisory opinion in July 2025 declaring that all countries have a legal obligation to protect and prevent harm to the climate.

    The court, created as part of the United Nations in 1945, affirmed that countries must uphold existing international laws related to climate change and, if they fail to act, could be held responsible for damage to communities and the environment.

    The opinion opens a door for future claims by countries seeking reparations for climate-related harm.

    But while the ruling is a big global story, its legal effect on the U.S. is less clear. We study climate policies, law and solutions. Here’s what you need to know about the ruling and its implications.

    Why island nations called for a formal opinion

    The ruling resulted from years of grassroots and youth-led organizing by Pacific Islanders. Supporters have called it “a turning point for frontline communities everywhere.”

    Small island states like Vanuatu, Tuvalu, Barbados and others across the Pacific and Caribbean are among the most vulnerable to climate change, yet they have contributed little to global emissions.

    Waves send spray higher than houses and lap at the edges of homes, with palm trees in the background.
    Waves hit the shore in Majuro, the capital of the Marshall Islands, during a storm on Nov. 27, 2019. Waves inundated parts of the island, washing rocks and debris into roads.
    Hilary Hosia/AFP via Getty Images

    For many of them, sea-level rise poses an existential threat. Some Pacific atolls sit just 1 to 2 meters above sea level and are slowly disappearing as waters rise. Saltwater intrusion threatens drinking water supplies and crops.

    Their economies depend on tourism, agriculture and fishing, all sectors easily disrupted by climate change. For example, coral reefs are bleaching more often and dying due to ocean warming and acidification, undermining fisheries, marine biodiversity and economic sectors such as tourism.

    When disasters hit, the cost of recovery often forces these countries to take on debt. Climate change also undermines their credit ratings and investor confidence, making it harder to get the money to finance adaptive measures.

    A satellite image of the Maldives islands.
    The Maldives, shown in a satellite image from 2020, has an average elevation of less than 5 feet (1.5 meters) above sea level. With limited land where people can live, the country has tried to build up new areas of its islands for housing.
    NASA Earth Observatory

    Tuvalu and Kiribati have discussed digital nationhood and leasing land from other countries so their people can relocate while still retaining citizenship. Some projections suggest nations like the Maldives or Marshall Islands could become largely uninhabitable within decades.

    For these countries, sea-level rise is taking more than their land – they’re losing their history and identity in the process. The idea of becoming climate refugees and separating people from their homelands can be culturally destructive, emotionally painful and politically fraught as they move to new countries.

    More than a nonbinding opinion

    The International Court of Justice, commonly referred to as the ICJ or World Court, can help settle disputes between states when requested, or it can issue advisory opinions on legal questions referred to it by authorized U.N. bodies such as the General Assembly or Security Council. The advisory opinion process allows its 15 judges to weigh in on abstract legal issues – such as nuclear weapons or the Israeli occupation of the Palestinian territories – without a formal dispute between states.

    While the court’s advisory opinions are nonbinding, they can still have a powerful impact, both legally and politically.

    The rulings are considered authoritative statements regarding questions of international law. They often clarify or otherwise confirm existing legal obligations that are binding.

    What the court decided

    The ICJ was asked to weigh in on two questions in this case:

    1. “What are the obligations of States under international law to ensure the protection of the climate system … from anthropogenic emissions of greenhouse gases?”

    2. “What are the legal consequences under these obligations for States where they, by their acts and omissions, have caused significant harm to the climate system?”

    In its 140-page opinion, the court cited international treaties and relevant scientific background to affirm that obligations to protect the environment are indeed a matter of international environmental law, international human rights law and general principles of state responsibility.

    The decision means that in the authoritative opinion of the international legal community, all countries are under an obligation to contribute to the efforts to reduce global greenhouse emissions.

    To the second question, the court found that in the event of a breach of any such obligation, three additional obligations arise:

    1. The country in breach of its obligations must stop its polluting activity, which would mean excess greenhouse gas emissions in this case.

    2. It must ensure that such activities do not occur in the future.

    3. It must make reparations to affected states in terms of cleanup, monetary payment and apologies.

    The court affirmed that all countries have a legal duty under customary international law, which refers to universal rules that arise from common practices among states, to prevent harm to the climate. It also clarified that individual countries can be held accountable, even in a crisis caused by many countries and other entities. And it emphasized that countries that have contributed the most to climate change may bear greater responsibility for repairing the damage under an international law doctrine called “common but differentiated responsibility,” which is commonly found in international treaties concerning the environment.

    While the ICJ’s opinion doesn’t assign blame to specific countries or trigger direct reparations, it may provide support for future legal action in both international and national courts.

    What does the ICJ opinion mean for the US?

    In the U.S., this advisory opinion is unlikely to have much legal impact, despite a long-standing constitutional principle that “international law is part of U.S. law.”

    U.S. courts rarely treat international law that has not been incorporated into domestic law as binding. And the U.S. has not consented to ICJ jurisdiction in previous climate cases.

    Contentious cases before international tribunals can be brought by one country against another, but they require the consent of all the countries involved. So there is little chance that the United States’ responsibility for climate harms will be adjudicated by the World Court anytime soon.

    Still, the court’s opinion sends a clear message: All countries are legally obligated to prevent climate harm and cannot escape responsibility simply because they aren’t the only nation to blame.

    The unanimous ruling is particularly remarkable given the current hostile political climate in the United States and other industrial nations around climate change and responses to it. It represents a particularly forceful statement by the international community that the responsibility to ensure the health of the global environment is a legal duty held by the entire world.

    The takeaway

    The ICJ’s advisory opinion marks a turning point in the global effort to hold countries responsible for climate change.

    Vulnerable countries now have a more concrete, legally grounded basis to claim rights and press for accountability for historical and ongoing climate harm – including financial claims.

    How it will be used in the coming years remains unclear, but the opinion gives small island states in particular a powerful narrative and a legal tool set.

    The Conversation

    Lauren Gifford receives funding from the National Science Foundation and the US Department of Agriculture.

    Daimeon Shanks-Dumont does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The World Court just ruled countries can be held liable for climate change damage – what does that mean for the US? – https://theconversation.com/the-world-court-just-ruled-countries-can-be-held-liable-for-climate-change-damage-what-does-that-mean-for-the-us-262272

  • Historian uncovers evidence of second mass grave of Irish immigrant railroaders in Pennsylvania who suffered from cholera, violence and xenophobia

    Source: ForeignAffairs4

    Source: The Conversation – USA – By William E. Watson, Professor of History, Immaculata University

    Caskets of Irish railroaders whose remains were excavated from a mass grave outside Philadelphia. AP Photo/Matt Rourke

    When commuters on the R5 SEPTA train that connects suburban Chester County to Philadelphia approach Malvern station, they might spot a square stone monument on the right side in a clearing surrounded by a thick stand of forest.

    Above it, a sign paid for by the Amtrak electrical workers union and suspended from the trees reads:

    BURIAL PLOT OF IRISH RAILROAD WORKERS: At this site, known as Duffy’s Cut, fifty-seven Irish immigrant railroad workers from the counties of Donegal, Tyrone, Derry and Leitrim died of cholera and murder in the summer of 1832.

    I’m a historian at Immaculata University, about one mile west of Duffy’s Cut. In 2004, my colleagues and I were the ones to discover the mass grave when we excavated the site with the permission of the Pennsylvania Historical and Museum Commission.

    My students, who were about the same age as Duffy’s workers in 1832, provided a great deal of the labor at the excavation.

    More recently, in May 2025, we discovered human remains that suggest a second Irish immigrant railroader mass grave 11 miles west of Duffy’s Cut, in Downingtown.

    Commuter train passes wooded area with large rocks
    A SEPTA commuter train passes Duffy’s Cut in Malvern, Pa.
    William E. Watson, CC BY-NC-SA

    57 dead railroaders

    Duffy’s Cut was named after an Irish Catholic immigrant railroad contractor named Philip Duffy, who lived from 1783-1871 and was probably from County Donegal in northwest Ireland.

    I learned about the site and its possible mass grave from Pennsylvania Railroad documents that survived in my family.

    A 1909 file, labeled “History of Duffy’s Cut Stone enclosure east of Malvern, Pennsylvania, which marks the burial place of 57 track laborers who were victims of the cholera epidemic of 1832,” was compiled by future Pennsylvania Railroad president Martin W. Clement when he was an assistant supervisor. My grandfather, who was Clement’s executive assistant and later director of personnel, obtained the file before the records were auctioned off in 1972, and my brother showed me the file in 2002.

    The Philadelphia & Columbia Railroad, the predecessor of the Pennsylvania Railroad, wanted to shorten the travel time from Philadelphia to Pittsburgh from three to four weeks by Conestoga wagon to three to four days by rail, canal and river.

    The file my brother had in his possession stated that the dead railroaders at Duffy’s Cut were young men, recently arrived from Ireland. It also said the cost of mile 59 was vastly more expensive than the typical Philadelphia & Columbia Railroad mile. Laying a typical mile of P&C railroad cost US$5,000 in the 1830s. But at mile 59, gouging the landscape with a “cut” to lay the tracks on level ground and bridging the valley with a fill – an earthen bridge – cost $32,000. Although the work was especially difficult, the common laborers received about 25 cents a day.

    Artifact that looks like smooth stick with engraved with the word 'Derry'
    Fragment of an Irish-made clay pipe unearthed near the Duffy’s Cut mass grave.
    AP Photo/Matt Rourke

    Most of the men had sailed from the city of Derry in the north of Ireland to Philadelphia from April to June of 1832 aboard the John Stamp. The ship pulled into the Lazaretto quarantine station on the Delaware River in Essington, Pennsylvania, before sailing on to Philadelphia.

    No one on the John Stamp was reported to be ill. This was the height of the 1832 cholera epidemic that ultimately killed at least 10,000 people in the U.S.

    Forty-seven laborers from the John Stamp joined 10 other Irish immigrant workers who were already living with Duffy in a rental house in Willistown, a mile south of the work site.

    Yet almost as soon as they arrived at the work camp at mile 59, so did cholera, which had spread to Philadelphia from New York City.

    Cholera in the camps

    Americans could read about the spread of cholera across Europe in 1831 in the newspapers, but very little was known about the disease until decades later.

    Cholera is a bacterial infection that spreads due to poor sanitary practices in which human feces get into drinking water, via excrement passed into streams or by seepage from outhouses to wells.

    But in 1832, people believed cholera was linked to intemperance and vice, which were thought to weaken the body. According to the prevailing miasma theory, the disease spread through foul air. Immigrants and the poor were thought to be especially susceptible to the disease and primary vectors for its spread.

    Cholera causes extreme diarrhea and vomiting that lead to rapid electrolyte loss. In 1832 it was fatal in about 50% of cases. In the Delaware Valley, cholera cases mounted from July into August 1832. Philadelphia registered its peak number of cases, 173, on Aug. 6 and peak number of deaths, 76, on Aug. 7. The hardest-hit areas in the region were working-class neighborhoods and canal and railroad work camps.

    A typical crew on a P&C mile numbered 100 to 120 men. However, the work by Irish immigrants was segregated along sectarian lines on the railroads in the U.S., as it was in the Belfast dockyards at the same time. The other half of the workers at mile 59, according to Canal Commission reports, were Irish Protestant immigrants who worked for an Irish Protestant contractor and did the less dangerous work of laying tracks. They did not die of cholera.

    Four men working in wooded area
    The author, second from left, and his team at the dig site at Duffy’s Cut in 2011.
    William E. Watson, CC BY-NC-SA

    Signs of a massacre

    To excavate the site, we partnered with the Chester County Emerald Society, a law enforcement group that cleared our work with the county district attorney, and the coroner, in case we found human remains. The University of Pennsylvania Museum provided ground-penetrating radar, as well as archaeological and anthropological assistance for the dig. Staff trained my students in how to properly excavate and handle artifacts and bones.

    Our research team uncovered seven sets of remains between 2009 and 2012 in the remaining eastern portions of the fill. The skeletons had been buried in coffins sealed with an exceptional number of nails, perhaps to contain the cholera.

    Analysis at the UPenn Museum showed evidence of violence to each of the skulls – with one skull showing both an ax blow and a bullet encased in the skull. Researchers found no evidence of defensive wounds on any skeleton, suggesting that the men might have been tied up before being killed.

    After our team analyzed the remains, we came to the startling conclusion that the men didn’t die from cholera – they were massacred.

    I believe that fear of cholera, an epidemic that some clergymen in America and England called “a chastisement for the sins of the people,” and anti-immigrant sentiment fueled violence against them by native-born populations.

    After forensic examinations of the remains, five of the skeletons were reburied during a ceremony at West Laurel Hill Cemetery in Bala Cynwyd in 2012. My team determined the identities of two of the deceased – 18-year-old John Ruddy from County Donegal and 29-year-old Catherine Burns, the daughter of one of the workers, from County Tyrone – and their remains were returned to their home counties in Ireland in 2013 and 2015.

    Man wearing red, purple and white vestments shown incensing caskets as crowd of people look on
    Bishop Michael J. Fitzgerald takes part in a funeral at West Laurel Hill Cemetery in 2012 for the five 19th-century Irish immigrants whose remains were excavated from the Duffy’s Cut site.
    AP Photo/Matt Rourke

    A second mass grave in Chester County

    Historical records led us to what we believe is a second mass grave in Chester County.

    An article in the Nov. 7, 1832, issue of the Village Record newspaper in West Chester reported that one man from Duffy’s Cut fled westward down the unfinished track line to another Irish immigrant railroader crew “near the line of East Bradford and East Caln.”

    This was P&C mile 48 in Downingtown, Pennsylvania. It was under the direction of Irish immigrant contractor Peter Connor, whose crew of 100 to 120 men was reported to have all died around the same time as Duffy’s crew.

    Decades later, Charles Pennypacker’s 1909 “History of Downingtown” recorded that the dead Irishmen in Downingtown were carted north to a field where they were buried in a mass grave on the property of present-day Northwood Cemetery, “in the eastern part of the cemetery, near the gully.”

    Fragments of bones shown in container lined with purple satin
    File photo from March 24, 2009, shows bones recovered from the mass grave at Duffy’s Cut.
    AP Photo/Matt Rourke, File

    On May 15, 2025, the Duffy’s Cut team unearthed the first human remains from the Downingtown crew in the exact place reported by Pennypacker. This work has just started.

    Up and down the East Coast, there are numerous mass graves of anonymous workers who died of epidemics and overwork in the 1820s and 1830s. Most of those people will never have their stories told.

    At Duffy’s Cut, and now at the Downingtown site, we hope to humanize some of the hardworking immigrants who died building a crucial part of America’s industrial landscape.

    Visitors can view artifacts found at Duffy’s Cut at the Duffy’s Cut Museum in the Gabriele Library at Immaculata University in Malvern, Pa.

    Read more of our stories about Philadelphia and Pennsylvania.

    The Conversation

    William E. Watson serves as the unpaid director of a 501(c)(3) educational nonprofit and in 2016 served as director of an NEH summer teachers’ institute at Immaculata University.

    ref. Historian uncovers evidence of second mass grave of Irish immigrant railroaders in Pennsylvania who suffered from cholera, violence and xenophobia – https://theconversation.com/historian-uncovers-evidence-of-second-mass-grave-of-irish-immigrant-railroaders-in-pennsylvania-who-suffered-from-cholera-violence-and-xenophobia-261442

  • Quantum scheme protects videos from prying eyes and tampering

    Source: ForeignAffairs4

    Source: The Conversation – USA – By Yashas Hariprasad, Assistant Professor of Computer Science, California State University, East Bay


    Quantum physics enables hack-proof video transmission.
    sakkmesterke/iStock via Getty Images

    We have developed a new way to secure video transmissions so even quantum computers in the future won’t be able to break into private video livestreams or recordings. We are computer scientists who study computer security. Our research introduces quantum-safe video encryption, which combines two complementary techniques: quantum encryption and secure internet transmission.

    With our encryption system, a hacker wouldn’t be able to access or understand the video data because it’s scrambled using a quantum key that changes unpredictably. Cryptographic keys scramble data so that only someone with the correct key can unscramble it. If the hacker even tries to peek, the system detects it and raises an alarm. The video also travels in the digital equivalent of a locked box over the internet, so nobody can swap or tamper with it in transit.

    Quantum encryption scrambles video data using truly random cryptographic keys based on quantum physics. Unlike traditional encryption that relies on mathematical complexity, quantum encryption uses the fundamental unpredictability of quantum states to generate unbreakable keys.

    Quantum refers to the scale of atoms and molecules, which behave in counterintuitive ways. Quantum computers take advantage of these strange behaviors to solve problems that are difficult or impossible for ordinary computers.

    We combine this quantum encryption scheme with secure transmission over the internet using transport layer security. This is the encryption scheme used to keep connections between web browsers and web pages private.

    Our approach works by converting each video frame into a quantum image representation, essentially a mathematical framework that captures visual information in quantum states. We then scramble the data by combining it with quantum-generated random keys, making the encrypted video statistically indistinguishable from pure noise.
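    The article does not include code, but the scrambling step it describes – combining video data with truly random, single-use keys – behaves like a classical one-time pad. Here is a minimal sketch, with the operating system’s cryptographic random generator standing in for a quantum key source (an assumption for illustration; the authors’ quantum image representation is not reproduced here):

    ```python
    import secrets

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        """XOR each data byte with the matching key byte.
        Applying the same key a second time recovers the original data."""
        assert len(key) == len(data)
        return bytes(d ^ k for d, k in zip(data, key))

    # Hypothetical stand-in: secrets.token_bytes plays the role of the
    # quantum random-key source described in the article.
    frame = bytes(range(32))              # mock pixel data for one video frame
    key = secrets.token_bytes(len(frame)) # truly unpredictable, same length

    ciphertext = xor_bytes(frame, key)    # looks like pure noise without the key
    recovered = xor_bytes(ciphertext, key)
    assert recovered == frame
    ```

    With a key that is genuinely random, as long as the data and never reused, the ciphertext carries no statistical trace of the original frame, which is why the scheme’s output is described as indistinguishable from noise.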

    Quantum encryption explained.

    And because quantum encryption is resistant to future technology such as quantum computers, that video is safe for years to come.

    Why it matters

    Today’s encryption works well – until tomorrow’s quantum computers arrive. These super-powerful machines will be able to crack most current encryption methods in seconds. That means today’s private videos, stored on cloud platforms or transmitted over the internet, could be decrypted years from now.

    More dangerously, these stolen videos can be manipulated into deepfakes: AI-generated videos that can make anyone appear to say or do anything. A forged video can ruin reputations, sway decisions and even incite violence. A secure encryption system not only protects privacy, it helps protect truth.

    What other research is being done

    Researchers around the world are exploring quantum key distribution to securely share encryption keys. Others use chaos theory, deep learning or hybrid algorithms to secure video and image content.

    But most existing work focuses on images, or only on key exchange, without fully securing live or stored video data.

    What’s next

    We’re working toward scaling this system to encrypt full video files and real-time video streams, such as those used in video conferencing and surveillance systems.

    Next steps include reducing the performance overhead for smoother playback and testing the system in real-world environments. We’re also exploring how it can work alongside deepfake detection tools, so we not only stop hackers from accessing videos but also prove the videos haven’t been altered.

    While our framework shows strong early results, practical use will depend on phased adoption as quantum systems become more accessible over the years.

    The Research Brief is a short take on interesting academic work.

    The Conversation

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Quantum scheme protects videos from prying eyes and tampering – https://theconversation.com/quantum-scheme-protects-videos-from-prying-eyes-and-tampering-261049

  • As wrestling fans reel from the sudden death of Hulk Hogan, a cardiologist explains how to live long and healthy − and avoid chronic disease

    Source: ForeignAffairs4

    Source: The Conversation – USA (3) – By William Cornwell, Associate Professor of Cardiology, University of Colorado Anschutz Medical Campus

    Hulk Hogan’s international fame as a wrestling superstar began in the 1980s. This photo is from 2009. Paul Kane via Getty Images Entertainment

    On July 24, 2025, the American pro wrestling celebrity Hulk Hogan, whose real name was Terry Bollea, died at the age of 71. Hogan had chronic lymphocytic leukemia and a history of atrial fibrillation, or A-fib, a condition in which the upper chambers of the heart, or atria, beat irregularly and often rapidly. His cause of death has been confirmed as acute myocardial infarction, commonly known as a heart attack.

    Hogan became a household name in the 1980s and was long known for maintaining fitness and a highly active lifestyle, despite having had 25 surgeries in 10 years, including a neck surgery in May.

    Hogan’s death has brought renewed attention to the importance of maintaining heart health through exercise. Many people think that bodybuilders are the “picture” of health. However, the truth is that too much muscle can increase strain on the heart and may actually be harmful. It may seem ironic, then, that people who exercise to extreme levels and appear healthy on the outside can, in fact, be quite unhealthy on the inside.

    As the director of sports cardiology at the University of Colorado Anschutz Medical Campus, I see patients of all age groups and at varying levels of fitness who are interested in promoting health by incorporating exercise into their lifestyle, or by optimizing their current exercise program.

    Two older women exercising together in a park.
    More exercise and less sedentary behavior reduces the risk of heart disease, stroke, cancer and dementia.
    andreswd/E+ via Getty Images

    Exercise is the foundation for good health

    When people think of vital signs, they usually think about things such as heart rate, blood pressure, temperature, breathing rate and blood oxygen levels. However, the American Heart Association also includes “fitness” as an additional vital sign that should be considered when determining a patient’s overall health and risk of heart disease, cancer and death.

    While fitness may be determined in various ways, the best way is by checking what is known as peak oxygen uptake, or VO2 max, through a specialized evaluation called a cardiopulmonary exercise test. This test can be performed at many doctors’ offices and clinics, and it provides a wealth of information related to overall health, as well as heart, lung and skeletal muscle function.

    Exercise is one of the most effective interventions to prolong life and reduce the risk of developing chronic diseases throughout life – in effect, prolonging lifespan and improving health span, meaning the number of years that people spend in good health.

    In fact, a large study done by the Cleveland Clinic found that a low level of fitness poses a greater risk of death over time than other traditional risk factors that people commonly think of, such as smoking, diabetes, coronary artery disease and severe kidney disease.

    When it comes to brain health, the American Stroke Association emphasizes the importance of routine exercise and avoiding sedentary behavior in their 2024 guidelines on primary prevention of stroke. The risk of stroke increases with the amount of sedentary time spent throughout the day and also with the amount of time spent watching television, particularly four hours or more per day.

    Regarding cognitive decline, the Alzheimer’s Society states that regular exercise reduces the risk of dementia by almost 20%. Furthermore, the risk of Alzheimer’s disease is twice as high among individuals who exercise the least, when compared to individuals who exercise the most.

    There is also strong evidence that regular exercise reduces the risk of certain types of cancer, especially colon, breast and endometrial cancer. This reduction in cancer risk is achieved through several mechanisms.

    For one, obesity is a risk factor for up to 13 forms of cancer, and excess body weight is responsible for about 7% of all cancer deaths. Regular exercise helps to maintain a healthy weight.

    Second, exercise helps to keep certain hormones – such as insulin and sex hormones – within a normal range. When these hormone levels get too high, they may increase cancer cell growth. Exercise also helps to boost the immune system by improving the body’s ability to fight off pathogens and cancer cells. This in turn helps prevent cancer cell growth and also reduces chronic inflammation, which, left unchecked, damages tissue and increases cancer risk.

    Finally, exercise improves the quality of life for all people, regardless of their health or their age. In 2023, Hulk Hogan famously quipped, “I’m 69 years old, but I feel like I’m 39.”

    7,000 steps is just over 3 miles – depending on your pace, that’s about 40 to 60 minutes of walking.

    The optimal dose of exercise

    Major health organizations, such as the American Heart Association, American Cancer Society and Department of Health and Human Services, all share similar recommendations when it comes to the amount of exercise people should aim for.

    These organizations all recommend doing at least 150 minutes per week of moderate-intensity exercise, or at least 75 minutes per week of vigorous-intensity exercise. Moderate exercises include activities such as walking briskly (2.5 to 4 miles per hour), playing doubles tennis or raking the yard. Vigorous exercise includes activities such as jogging, running or shoveling snow.

    A good rule of thumb for gauging the intensity of a specific exercise is the “talk test”: During moderate-intensity exercise, you can talk, but not sing, during the activity. During vigorous-intensity exercise, you can say only a few words before having to stop and take a breath.

    There is a lot of solid data to support these recommendations. For example, in a very large analysis of about 48,000 people followed for 30 years, the risk of death from any cause was about 20% lower among those who followed the physical activity guidelines for Americans.

    Life can be busy, and some people may find it challenging to squeeze in at least 150 minutes of exercise throughout the course of the week. However, “weekend warriors” – people who cram all their exercise into one to two days over the weekend – still receive the benefits of exercise. So, a busy lifestyle during the week should not prevent people from doing their best to meet the guidelines.

    What about the number of steps per day? In a new analysis in The Lancet, when compared with walking only 2,000 steps per day, people who walked 7,000 steps per day had a 47% lower risk of death from any cause, a 25% lower risk of developing heart disease, about a 50% lower risk of death from heart disease, a 38% lower risk of developing dementia, a 37% lower risk of dying from cancer, a 22% lower risk of depression and a 28% lower risk of falls.

    Historically, people have aimed for 10,000 steps per day, but this new data indicates that there are tremendous benefits gained simply from walking 7,000 steps daily.

    It’s never too late to start

    One question that many patients ask me – and other doctors – is: “Is it ever too late to start exercising?” There is great data to suggest that people can reap the benefits even if they don’t begin an exercise program until their 50s.

    Being sedentary while aging will cause the heart and blood vessels to stiffen. When that happens, blood pressure can go up and people may be at greater risk of heart attacks, strokes or heart failure.

    However, in a study of previously sedentary adults with an average age of 53, two years of regular exercise reversed the age-related stiffening of the heart that otherwise occurs in the absence of routine exercise.

    And it is important to remember that you do not have to look like a body builder or fitness guru in order to reap the benefits of exercise.

    Almost three-quarters of the total benefit to heart, brain and metabolic health that can be gained from exercise will be achieved just by following the guidelines.

    The Conversation

    William Cornwell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. As wrestling fans reel from the sudden death of Hulk Hogan, a cardiologist explains how to live long and healthy − and avoid chronic disease – https://theconversation.com/as-wrestling-fans-reel-from-the-sudden-death-of-hulk-hogan-a-cardiologist-explains-how-to-live-long-and-healthy-and-avoid-chronic-disease-262103

  • Shingles vaccination rates rose during the COVID-19 pandemic, but major gaps remain for underserved groups

    Source: ForeignAffairs4

    Source: The Conversation – USA (3) – By Jialing Lin, Research fellow in Health Systems, International Centre for Future Health Systems, UNSW Sydney

    The CDC recommends shingles vaccination for all adults age 50 and older. xavierarnau/E+ via Getty Images

    Vaccination against shingles increased among adults age 50 and older in the U.S. during the COVID-19 pandemic, but not equally across all population groups. That’s the key finding from a new study my colleagues and I published in the journal Vaccine.

    Shingles is caused by the reactivation of the varicella-zoster virus, the same virus that causes chickenpox. It leads to a painful rash and potentially serious complications – especially in older adults – such as persistent nerve pain, vision loss and neurological problems. While antiviral treatments can ease symptoms, vaccination is the most effective way to prevent shingles.

    We analyzed nationally representative survey data from almost 80,000 adults age 50 and over between 2018 and 2022, collected by the Centers for Disease Control and Prevention to monitor the health of the U.S. population. The survey tracked vaccination rates in people of different ethnic backgrounds as well as other factors such as sex, household income and the presence of chronic conditions like diabetes and cardiovascular disease.

    The uptake of shingles vaccines rose notably during the pandemic – from 25.1% of people for whom it is recommended in 2018-2019, to 30.1% during 2020-2022. We observed this overall increase across nearly all groups in our study.

    We saw the greatest relative increases among groups that historically have had lower rates of shingles vaccination. These included adults ages 50-64, men, people from racial and ethnic minority groups such as non-Hispanic Black adults, those with lower household incomes, current smokers and people without chronic conditions like cancer or arthritis.

    Red bumpy skin rash caused by shingles
    Shingles is caused by the same virus that causes chickenpox. It leads to a painful rash and other potentially serious complications.
    Irena Sowinska/Moment via Getty Images

    Why it matters

    In the U.S., the CDC recommends shingles vaccination for all adults age 50 and older. However, uptake has been low, partly due to limited awareness, cost concerns and missed opportunities during routine health care visits.

    The COVID-19 pandemic, while disruptive, may have inadvertently created new opportunities to improve adult vaccination uptake, particularly among groups with historically low uptake of the shingles vaccine. Factors contributing to this shift likely included heightened public awareness of the importance of vaccination, more frequent health care encounters, especially during COVID-19 vaccine rollouts, and the expanded availability of adult vaccines in pharmacies and primary care settings.

    Replacing the older, less effective live attenuated zoster vaccine, called Zostavax, with the newer, non-live zoster vaccine, Shingrix, in 2020 also played a role. Public health campaigns that promoted co-administration of vaccines and launched targeted outreach to underserved populations further contributed to these gains.

    However, major inequities persist. While shingles vaccination rates improved across the board, groups that had lower uptake before the pandemic continued to lag behind wealthier, non-Hispanic white populations with greater health care access. Overall, the vaccination rate for shingles is still low – below other vaccines such as the flu vaccine.

    This gap reflects long-standing disparities in getting needed health care, which became even more prominent during the pandemic. It also highlights the need for fairer policies and customized outreach efforts to underserved communities that build trust and raise awareness about the health benefits of the shingles vaccine.

    What still isn’t known

    Although the upward trend we observed is encouraging, several questions remain. For example, we could not tell from the survey data we worked with whether participants received both doses of the Shingrix vaccine. Both are needed for full protection against shingles.

    Nor could we tell whether participants received the shingles vaccine alongside their COVID-19 vaccination. Receiving multiple vaccines at a single health care visit makes vaccination more convenient and may boost vaccine uptake by reducing the number of needed visits. Also unknown is how immunocompromised people fared during this period. Current guidelines recommend that immunocompromised adults regardless of age also receive the shingles vaccine, but the data only included adults age 50 and over.

    Addressing these questions in future studies would help public health experts develop strategies to encourage more eligible people to receive the shingles vaccine.

    The Research Brief is a short take on interesting academic work.

    The Conversation

    Jialing Lin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Shingles vaccination rates rose during the COVID-19 pandemic, but major gaps remain for underserved groups – https://theconversation.com/shingles-vaccination-rates-rose-during-the-covid-19-pandemic-but-major-gaps-remain-for-underserved-groups-262020

  • A university bookshop in Ibadan tells the story of Nigeria’s rich publishing culture

    Source: ForeignAffairs4

    Source: The Conversation – Africa – By Tinashe Mushakavanhu, Assistant Professor, Harvard University

    Driven by a desire to explore Nigeria’s literary and cultural history beyond the metropolis of Lagos, I took a road trip to Ibadan, once the most important university town in the country. Ibadan, in Oyo State, was the first city in Nigeria to have a university, established in 1948.

    Ibadan is where the Mbari Club once gathered, an experimental space where Nigerian writers, artists and thinkers – among them Chinua Achebe, Wole Soyinka, JP Clark, Christopher Okigbo, Uche Okeke, Bruce Onobrakpeya, Mabel Segun and South Africa’s Es’kia Mphahlele – met, debated and dreamed in the 1960s and 70s.

    It’s the city where celebrated Nigerian artist and architect Demas Nwoko imagined and built his utopias. Where the Oxford University Press and Heinemann Educational Books established their west African headquarters.




    Read more:
    Chimamanda’s Lagos homecoming wasn’t just a book launch, it was a cultural moment


    Books have always been a form of cultural currency in Ibadan. The presence of major publishers meant that bookshops were not just retail outlets, but intellectual salons, sites of encounter and exchange.

    So while in Ibadan I visited cultural spaces and independent bookshops, but it was the charm of the university campus that most captured my imagination. And my favourite place was the University of Ibadan Bookshop. At this campus bookshop I lingered the most, in awe and wonder. Its eclectic range of books, journals, public lecture pamphlets, novels, poetry collections and monographs excited me.

    Today, when the global publishing economy has increasingly digitised and centralised, the bookshop feels almost radical just by existing. It’s a reminder that intellectual life in Africa is not peripheral or derived from the west. It is present, prolific and profoundly local. To walk through the shelves of this bookshop was to encounter a history of African thought written and produced on its own terms.

    As a scholar of African literature and archives, my research traces the hidden lives of spaces that have shaped publishing and archives. University bookshops have been overlooked but are essential nodes in the continent’s intellectual history.

    A snapshot of Nigeria

    This campus bookshop gives a snapshot of Nigeria as a print country. Here we witness the nation through its printed matter. A nation of prolific publishing. I found the literary output in the Ibadan campus bookshop not only vast but exuberant and unrelenting. It reflects the texture of the Nigerian personality: loud, boisterous, layered and insistent. Stacks upon stacks of books.

    In these stacks, it dawned on me that beneath the surface lies a vibrant, ongoing literary discourse that is unmistakably Nigerian, and sadly not resonant far beyond its borders. These are books you don’t see on reference lists of “popular” and “influential” scholarship that privileges work produced and imported to Africa from the Euro-American academy.

    I was especially intrigued with how the Nigerian academic and writer does not tire in producing academic and cultural journals. There are journals for every subject under the sun.

    While the critical framework of African literature is too often shaped by the global north (see critiques by Ato Quayson, Biodun Jeyifo, Simon Gikandi and Grace Musila), in Ibadan I saw a distinctly local and deeply African critical discourse rooted in place, language and lived experience. To walk into the University of Ibadan Bookshop is to step into legacy. Its shelves bear the weight of decades of African thought, theory and storytelling.

    Despite being housed in an ageing building, it has stayed defiant. Even though floods destroyed books and computers worth a small fortune in 2019, the bookshop is still standing proudly. And there was pride too among the staff who were eager to help or answer any questions about the books.

    More than bookshops

    The University of Ibadan bookshop reminded me of the bookshop from my undergraduate days in Zimbabwe. Even though our campus bookshop was much smaller, I used to find pleasure going there in between lectures. It often felt like walking into a vault of African knowledge and memory.

    Our bookshop at Midlands State University stocked old, canonical books alongside current literature. On occasion, rare, out-of-print secondhand books would appear on the shelves. The bargain sales also meant I spent most of my money there.

    But to call these spaces on African university campuses “bookshops” hardly does them justice. They are hybrid cultural ecosystems that function as part bookshop, part print shop, stationer, library and sometimes even archive. They have long served as vital nodes in the circulation of African knowledge and thought.

    Yet this ecosystem is rapidly eroding, undermined by the rise of internet culture, artificial intelligence, piracy and harsh economic conditions. The result is a slow but devastating disappearance of African intellectual memory. As scholars remind us, digital platforms are not neutral. They are structured by algorithms that often marginalise black and African knowledge. So, the loss of these analogue spaces is more than nostalgic, it is epistemic erasure.

    In this digital age, there is something vital about the physical presence of bookshops on African campuses. Thanks to them, as a student, for me literature was the serendipity of discovery, the tactile feel of books, the beautiful persistence of a local knowledge system that was relatable and produced by people like me.




    Read more:
    Nigerian architect Demas Nwoko on his award-winning work: ‘Whatever you build, it should suit your culture’


    On the way out of the city, we stopped at Bower’s Tower. From there you can see Ibadan’s sprawling layout, the ancient hills on which the settlement was built, and its red roofs.

    The view reflected the complexity and density of ideas the city has nurtured. And despite shifts in Nigeria’s publishing geography from here to Lagos and Abuja, Ibadan still matters. It’s a city that remembers, that archives, that holds on to knowledge.

    The Conversation

    Tinashe Mushakavanhu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. A university bookshop in Ibadan tells the story of Nigeria’s rich publishing culture – https://theconversation.com/a-university-bookshop-in-ibadan-tells-the-story-of-nigerias-rich-publishing-culture-262050

  • The African activists who challenged colonial-era slavery in Lagos and the Gold Coast

    Source: ForeignAffairs4

    Source: The Conversation – Africa – By Michael E Odijie, Associate Professor, University of Oxford

    When historians and the public think about the end of domestic slavery in west Africa, they often imagine colonial governors issuing decrees and missionaries working to end local traffic in enslaved people.

    Two of my recent publications tell another part of the story. I am a historian of west Africa, and over the past five years, I have been researching anti-slavery ideas and networks in the region as part of a wider research project.

    My research reveals that colonial administrations continued to allow domestic slavery in practice and that African activists fought this.

    In one study I focused on Francis P. Fearon, a trader based in Accra, the Ghanaian capital. He exposed pro-slavery practices within the colonial government through numerous letters written in the 1890s (when the colony was known as the Gold Coast).

    In another study I examined the Lagos Auxiliary, a coalition of lawyers, journalists and clergy in Nigeria. Their campaigning secured the repeal of Nigeria’s notorious Native House Rule Ordinance in 1914. That ordinance had been enacted by the colonial government to maintain local slavery in the Niger Delta region.

    Considered together, the two studies demonstrate how local campaigners used letters, print culture, imperial pressure points and personal networks to oppose practices that had kept thousands of Africans in bondage.

    The methods Fearon and the Lagos Auxiliary pioneered still matter because they show how marginalised communities can compel power‑holders to close the gap between laws and lived reality. They remind us that well‑documented local testimony, amplified trans-nationally, can still overturn official narratives, compel policy change, and keep institutions honest.

    Colonial ‘abolition’ that wasn’t

    West Africa was a major source of enslaved people during the transatlantic slave trade. The transatlantic trade was suppressed in the early 19th century, but this did not bring an end to domestic slavery.

    One of the principal rationales for colonisation in west Africa was the eradication of domestic slavery.

    Accordingly, when the Gold Coast was formally annexed as a British colony in 1874, the imperial government declared slave dealing illegal. And slave-dealing was criminalised across southern Nigeria in 1901. On paper these measures promised freedom, but in practice loopholes empowered slave-holders, chiefs and colonial officials who continued to demand coerced labour.

    On the Gold Coast, the 1874 abolition law was never enforced. The British governor informed slave-owners that they might retain enslaved persons provided those individuals did not complain. By 1890, child slavery had become widespread in towns such as Accra. According to the local campaigners, it was even sanctioned by the colonial governor. This led to some Africans uniting to establish a network to oppose it.

    The Niger Delta region of Nigeria had a similar experience. The colonial administration enacted the Native House Rule Ordinance to counteract the effects of the Slave-Dealing Proclamation of 1901 which criminalised slave dealing with a penalty of seven years’ imprisonment for offenders. The Native House Rule Ordinance required every African to belong to a “House” under a designated head. It went on to criminalise any person who attempted to leave their “House”. In the Niger Delta kingdoms such as Bonny, Kalabari and Okrika, the word “House” never referred to a single dwelling. Rather, it denoted a self-perpetuating, named corporation of relatives, dependants and slaves under a chief, which owned property and spoke with one voice. By the 1900s, “Houses” had become the primary units through which slave ownership was organised.

    Therefore, the Native House Rule Ordinance compelled enslaved people in Houses to remain with their masters. The masters were empowered to use colonial authority to discipline them. District commissioners executed arrest warrants against runaways. In exchange, the House heads and local chiefs supplied the colonial administration with unpaid labour for public works.

    African campaigners in Accra and Lagos organised to challenge what they perceived as the British colonial state’s support for slavery.

    Fearon: an undercover abolitionist in Accra

    Francis Fearon was an educated African, active in the Accra scene during the second half of the 19th century. He was highly literate and part of elite circles. He was closely associated with the journalist Edmund Bannerman. He regularly wrote to local newspapers, often expressing concerns about racism against Black people and moral decay.

    On 24 June 1890, Fearon sent a 63-page letter, with ten appendices, to the Aborigines’ Protection Society in London. That dossier would form the basis of several further communications. He alleged that child trafficking continued.

    As evidence, he transcribed the confidential court register of Accra and claimed that Governor W. B. Griffith had instructed convicted slave-owners to recover their “property”.

    Fearon’s tactics were audacious. He remained anonymous, relied on court clerks for documents, and supplied the Aborigines’ Protection Society with evidence. He pleaded with the society to investigate the colonial administration in the Gold Coast.

    Although the society publicised the scandal, subsequent narratives quietly effaced the African source.

    Lagos elites organise – and name the problem

    Like Fearon, Nigerian campaigners also wrote to the Anti-Slavery and Aborigines’ Protection Society. They denounced the colonial government in Nigeria for promoting slavery, but they did not remain anonymous.

    By this time, the Native House Rule Ordinance had prompted some enslaved people to flee the districts in which it was enforced. They sought refuge in Lagos. Through these arrivals, Lagosian elites learned of the ordinance. They unleashed a vigorous campaign against the colonial state.

    The principal figures in this movement included Christopher Sapara Williams, a barrister, and James Bright Davies, editor of The Nigerian Times. Others included politician Herbert Macaulay, Herbert Pearse, a prominent merchant, Bishop James Johnson and the Reverend Mojola Agbebi. Unlike Fearon’s lone-wolf strategy, they mounted a coordinated assault on the colonial administration. They drafted petitions, briefed sympathetic European organisations, and inundated local newspapers with commentary.

    Their arguments blended humanitarian indignation with constitutional acumen. They insisted that the ordinance contravened both British liberal ideals and African custom.

    After years of pressure the law was amended and then quietly repealed in 1914.

    Why these stories matter now

    Contemporary scholarship on abolition is gradually shifting from asking “what Britain did for Africa” to examining the role Africans played in ending slavery.

    Many African abolitionists who fought and lost their lives in the struggle against slavery have long gone unacknowledged. This is beginning to change.

    The two articles discussed here highlight the creativity of Africans who, decades before radio or civil-rights NGOs, used transatlantic information circuits. They exposed colonial governments that continued to rely on forced-labour economies long after slavery was supposed to have ended.

    They remind us that grassroots documentation can overturn official narratives. Evidence-based advocacy, coalition-building, and the strategic use of global media remain potent instruments.

    The Conversation

    Research for these articles was funded by the European Research Council under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 885418).

    ref. The African activists who challenged colonial-era slavery in Lagos and the Gold Coast – https://theconversation.com/the-african-activists-who-challenged-colonial-era-slavery-in-lagos-and-the-gold-coast-261089

  • Cricket’s great global divide: elite schools still shape the sport

    Source: ForeignAffairs4

    Source: The Conversation – Africa – By Habib Noorbhai, Professor (Health & Sports Science), University of Johannesburg

    If you were to walk through the corridors of some of the world’s leading cricket schools, you might hear the crack of leather on willow long before the final bell rings.

    Across the cricketing world, elite schools have served as key feeder systems to national teams for decades. They provide young players with superior training facilities, high-level coaching and competitive playing opportunities.

    This tradition has served as cricket’s most dependable talent pipeline. But is it a strength or a symptom of exclusion?

    My recent study examined the school backgrounds of 1,080 elite men’s cricketers across eight countries over a 30-year period. It uncovered telling patterns.

    Read more:
    Cricket: children are the key to the future of the game, not broadcast rights


    Top elite cricket countries such as South Africa, England and Australia continue to draw heavily from private education systems. In these nations, cricket success seems almost tied to one’s school uniform.

    I argue that if cricket boards want to promote equity and competitiveness, they will need to broaden the talent search by investing in grassroots cricket infrastructure in under-resourced areas.

    For cricket to be a sport that anyone with talent can succeed in, there will need to be more school leagues and entry-level tournaments as well as targeted investment in community-based hubs and non-elite school zones.

    Findings

    South Africa is a case in point. My previous study, in 2020, found that more than half of its national players at One-Day International (ODI) World Cups came from boys-only schools (mostly private).

    These schools are often well-resourced, with turf wickets, expert coaches and an embedded culture of competition. Unsurprisingly, the same schools tend to produce a high number of national team batters, as they offer longer game formats and better playing surfaces. Cricket’s colonial origins have shaped a school cricket structure and culture that remain tied to privilege.

    Read more:
    Elite boys’ schools still shape South Africa’s national cricket team


    In Australia and England, the story is not very different. Despite their efforts to diversify player sourcing, private schools still dominate. Even in cricketing nations that celebrate working-class grit, such as Australia, private school players continue to shape elite squads.

    The statistics say as much: about 44% of Australia’s Ashes Test players since 2010 attended private schools, and for England the figure is 45%. That’s not grassroots; it could be regarded as gated turf.

    Yet not all countries follow this route. The West Indies, Pakistan and Sri Lanka reflect very different models. Club cricket, informal play and community academies provide their players with opportunities to rise. These countries have lower reliance on private schools. Some of their finest players emerged from modest public schooling or neighbourhood cricketing networks.

    India provides an interesting hybrid. Although elite schools such as St. Xavier’s and Modern School contribute players, most national stars emerge from public institutions or small-town academies. The explosion of the Indian Premier League since 2008 has also democratised access, pulling in talent from previously overlooked and underdeveloped cities.

    In these regions, scouting is based on potential, not privilege.

    So why does this matter?

    At first glance, elite schools producing elite cricketers might appear logical. These institutions have the resources to nurture talent. But scratch beneath the surface and troubling questions appear.

    Are national teams truly reflecting their countries? Or are they simply echo chambers of social advantage?

    Read more:
    Cricket inequalities in England and Wales are untenable – our report shows how to rejuvenate the game


    In South Africa, almost every Black African cricketer to represent the country has come through a private school (often on scholarship). That suggests that, without access, talent remains invisible. It also places unfair pressure on the few who make it through, as if they carry the hopes of entire communities.

    I found that in England, some county systems have started integrating players from state schools, but progress is slow. In New Zealand, where cricket is less centralised around private institutions, regional hubs and public schools have had more success in spreading opportunities. However, even there, Māori and Pasifika players remain underrepresented in elite squads.

    Four steps that can be taken

    1. One solution lies in recognising that schools don’t have a monopoly on talent. Cricket boards must increase investment in grassroots infrastructure, particularly in under-resourced areas. Setting up community hubs, supporting school-club partnerships and running more regional competitions could uncover hidden talent.

    2. Another step is to improve the visibility and reach of scouting networks. Too often, selection favours players from known institutions. By diversifying trial formats and leveraging technology (such as video submissions or performance-tracking apps), selectors can widen their net. It’s already happening in India, where IPL scouts visit the most unlikely of places.

    3. Coaching is another stumbling block. In many countries, high-level coaches are clustered in elite schools. National boards should consider optimising salaries as well as rotating certified coaches into public schools and regional academies. They should also ensure coaches are trained to work with diverse learners and conditions.

    4. Technology offers other exciting possibilities too. Virtual simulations, motion tracking and AI-assisted video reviews are now common in high-performance centres. Making simplified versions available to lower-income schools could level the playing field. Imagine a township bowler in South Africa learning to analyse their technique using only a smartphone and a free app.

    Fairness in sport

    The conversation about schools and cricket is not just about numbers or stats. It is about fairness. Sport should be the great leveller, not another mechanism of exclusion. If cricket is to thrive, it needs to look beyond scoreboards and trophies. It must ask: who gets to play, and who never gets seen?

    Read more:
    Why is cricket so popular on the Indian sub-continent?


    A batter from a village school in India, a wicket-keeper from a government school in Sri Lanka or a fast bowler in a South African township; each deserves the chance to be part of the national story. Cricket boards, policymakers and educators must work together to make that possible.

    The game will only grow when it welcomes players from all walks of life. That requires more than scholarships. It requires a reset of how we think about talent. Because the next cricket superstar may not wear a crest on their blazer. They may wear resilience on their sleeve.

    The Conversation

    Habib Noorbhai does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Cricket’s great global divide: elite schools still shape the sport – https://theconversation.com/crickets-great-global-divide-elite-schools-still-shape-the-sport-261709

  • Canada could use thermal infrastructure to turn wasted heat emissions into energy

    Source: ForeignAffairs4

    Source: The Conversation – Canada – By James (Jim) S. Cotton, Professor, Department of Mechanical Engineering, McMaster University

    Buildings are the third-largest source of greenhouse gas emissions in Canada. In many cities, including Vancouver, Toronto and Calgary, buildings are the single highest source of emissions.

    The recently launched Infrastructure for Good barometer, released by consulting firm Deloitte, suggests that Canada’s infrastructure investments already top the global list in terms of positive societal, economic and environmental benefits.

    In fact, over the past 150 years, Canada has built railways, roads, clean water systems, electrical grids, pipelines and communication networks to connect and serve people across the country.

    Now, there’s an opportunity to build on Canada’s impressive tradition by creating a new form of infrastructure: capturing, storing and sharing the massive amounts of heat lost from industry, electricity generation and communities, even in summer.

    Natural gas precedent

    Indoor heating often comes from burning fossil fuels — three-quarters of Ontario homes, for example, are heated by natural gas. Until about 1966, homes across Canada were primarily heated by wood stoves, coal boilers, oil furnaces or heaters using electricity from coal-fired power plants.

    After the oil crisis of the 1970s, many of those fuels were replaced by natural gas, delivered directly to individual homes. The cost of the natural gas infrastructure, including a national network of pipelines, was amortized over more than 50 years to make it affordable.

    Sources of greenhouse gas emissions in Ontario.
    (J. Cotton), CC BY

    This reliable, low-cost energy source quickly proved to be popular. The change cut heating emissions across Ontario by roughly half throughout the 1970s and 1980s, long before climate change was the concern it is today.

    Now, as the need to decarbonize becomes more pressing, recent studies not only suggest that the emissions-reduction benefits of natural gas are often overstated; they also indicate that burning this fuel is still far from net-zero.

    However, there’s no reason why Canadian governments can’t invest in new infrastructure-based alternative heating solutions. This time, they could replace natural gas with an alternative, net-zero source: the wasted heat already emitted by other energy uses.

    Heat capture and storage

    Depending on the source temperature, technology used and system design, heat can be captured throughout the year, stored and distributed as needed. A type of infrastructure called thermal networks could capture leftover heat from factories and nuclear and gas-fired power plants.

    In essence, thermal networks capture excess thermal energy from industrial processes (though it can, in principle, be recovered from many other sources) and feed it into a series of insulated underground pipelines that heat or cool the connected buildings.

    A substantial potential to capture heat similarly exists in every neighbourhood. Heat is produced by data centres, grocery stores, laundromats, restaurants, sewage systems and even hockey arenas.

    In Ontario, the amount of energy we dump in the form of heat exceeds the energy content of all the natural gas we use to heat our homes.

    A restaurant, for example, can produce enough heat for seven family homes. To take advantage of the wasted heat, Canada needs to build thermal networks, corridors and storage to capture and distribute heat directly to consumers.

    The effort demands substantial leadership from all levels of government. Creating these systems would be expensive, but the technology does exist, and the one-time cost would pay for itself many times over.

    Such systems are already working in other cold countries. Thermal networks heat half the homes in Sweden and two-thirds of homes in Denmark.

    District heating pipes being laid at Gullbergs Strandgata in Gothenburg, Sweden in May 2021.
    (Shutterstock)

    The oil crisis of the 1970s motivated both countries to find new domestic heating sources. They financed their new infrastructure over 50 years and reduced their investment risks through low-interest bonds (loaned by public banks) and generous subsidies.

    These were offered to utility companies looking to expand district energy operations, and to consumers by incentivizing connections to such systems. Additionally, in Denmark, controlled consumer prices served a similar function.

    At least seven American states have established thermal energy networks, with New York being the first. The state’s Utility Thermal Energy Network and Jobs Act allows public utilities to own, operate and manage thermal networks.

    They can supply thermal energy, but so can private producers such as data centres, all with public oversight. Such a strategy avoids monopolies and allows gas and electric utilities to deliver services through central networks.

    An opportunity for Canada

    Canada has a real opportunity to learn from the experiences of Sweden, Denmark and New York. In doing so, it can create a beneficial and truly national heating system. Beginning with federal government leadership, thermal networks could be built across Canada, tailored to the unique needs, strengths and opportunities of municipalities and provinces.

    Such a shift would reduce emissions and give Canada greater energy sovereignty. It could drive a just energy strategy, providing employment opportunities for those displaced by the transition away from fossil fuels while increasing Canada’s economic independence.

    Thermal networks could be built using pipelines made from Canadian steel. Oil-well drillers from Alberta could dig borehole heat-storage systems. A new market for heat-recovery pumps would create good advanced-manufacturing jobs in Canada.

    Read more:
    How heat storage technologies could keep Canada’s roads and bridges ice-free all winter long


    Funding for the infrastructure could come through public-private partnerships, with major investments from public banks and pension funds, earning a solid and secure rate of return. A regulated approach and process could permit this infrastructure cost to be amortized over decades, similar to the way past governments have financed gas, electrical and water networks.

    As researchers studying the engineering and policy potential of such an opportunity, we view these actions as essential if net-zero is to be achieved in the Canadian building sector. They are also a win-win solution for incumbent industry, various levels of government and citizens across Canada.

    Yet efforts to install robust thermal networks remain stalled by institutional inertia, the strong influence of the oil industry, limited citizen awareness of the technology’s potential and a tendency for government to view the electrification of heating as the primary solution to building decarbonization.

    In this time of environmental crisis and international uncertainty, pushing past these barriers and drawing on Canada’s long history of building infrastructure to create this new form of thermal energy infrastructure would be a safe, beneficial and conscientious way to move Canada toward a more climate-friendly future.

    The Conversation

    James (Jim) S. Cotton receives funding from the Natural Sciences and Engineering Research Council of Canada.

    Caleb Duffield does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Canada could use thermal infrastructure to turn wasted heat emissions into energy – https://theconversation.com/canada-could-use-thermal-infrastructure-to-turn-wasted-heat-emissions-into-energy-254972

  • Emil Bove confirmed – his appeals court nomination echoed earlier controversies, but with a key difference

    Source: ForeignAffairs4

    Source: The Conversation – USA – By Paul M. Collins Jr., Professor of Legal Studies and Political Science, UMass Amherst

    Emil Bove, Donald Trump’s nominee to serve as a federal appeals judge for the 3rd Circuit, is sworn in during a confirmation hearing in Washington, D.C., on June 25, 2025. Bill Clark/CQ-Roll Call, Inc, via Getty Images

    President Donald Trump’s nomination of his former criminal defense attorney, Emil Bove, to be a judge on the United States Court of Appeals for the 3rd Circuit, was mired in controversy.

    On June 24, 2025, Erez Reuveni, a former Department of Justice attorney who worked with Bove, released an extensive, 27-page whistleblower report. Reuveni claimed that Bove, as the Trump administration’s acting deputy attorney general, said “that it might become necessary to tell a court ‘fuck you’” and ignore court orders related to the administration’s immigration policies. Bove’s acting role ended on March 6 when he resumed his current position of principal associate deputy attorney general.

    When asked about this statement at his June 25 Senate confirmation hearing, Bove said, “I don’t recall.”

    And on July 15, 80 former federal and state judges signed a letter opposing Bove’s nomination. The letter argued that “Mr. Bove’s egregious record of mistreating law enforcement officers, abusing power, and disregarding the law itself disqualifies him for this position.”

    A day later, more than 900 former Department of Justice attorneys submitted their own letter opposing Bove’s confirmation. The attorneys argued that “Few actions could undermine the rule of law more than a senior executive branch official flouting another branch’s authority. But that is exactly what Mr. Bove allegedly did through his involvement in DOJ’s defiance of court orders.”

    On July 17, Democrats walked out of the Senate Judiciary Committee vote, in protest of the refusal by Chairman Chuck Grassley, a Republican from Iowa, to allow further investigation and debate on the nomination. Republicans on the committee then unanimously voted to move the nomination forward for a full Senate vote.

    Late in the evening of July 29, and after two more whistleblower complaints about Bove’s conduct had emerged, the U.S. Senate confirmed Bove’s nomination in a 50-49 vote.

    As a scholar of the courts, I know that most federal court appointments are not as controversial as Bove’s nomination. But highly contentious nominations do arise from time to time.

    Here’s how three controversial nominations turned out – and how Bove’s nomination was different in a crucial way.

    Robert Bork testifies before the Senate Judiciary Committee for his confirmation as associate justice of the Supreme Court in September 1987.
    Mark Reinstein/Corbis via Getty Images

    Robert Bork

    Bork is the only federal court nominee whose name became a verb.

    “Borking” is “to attack or defeat (a nominee or candidate for public office) unfairly through an organized campaign of harsh public criticism or vilification,” according to Merriam-Webster.

    This refers to Republican President Ronald Reagan’s 1987 appointment of Bork to the Supreme Court.

    Reagan called Bork “one of the finest judges in America’s history.” Democrats viewed Bork, a federal appeals court judge, as an ideologically extreme conservative, with their opposition based largely on his extensive scholarly work and opinions on the U.S. Court of Appeals for the District of Columbia Circuit.

    In opposing the Bork nomination, Sen. Ted Kennedy of Massachusetts took the Senate floor and gave a fiery speech: “Robert Bork’s America is a land in which women would be forced into back-alley abortions, blacks would sit at segregated lunch counters, rogue police could break down citizens’ doors in midnight raids, schoolchildren could not be taught about evolution, writers and artists could be censored at the whim of government, and the doors of the federal courts would be shut on the fingers of millions of citizens for whom the judiciary is often the only protector of the individual rights that are the heart of our democracy.”

    Ultimately, Bork’s nomination failed by a 58-42 vote in the Senate, with 52 Democrats and six Republicans rejecting the nomination.

    Ronnie White

    In 1997, Democratic President Bill Clinton nominated White to the United States District Court for the Eastern District of Missouri. White was the first Black judge on the Missouri Supreme Court.

    Republican Sen. John Ashcroft, from White’s home state of Missouri, led the fight against the nomination. Ashcroft alleged that White’s confirmation would “push the law in a pro-criminal direction.” Ashcroft based this claim on White’s comparatively liberal record in death penalty cases as a judge on the Missouri Supreme Court.

    However, there was limited evidence to support this assertion. This led some to believe that Ashcroft’s attack on the nomination was motivated by stereotypes that African Americans, like White, are soft on crime.

    Even Clinton implied that race may be a factor in the attacks on White: “By voting down the first African-American judge to serve on the Missouri Supreme Court, the Republicans have deprived both the judiciary and the people of Missouri of an excellent, fair, and impartial Federal judge.”

    White’s nomination was defeated in the Senate by a 54-45 party-line vote. In 2014, White was renominated to the same judgeship by President Barack Obama and confirmed by a largely party-line 53-44 vote, garnering the support of a single Republican, Susan Collins of Maine.

    Ronnie White, a former justice for the Missouri Supreme Court, testifies during an attorney general confirmation hearing in Washington in January 2001.
    Alex Wong/Newsmakers

    Miguel Estrada

    Republican President George W. Bush nominated Estrada to the Court of Appeals for the District of Columbia Circuit in 2001.

    Estrada, who had earned a unanimous “well-qualified” rating from the American Bar Association, faced deep opposition from Senate Democrats, who believed he was a conservative ideologue. They also worried that, if confirmed, he would later be appointed to the Supreme Court.

    Miguel Estrada, President George Bush’s nominee to the U.S. Court of Appeals for the District of Columbia Circuit, is sworn in during his hearing before the Senate Judiciary Committee on Sept. 26, 2002.
    Scott J. Ferrell/Congressional Quarterly/Getty Images

    However, unlike Bork – who had an extensive paper trail as an academic and judge – Estrada’s written record was very thin.

    Democrats sought to use his confirmation hearing to probe his beliefs. But they didn’t get very far, as Estrada dodged many of the senators’ questions, including ones about Supreme Court cases he disagreed with and judges he admired.

    Democrats were particularly troubled by allegations that Estrada, when he was screening candidates for Justice Anthony Kennedy, disqualified applicants for Supreme Court clerkships based on their ideology.

    According to one attorney: “Miguel told me his job was to prevent liberal clerks from being hired. He told me he was screening out liberals because a liberal clerk had influenced Justice Kennedy to side with the majority and write a pro-gay-rights decision in a case known as Romer v. Evans, which struck down a Colorado statute that discriminated against gays and lesbians.”

    When asked about this at his confirmation hearing, Estrada initially denied it but later backpedaled. Estrada said, “There is a set of circumstances in which I would consider ideology if I think that the person has some extreme view that he would not be willing to set aside in service to Justice Kennedy.”

    Unlike the Bork nomination, Democrats didn’t have the numbers to vote Estrada’s nomination down. Instead, they successfully filibustered the nomination, knowing that Republicans couldn’t muster the required 60 votes to end the filibuster. This marked the first time in Senate history that a court of appeals nomination was filibustered. Estrada would never serve as a judge.

    Bove stands out

    As the examples of Bork, Estrada and White make clear, contentious nominations to the federal courts often involve ideological concerns.

    This is also true for Bove, who was opposed in part because of the perception that he is a conservative ideologue.

    But the main concerns about Bove were related to a belief that he is a Trump loyalist who shows little respect for the rule of law or the judicial branch.

    This makes Bove stand out among contentious federal court nominations.

    This story, originally published on July 21, 2025, has been updated to reflect the Senate’s confirmation of Bove.

    The Conversation

    Paul M. Collins Jr. does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Emil Bove confirmed – his appeals court nomination echoed earlier controversies, but with a key difference – https://theconversation.com/emil-bove-confirmed-his-appeals-court-nomination-echoed-earlier-controversies-but-with-a-key-difference-261347

  • AI can be responsibly integrated into classrooms by answering the ‘why’ and ‘when’

    Source: ForeignAffairs4

    Source: The Conversation – Canada – By Soroush Sabbaghan, Associate Professor, Werklund School of Education, University of Calgary

    Scroll through social media and you’ll find numerous videos such as “How to Use AI to write your essay in 5 minutes” or “How to skip the readings with ChatGPT.”

    The discourse surrounding AI in education is deafening and it’s almost entirely consumed by the question: How? How do we write the perfect prompt? How should educators integrate ChatGPT into academic work or detect its use?

    This obsession with methods and mechanics is a dangerous distraction. By racing to master the “how,” we have skipped the two far more critical, foundational questions: why should we use these tools in the first place, and when is it appropriate to do so?

    Answering the “how” is a technical challenge. Answering the “why” and “when” is a philosophical one. Until educators and educational leaders ground their approaches in a coherent philosophical and theoretical foundation for learning, integrating AI will be aimless, driven by novelty and efficiency rather than human development.

    Two frameworks provide the essential lens we need to move beyond the hype and engage with AI responsibly: “virtue epistemology,” which argues that knowledge is not merely a collection of correct facts or a well-assembled product, but the outcome of practising intellectual virtues; and a care-based approach that prioritizes relationships.

    Virtue over volume

    The current “how-to” culture implicitly defines the goal of learning as the production of a polished output (like a comprehensive report or a functional piece of code). From this perspective, AI is a miracle of efficiency. But is the output the point of learning?

    Virtue epistemology, as championed by philosophers like Linda Zagzebski, suggests the real goal of an assignment is not the essay itself, but the cultivation of the curiosity, intellectual perseverance, humility and critical thinking that the process is meant to instil.

    This reframes the “why” of using AI. From this perspective, the only justification for integrating AI into a learning process should be to support and sustain intellectual labour.

    If a student uses AI to brainstorm counterarguments for a debate, they are practising intellectual flexibility as part of that labour. If another student uses AI to map connections between theoretical frameworks for a research paper, they are deepening conceptual understanding through guided synthesis.

    When AI undermines ‘why’

    However, when the “how” of AI is used to bypass the very struggle that builds virtue (the intellectual labour of analysis, deliberation and judgment), it directly undermines the “why” of the assignment. A graduate student who generates a descriptive list of pertinent research about a topic without engaging with the sources skips the valuable process of synthesis and critical engagement.

    This stands in direct contrast to philosopher and educator John Dewey’s view of learning as an active, experiential process.

    For Dewey, learning happens through doing, questioning and grappling with complexity, not by acquiring information passively. Assignments that reward perfection and correctness over process and growth further incentivize the use of AI as a shortcut, reducing learning to prompting and receiving rather than engaging in the intellectual labour of constructing meaning.

    Care over compliance

    If the “why” is about supporting human intellectual labour and fostering intellectual virtue, the “when” is about the specific, contextual and human needs of the learner.

    This is where an “ethics of care” becomes indispensable. As philosopher Nel Noddings proposed, a care-based approach prioritizes relationships and the needs of the individual over rigid, universal rules. It moves away from a one-size-fits-all policy and toward discretionary judgment.

    The question of when it is appropriate to use AI cannot be answered with a simple rubric. For a student with a learning disability or severe anxiety, using AI to help structure their initial thoughts might be a compassionate and enabling act, allowing them to engage with the intellectual labour of the task without being paralyzed by the mechanics of writing. In this context, the “when” is when the tool removes a barrier to deeper learning.

    Conversely, for a student who needs to develop foundational writing skills, relying on that same tool for the same task would be irresponsible. Deciding the “when” requires educators to know their learner, understand the learning goal and act with compassion and wisdom. It is a relational act, not a technical one.

    Educators must ensure that AI supports rather than displaces the development of core capabilities.

    AI as mediator

    This is also where we must confront historian and philosopher Michel Foucault’s challenge to the idea of the lone, autonomous author. Foucault argued that the concept of the author functions to make discourse controllable – to attach a name that can be held accountable. Our obsession with policing students’ authorship – a “how” problem focused on originality and plagiarism – is rooted in this system of control.

    It rests on the convenient fiction of the unmediated creator, ignoring that all creation is an act of synthesis, mediated by language, culture and the texts that came before. AI is simply a new, more powerful mediator that makes this truth impossible to ignore.

    This perspective reframes an educator’s task away from policing a fragile notion of originality. The more crucial questions become when and why to use a mediator like AI. Does the tool enable deeper intellectual labour, or does it supplant the struggle that builds virtue? The focus shifts from controlling the student to intentionally shaping the learning experience.

    Reorienting AI through values and virtue

    The rush to adopt AI tools without a philosophical framework is already leading us toward a more surveilled, less trusting and pedagogically shallow future.

    Some educational systems are investing money in AI detection software when what’s needed is investing in redesigning assessment.

    Policies are emerging that require students to declare their use of AI. But it’s essential to understand that disclosure is not a substitute for meaningful conversations about intellectual virtue.

    Answering the questions of why and when to use AI requires us to be architects of learning. It demands that we engage with thinking about learning and what it means to produce knowledge through the works of people like Dewey, Noddings, Zagzebski and others as urgently as we do with the latest tech blogs.

    For educators, the responsible integration of AI into our learning environments depends on our commitments to cultivating a culture that values intellectual labour and understands it as inseparable from the knowledge and culture it helps generate.

    It is time to stop defaulting to “how” and instead lead the conversation about the values that define when and why AI fits within meaningful and effective learning.

    The Conversation

    Soroush Sabbaghan receives funding from SSHRC.

    ref. AI can be responsibly integrated into classrooms by answering the ‘why’ and ‘when’ – https://theconversation.com/ai-can-be-responsibly-integrated-into-classrooms-by-answering-the-why-and-when-261496

  • Ontario’s forest management is falling short on key sustainability test

    Source: ForeignAffairs4

    Source: The Conversation – Canada – By Jay R. Malcolm, Professor Emeritus, Forestry, University of Toronto

    Forest degradation is increasingly recognized as a major global threat. Such degradation refers to the gradual erosion of a forest’s ability to store carbon, support biodiversity and sustain livelihoods, including those of Indigenous Peoples.

    International frameworks such as the United Nations Convention on Biological Diversity and the UN Framework Convention on Climate Change now address degradation alongside deforestation.

    While tropical forests have long been the focus, attention is also turning to temperate and boreal forests, where forest management is widespread and the potential for degradation is growing.

    Some scientists have argued that if forest management is designed to be “ecologically sustainable,” then there should be little concern about degradation. But is this principle being upheld in practice? Our recent study in Ontario suggests otherwise.

    Emulating natural forest disturbances

    A widely used strategy to support ecological sustainability is to emulate natural disturbances; that is, to design human-caused disturbances so they fall within the range of variation observed in nature.

    The ecological theory behind this approach is that species are adapted to cope with, or even benefit from, natural disturbances. In Canada’s managed boreal forests, for example, harvesting is explicitly designed to mimic natural fires, both in individual cutblocks and across the broader landscape.

    In fact, this principle is enshrined in Ontario’s 1994 Crown Forest Sustainability Act that states:

    “The long-term health and vigour of Crown forests should be provided for by using forest practice…that emulates natural disturbances and landscape patterns…”

    The ecological sustainability of forest management is not a given: it is a hypothesis, and like any hypothesis, it must be tested. Are we actually managing forests in ecologically sustainable ways, or are we witnessing gradual forest degradation?

    Our study examined the state of a 7.9 million hectare area of boreal forest in northeastern Ontario from 2012 to 2021 to test whether the provincial management regime was emulating natural disturbances, as required by law, or was instead prioritizing timber harvesting.

    We used three indicators:

    1) The rate at which forest was disturbed (including harvesting and fire).

    2) The amount of relatively old forest (greater than 100 years old).

    3) Modelled habitat for two species that have been used as indicators of sustainability: American marten and boreal caribou.

    Our research did not find evidence that current practices in northeastern Ontario are emulating natural disturbances across the boreal landscape. Rather, the observed disturbance patterns appear to reflect strategies primarily focused on timber harvesting priorities.

    What we found

    A particular risk for boreal forests is a focus on timber production and economic returns over ecological goals. Such an approach is fundamentally at odds with the idea of emulating nature.

    In particular, forests older than 100 years have high ecological value in natural systems. They keep large amounts of carbon out of the atmosphere and provide habitat for myriad species. But from a timber-first perspective, they are viewed as wasteful because they do not produce timber as rapidly as younger forests, and they are often labelled “decadent” and targeted for removal.

    We found that the amount of forest disturbed per year was often higher than expected under natural fire regimes and, in some coniferous forest types, even exceeded the rates expected under a strategy that prioritized timber harvesting.

    Relatively old forests were also much rarer than in natural landscapes: only 22 per cent of the forest in the study area was more than 100 years old compared to an average of 54 per cent in natural landscapes.

    This amount was lower than even the most conservative threshold of natural variability.

    Habitats for marten and caribou were similarly degraded and fragmented. Marten habitat covered just 36 per cent of the study landscape, compared to 76 per cent in a reconstructed natural landscape. For boreal caribou, habitat was even more compromised, covering only four per cent of the study area compared to 53 per cent in the natural landscape.

    Strikingly, for caribou, levels of habitat disturbance — including disturbances from harvesting, fire and roads — exceeded 70 per cent of the landscape, jeopardizing the sustainability of the two caribou populations.

    Surprisingly, the clearest evidence of forest management prioritizing timber occurred within zones meant explicitly to sustain caribou. Our modelling showed that such areas will contain even less caribou habitat in the future than they do today.

    A path to an ecologically sustainable future

    The Ontario government is currently revisiting its boreal management strategy — a welcome and timely development. But rather than relying solely on a simulation model (the Boreal Forest Landscape Disturbance Simulator) to define natural landscapes, as is currently the case, policy must be grounded in empirical data from real, unmanaged forests.

    Scientific research over the past several decades has identified forest management approaches that can deliver timber while also sustaining ecological services within natural bounds.

    These strategies, however, rely on tools the province has yet to embrace, including longer harvest rotations, increased use of partial harvesting instead of over-relying on clearcutting, expanded areas set aside from logging, and explicit targets for amounts of forest up to 200 years of age or older.

    Our findings indicate that forest degradation is already underway in the boreal forests of Ontario. Substantial changes to forest management are required to reverse this trend and safeguard the ecosystem services on which people and wildlife depend.

    The Conversation

    Jay R. Malcolm has received funding from the Natural Sciences and Engineering Research Council of Canada and Wahkohtowin Development GP Inc. (WDGP). The research also benefited from research on American marten habitat funded by Mitacs
    and WDGP. WDGP played a role in defining the study area, but otherwise funders were not involved in the study design; in the collection, analysis, and interpretation of data; in the writing of the manuscript; or in the decision to submit the article for publication.

    Justina C. Ray is President and Senior Scientist of Wildlife Conservation Society Canada.

    ref. Ontario’s forest management is falling short on key sustainability test – https://theconversation.com/ontarios-forest-management-is-falling-short-on-key-sustainability-test-261054

  • Unpacking Florida’s immigration trends − demographers take a closer look at the legal and undocumented population

    Source: ForeignAffairs4

    Source: The Conversation – USA – By Matt Brooks, Assistant Professor of Sociology, Florida State University

    Immigration has dominated recent public discourse about Florida, whether it be the opening of Alligator Alcatraz, a migrant detention facility in the middle of the Everglades, or Florida Gov. Ron DeSantis declaring an “immigration emergency” for the state that has lasted more than two years.

    As demographers – that is, people who count people – we’ve noticed that this conversation has proceeded largely without the benefit of a clear description of Florida’s immigrant population.

    Here’s a snapshot.

    How many immigrants are in Florida?

    We used data from the Office of Homeland Security Statistics and the American Community Survey, conducted annually by the U.S. Census Bureau. Homeland Security provides estimates of the state’s undocumented population and annual counts of authorized arrivals. Census data allow us to describe the social and economic characteristics of Florida’s immigrant population.

    In 2023, the most recent year for which the Department of Homeland Security provides publicly available data, an estimated 590,000 immigrants without legal status were living in Florida. This is the third-largest population of immigrants without legal status in the U.S., behind California and Texas. But in contrast to those two states, the number of immigrants entering Florida illegally has been shrinking since 2018.

    On the other hand, DHS data points to recent growth in Florida’s population of immigrants with legal status. This represents a rebound from declines between 2016 and 2020.

    In 2023, Florida welcomed 72,850 residents from outside the country. This is just 0.3% of Florida’s population that year. About 95% of these new Florida residents were admitted as lawful permanent residents, or green card holders. The remainder entered as refugees (3%) and people granted asylum (2%).

    For comparison, U.S. Census Bureau estimates suggest roughly 640,000 people moved to Florida in 2023 from other states.

    Who makes up Florida’s immigrant population?

    The American Community Survey data tells us even more about Florida’s immigrant population. The survey estimates that 4,996,874 foreign-born individuals lived in Florida in 2023, up from 3,798,062 in 2013. These numbers include those who are in the U.S. legally and illegally and encompass both recent arrivals and long-term residents.

    In 2023, about 22% of Florida residents – and nearly 7% of Florida children – were immigrants. An additional 29% of Florida children have at least one immigrant parent.

    According to the American Community Survey, nearly half of Florida’s immigrants were born in Cuba, Haiti, Venezuela, Colombia or Mexico. Despite being born elsewhere, Florida’s immigrants in many ways resemble other Floridians: About 20% hold a bachelor’s degree, compared to 22% of nonimmigrant Floridians, and 13% of both groups have a graduate degree. Nearly all Florida immigrants, 89%, speak English, and the majority, 57%, are naturalized citizens.

    Immigrants make up a disproportionate share of Florida’s workforce, particularly in essential sectors of the state’s economy. They account for more than 47% of Florida’s agricultural workers, 41% of hotel workers and 35% of construction workers.

    Florida immigrants also work in sectors that many might not consider to be “immigrant jobs.” They constitute 33% of child care workers, 21% of school and university employees and 27% of health care workers.

    Across all sectors, immigrants have lower unemployment rates than nonimmigrants. Although available data cannot tell us the extent to which these numbers are bolstered by undocumented immigrants, the importance of Florida’s immigrants for the state’s economy is undeniable.

    Florida’s population is growing at a faster rate than any other state in the country, boosted by people moving in from abroad and from other states. This growth both reflects and feeds the state’s economic vitality. Between 2019 and 2024, Florida’s GDP grew twice as fast as the nation’s as a whole.

    Is Florida experiencing an “immigration emergency”? That’s for politicians to decide. Our research suggests that policies that discourage new arrivals or encourage – or force – migrants to leave could jeopardize Florida’s robust economy and the well-being of its population.

    The Conversation

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Unpacking Florida’s immigration trends − demographers take a closer look at the legal and undocumented population – https://theconversation.com/unpacking-floridas-immigration-trends-demographers-take-a-closer-look-at-the-legal-and-undocumented-population-261425

  • When socialists win Democratic primaries: Will Zohran Mamdani be haunted by the Upton Sinclair effect?

    Source: ForeignAffairs4

    Source: The Conversation – USA – By James N. Gregory, Professor of History, University of Washington

    Democratic mayoral candidate Zohran Mamdani, right, and Attorney General of New York Letitia James walk in the NYC Pride March on June 29, 2025, in New York. AP Photo/Olga Fedorova

    It has happened before: an upset victory by a Democratic Socialist in an important primary election after an extraordinary grassroots campaign.

    In the summer of 1934, Upton Sinclair earned the kind of headlines that greeted Zohran Mamdani’s primary victory on June 24, 2025, in the New York City mayoral election.

    Mamdani’s win surprised nearly everyone – not only because he beat the heavily favored former governor Andrew Cuomo, but because he did so by a large margin and with a unique coalition, and because his Muslim identity and membership in the Democratic Socialists of America should have, in conventional political thinking, made victory impossible.

    This sounds familiar, at least to historians like me. Upton Sinclair, the famous author and a socialist for most of his life, ran for governor in California in 1934 and won the Democratic primary election with a radical plan that he called End Poverty in California, or EPIC.

    The news traveled the globe and set off intense speculation about the future of California, where Sinclair was then expected to win the general election. His primary victory also generated theories about the future of the Democratic Party, where this turn toward radicalism might complicate the policies of the Democratic administration of Franklin D. Roosevelt.

    What happened next may concern Mamdani supporters. Business and media elites mounted a campaign of fear that put Sinclair on the defensive. Meanwhile, conservative Democrats defected, and a third candidate split progressive votes.

    In the November election, Sinclair lost decisively to incumbent Gov. Frank Merriam, who would have stood less chance against a conventional Democrat.

    As a historian of American radicalism, I have written extensively about Sinclair’s EPIC movement, and I direct an online project that includes detailed accounts of the campaign and copies of campaign materials.

    Sinclair’s 1934 campaign initiated the on-again, off-again influence of radicals in the Democratic Party and illustrates some of the potential dynamics of that relationship, which, almost 100 years later, may be relevant to Mamdani in the coming months.

    A man waves through the window of a black car.
    Upton Sinclair is seen in September 1934 in Poughkeepsie, N.Y., following a conference with President Franklin D. Roosevelt.
    Bettmann/Contributor/Getty Images

    California, 1934

    Sinclair launched his gubernatorial campaign in late 1933, hoping to make a difference but not expecting to win. California remained mired in the Great Depression. The unemployment rate had been estimated at 29% when Roosevelt took office in March and had improved only slightly since then.

    Sinclair’s Socialist Party had failed badly in the 1932 presidential election as Democrat Roosevelt swept to victory. Those poor results included California, where the Democratic Party had been an afterthought for more than three decades.

    Sinclair decided that it was time to see what could be accomplished by radicals working within that party.

    Reregistering as a Democrat, he dashed off a 64-page pamphlet with the futuristic title I, Governor of California and How I Ended Poverty. It detailed his plan to solve California’s massive unemployment crisis by having the state take over idle farms and factories and turn them into cooperatives dedicated to “production for use” instead of “production for profit.”

    A black and white photo shows a man on a stage, the American flag behind him, speaking to a crowd.
    Sinclair speaks to a group in his campaign headquarters in Los Angeles, Calif., in September 1934.
    Bettmann/ Contributor/Getty Images

    Sinclair soon found himself presiding over an explosively popular campaign, as thousands of volunteers across the state set up EPIC clubs – numbering more than 800 by election time – and sold the weekly EPIC News to raise campaign funds.

    Mainstream Democrats waited too long to worry about Sinclair and then failed to unite behind an alternative candidate. But it would not have mattered. Sinclair celebrated a massive primary victory, gaining more votes than all of his opponents combined.

    Newspapers around the world told the story.

    “What is the matter with California?” The Boston Globe asked, according to author Greg Mitchell. “That is the farthest shift to the left ever made by voters of a major party in this country.”

    Building fear

    Primaries are one thing. But in 1934, the November general election turned in a different direction.

    Terrified by Sinclair’s plan, business leaders mobilized to defeat EPIC, forming the kind of cross-party coalition that is rare in America except when radicals pose an electoral threat. Sinclair described the effort in a book he wrote shortly after the November election: “I, Candidate for Governor: And How I Got Licked.”

    Nearly every major newspaper in the state, including the five Democratic-leaning Hearst papers, joined the effort to stop Sinclair. Meanwhile, a high-priced advertising agency set up bipartisan groups with names like California League Against Sinclairism and Democrats for Merriam, trumpeting the names of prominent Democrats who refused to support Sinclair.

    Few people of any party were enthusiastic about Merriam, who had recently angered many Californians by sending the National Guard to break a longshoremen’s strike in San Francisco, only to trigger a general strike that shut down the city.

    A black and white photo depicts a billboard criticizing Democrat Upton Sinclair.
    A billboard supports Republican Frank Merriam and opposes Democrat Upton Sinclair for governor of California in January 1934.
    Bettmann /Contributor/Getty Images

    The campaign against Sinclair attacked him with billboards, radio and newsreel programming, and relentless newspaper stories about his radical past and supposedly dangerous plans for California.

    EPIC faced another challenge: candidate Raymond Haight, running on the Progressive Party ticket, threatened to divide left-leaning voters.

    Sinclair tried to defend himself, energetically denouncing what he called the “Lie Factory” and offering revised, more moderate versions of some elements of the EPIC plan. But the Red Scare campaign worked. Merriam easily outdistanced Sinclair, winning by a plurality in the three-way race.

    New York, 2025

    Will a Democratic Socialist running for mayor in New York face anything similar in the months ahead?

    A movement to stop Mamdani is coming together, and some of what they are saying resonates with the 1934 campaign to stop Sinclair.

    The Guardian newspaper has quoted “loquacious billionaire hedge funder Bill Ackman, who said he and others in the finance industry are ready to commit ‘hundreds of millions of dollars’ into an opposing campaign.”

    In 1934, newspapers publicized threats by major companies, most famously Hollywood studios, to leave California in the event of a Sinclair victory. The Wall Street Journal, Fortune magazine and other media outlets have recently warned of similar threats.

    And there may be something similar about the political dynamics.

    Sinclair’s opponents could offer only a weak alternative candidate. Merriam had few friends and many critics.

    In 2025, New York City Mayor Eric Adams – who abandoned the Democratic primary and is now running as an independent – is arguably weaker still, having been rescued by President Donald Trump from a corruption indictment that might have sent him to prison. If he is the best hope to stop Mamdani, the campaign strategy will likely parallel 1934: all attack ads, little effort to promote Adams.

    But there is an important difference in the way the New York contest is setting up. Andrew Cuomo remains on the ballot as an independent, and his name could draw votes that might otherwise go to Adams.

    Curtis Sliwa, the Republican candidate, will also be on the ballot. Whereas in 1934 two candidates divided progressive votes, in 2025 three candidates are going to divide the stop-Mamdani votes.

    Religion also looms large in the campaign ahead. The New York City metro area’s Muslim population is said to be at least 600,000, compared with an estimated 1.6 million Jewish residents. Adams has announced that the threat of antisemitism will be the major theme of his campaign.

    The stop-Sinclair campaign also relied on religion, focusing on his professed atheism and pulling quotations from books he had written denouncing organized religion. However, a statistical analysis of voting demographics suggests that this effort proved unimportant.

    The Conversation

    James N. Gregory does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. When socialists win Democratic primaries: Will Zohran Mamdani be haunted by the Upton Sinclair effect? – https://theconversation.com/when-socialists-win-democratic-primaries-will-zohran-mamdani-be-haunted-by-the-upton-sinclair-effect-260168

  • ‘AI veganism’: Some people’s issues with AI parallel vegans’ concerns about diet

    Source: ForeignAffairs4

    Source: The Conversation – USA – By David Joyner, Associate Dean and Senior Research Associate, College of Computing, Georgia Institute of Technology

    Ethical concerns – like the mistreatment of content creators decried by this protester – drive both veganism and resistance to using AI. Mario Tama/Getty Images

    New technologies usually follow the technology adoption life cycle. Innovators and early adopters rush to embrace new technologies, while laggards and skeptics jump in much later.

    At first glance, it looks like artificial intelligence is following the same pattern, but a new crop of studies suggests that AI might follow a different course – one with significant implications for business, education and society.

    This general phenomenon has often been described as “AI hesitancy” or “AI reluctance.” The typical adoption curve assumes a person who is hesitant or reluctant to embrace a technology will eventually do so anyway. This pattern has repeated over and over – why would AI be any different?

    Emerging research on the reasons behind AI hesitancy, however, suggests there are different dynamics at play that might alter the traditional adoption cycle. For example, a recent study found that while some causes of this hesitation closely mirror those regarding previous technologies, others are unique to AI.

    As someone who closely watches the spread of AI, I think there may be a better analogy: veganism.

    AI veganism

    An “AI vegan” is someone who abstains from using AI, the same way a vegan abstains from eating products derived from animals. Generally, the reasons people choose veganism do not fade automatically over time. They may be reasons that can be addressed, but they are not simply a matter of getting more comfortable with eating animals and animal products. That is why the analogy is appealing in the case of AI.

    With AI, unlike many other technologies, it’s important not to assume that skeptics and laggards will eventually become adopters. Many of those refusing to embrace AI actually fit the traditional archetype of an early adopter: the study on AI hesitation focused on college students, who are often among the first demographics to adopt new technologies.

    There is some historical precedent for this analogy. Under the hood, AI is just a set of algorithms, and algorithm aversion is a well-known phenomenon in which people are biased against algorithmic decision-making – even when it is shown to be more effective. For example, people prefer dating advice from humans over advice from algorithms, even when the algorithms perform better.

    But the analogy to veganism applies in other ways, providing insights into what to expect in the future. In fact, studies show that three of the main reasons people choose veganism each have a parallel in AI avoidance.

    Ethical concerns

    One motivation for veganism is concern over the ethical sourcing of animal by-products. Similarly, studies have found that when users are aware that many content creators did not knowingly opt into letting their work be used to train AI, they are more likely to avoid using AI.

    a woman in a crowd holds a sign over her head
    Many vegans have ethical concerns about the treatment of animals. Some people who avoid using AI have ethical concerns about the treatment of content creators.
    Vuk Valcic/SOPA Images/LightRocket via Getty Images

    These concerns were at the center of the Writers Guild of America and Screen Actors Guild-American Federation of Television and Radio Artists strikes in 2023, in which the two unions sought legal protections against companies using creatives’ works to train AI without consent or compensation. While some creators may be protected by such trade agreements, many models are instead trained on the work of amateur, independent or freelance creators who lack these systematic protections.

    Environmental concerns

    A second motivation for veganism is concern over the environmental impacts of intensive animal agriculture, from deforestation to methane production. Research has shown that the computing resources needed to support AI are growing exponentially, dramatically increasing demand for electricity and water, and that efficiency improvements are unlikely to lower the overall power usage due to a rebound effect, which is when efficiency gains spur new technologies that consume more energy.

    One preliminary study found that increasing users’ awareness of the power demands of AI can affect how they use these systems. Another survey found that concern about water usage to cool AI systems was a factor in students’ refusal to use the technology at Cambridge University.

    a woman in a crowd holds a hand-painted sign
    Both AI and meat production spark concerns about environmental impact.
    Kichul Shin/NurPhoto via Getty Images

    Personal wellness

    A third motivation for veganism is concern for possible negative health effects of eating animals and animal products. A potential parallel concern could be at work in AI veganism.

    A Microsoft Research study found that people who were more confident in using generative AI showed diminished critical thinking. The 2025 Cambridge University survey found some students avoiding AI out of concern that using it could make them lazy.

    It is not hard to imagine that the possible negative mental health effects of using AI could drive some AI abstinence in the same way the possible negative physical health effects of an omnivorous diet may drive some to veganism.

    How society reacts

    Veganism has led to a dedicated industry catering to that diet. Some restaurants feature vegan entrees. Some manufacturers specialize in vegan foods. Could it be the case that some companies will try to use the absence of AI as a selling point for their products and services?

    If so, it would be similar to how companies such as DuckDuckGo and the Mozilla Foundation provide alternative search engines and web browsers with enhanced privacy as their main feature.

    Vegans remain a small minority in the U.S. – estimates range as high as 4% of the population – but the persistence of veganism has enabled a niche market to serve them. Time will tell if AI veganism takes hold.

    The Conversation

    David Joyner does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. ‘AI veganism’: Some people’s issues with AI parallel vegans’ concerns about diet – https://theconversation.com/ai-veganism-some-peoples-issues-with-ai-parallel-vegans-concerns-about-diet-260277

  • Light pollution is encroaching on observatories around the globe – making it harder for astronomers to study the cosmos

    Source: ForeignAffairs4

    Source: The Conversation – USA – By Richard Green, Astronomer Emeritus, Steward Observatory, University of Arizona

    Light pollution from human activity can threaten radio astronomy – and people’s view of the night sky. Estellez/iStock via Getty Images

    Outdoor lighting for buildings, roads and advertising can help people see in the dark of night, but many astronomers are growing increasingly concerned that these lights could be blinding us to the rest of the universe.

    An estimate from 2023 showed that the amount of human-produced light in the night sky is increasing by as much as 10% per year.
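A 10% annual increase compounds fast: at that rate, skyglow roughly doubles every seven years. The quick calculation below is illustrative, not from the article; it applies the standard compound-growth doubling-time formula to the 10% figure cited above.

```python
import math

def doubling_time(annual_rate: float) -> float:
    """Years for a quantity growing at `annual_rate` per year to double."""
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time(0.10), 1))  # ~7.3 years at 10% per year
```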

    I’m an astronomer who has chaired a standing commission on astronomical site protection and worked with International Astronomical Union-sponsored working groups studying ground-based light pollution.

    My work with these groups has centered on the idea that light from human activities now affects astronomical observatories on what used to be distant mountaintops.

    A map of North America showing light pollution, with almost all the eastern part of the U.S. covered from Maine to North Dakota, and hot spots on the West Coast.
    Map of North America’s artificial sky brightness, as a ratio to the natural sky brightness.
    Falchi et al., Science Advances (2016), CC BY-NC

    Hot science in the cold, dark night

    While orbiting telescopes like the Hubble Space Telescope or the James Webb Space Telescope give researchers a unique view of the cosmos – particularly because they can see light blocked by the Earth’s atmosphere – ground-based telescopes also continue to drive cutting-edge discovery.

    Telescopes on the ground capture light with gigantic, precisely figured mirrors that can be 20 to 35 feet (6 to 10 meters) wide. Moving all astronomical observations to space to escape light pollution is not feasible: space missions cost far more, and many large ground-based telescopes are already in operation or under construction.

    Around the world, there are 17 ground-based telescopes with primary mirrors as big or bigger than Webb’s 20-foot (6-meter) mirror, and three more under construction with mirrors planned to span 80 to 130 feet (24 to 40 meters).

    The newest telescope starting its scientific mission right now, the Vera Rubin Observatory in Chile, has a 28-foot (8.4-meter) mirror and a 3.2-gigapixel camera. One of its missions is to map the distribution of dark matter in the universe.

    To do that, it will collect a sample of 2.6 billion galaxies. The typical galaxy in that sample is 100 times fainter than the natural glow of the Earth’s nighttime atmosphere, so this Rubin Observatory program depends on near-total natural darkness.

    Two pictures of the constellation Orion, with one showing many times more stars.
    The more light pollution there is, the fewer stars a person can see when looking at the same part of the night sky. The image on the left depicts the constellation Orion in a dark sky, while the image on the right is taken near the city of Orem, Utah, a city of about 100,000 people.
    jpstanley/Flickr, CC BY

    Any light scattered at night – road lighting, building illumination, billboards – would add glare and noise to the scene, greatly reducing the number of galaxies Rubin can reliably measure in the same time, or greatly increasing the total exposure time required to get the same result.
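The link between sky brightness and exposure time can be made precise with a textbook scaling (not stated in the article, but standard in observational astronomy): for sources much fainter than the sky, the signal-to-noise ratio goes as S·√(t/B), where S is the source flux, t the exposure time and B the sky background. Holding the target signal-to-noise fixed, the required exposure time grows linearly with sky brightness. A minimal sketch, with made-up flux units:

```python
import math

def snr(source_rate: float, sky_rate: float, t: float) -> float:
    """Background-limited signal-to-noise: signal S*t over noise sqrt(B*t)."""
    return source_rate * t / math.sqrt(sky_rate * t)

def time_for_snr(target_snr: float, source_rate: float, sky_rate: float) -> float:
    """Invert the SNR formula: t = (SNR / S)^2 * B."""
    return (target_snr / source_rate) ** 2 * sky_rate

# A galaxy 100x fainter than the sky, observed under darker vs. brighter skies:
t_dark = time_for_snr(5.0, source_rate=1.0, sky_rate=100.0)
t_bright = time_for_snr(5.0, source_rate=1.0, sky_rate=200.0)
print(t_bright / t_dark)  # 2.0 -- doubling the sky brightness doubles the exposure
```

So even a modest artificial contribution to the sky background translates directly into lost telescope time on a survey of billions of faint galaxies.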

    The LED revolution

    Astronomers care specifically about artificial light in the blue-green range of the electromagnetic spectrum, as that used to be the darkest part of the night sky. A decade ago, the most common outdoor lighting came from sodium-vapor discharge lamps. They produced an orange-pink glow, which meant they put out very little blue or green light.

    Even observatories relatively close to growing urban areas had skies that were naturally dark in the blue and green part of the spectrum, enabling all kinds of new observations.

    Then came the solid-state LED lighting revolution. Those lights put out a broad rainbow of color with very high efficiency – meaning they produce lots of light per watt of electricity. The earliest versions of LEDs put out a large fraction of their energy in the blue and green, but advancing technology now gets the same efficiency with “warmer” lights that have much less blue and green.

    Nevertheless, the formerly pristine darkness of the night sky now has much more light, particularly in the blue and green, from LEDs in cities and towns, lighting roads, public spaces and advertising.

    The broad output of color from LEDs affects the whole spectrum, from ultraviolet through deep red.

    The U.S. Department of Energy commissioned a study in 2019 that predicted the higher energy efficiency of LEDs would drive down the power used for lights at night, with the amount of light emitted staying roughly the same.

    But satellites looking down at the Earth reveal that just isn’t the case. The amount of light is going steadily up: as energy efficiency improved, cities and businesses kept their electricity bills about the same and simply got more light.

    Natural darkness in retreat

    As human activity spreads out over time, many of the remote areas that host observatories are becoming less remote. Light domes from large urban areas slightly brighten the dark sky at mountaintop observatories up to 200 miles (320 kilometers) away. When these urban areas are adjacent to an observatory, the addition to the skyglow is much stronger, making detection of the faintest galaxies and stars that much harder.
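How much a distant city brightens an observatory's sky can be estimated with Walker's law, a classic empirical scaling from the light pollution literature (not cited in the article): the artificial contribution to sky brightness grows with a city's population and falls off roughly as the 2.5 power of distance. The 0.01 constant below is the commonly quoted approximation for the fractional increase over the natural sky, with distance in kilometers; treat the results as order-of-magnitude only.

```python
def walker_skyglow(population: float, distance_km: float) -> float:
    """Walker's (1977) empirical law: approximate fractional increase in
    sky brightness over natural levels, ~0.01 * P * d^-2.5,
    where P is the city's population and d its distance in kilometers.
    The constant is approximate; results are order-of-magnitude estimates."""
    return 0.01 * population * distance_km ** -2.5

# A city of 1 million people, seen from observatories 50 km and 200 km away:
print(walker_skyglow(1_000_000, 50))   # ~0.57 -> sky roughly 57% brighter
print(walker_skyglow(1_000_000, 200))  # ~0.018 -> only ~2% brighter
```

The steep distance falloff is why observatories were historically sited on remote mountaintops, and why urban sprawl closing that distance matters so much.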

    A white-domed building on a hilltop among trees.
    The Mt. Wilson Observatory in the Angeles National Forest may look remote, but urban sprawl from Los Angeles means that it is much closer to dense human activity today than it was when it was established in 1904.
    USDA/USFS, CC BY

    When the Mt. Wilson Observatory was constructed in the Angeles National Forest near Pasadena, California, in the early 1900s, it was a very dark site, considerably far from the 500,000 people living in Greater Los Angeles. Today, 18.6 million people live in the LA area, and urban sprawl has brought civilization much closer to Mt. Wilson.

    When Kitt Peak National Observatory was first under construction in the late 1950s, it was far from metro Tucson, Arizona, with its population of 230,000. Today, that area houses 1 million people, and Kitt Peak faces much more light pollution.

    Even telescopes in darker, more secluded regions – like northern Chile or western Texas – experience light pollution from industrial activities like open-pit mining or oil and gas facilities.

    A set of buildings atop a mountain in the desert.
    European Southern Observatory’s Very Large Telescope at the Paranal site in the sparsely populated Atacama Desert in northern Chile.
    J.L. Dauvergne & G. Hüdepohl/ESO, CC BY-ND

    The case of the European Southern Observatory

    The European Southern Observatory, which operates four of the world’s largest optical telescopes, faces an instructive modern challenge. Its site in northern Chile is very remote, and it is nominally covered by strict national regulations protecting the dark sky.

    AES Chile, an energy provider with strong U.S. investor backing, announced a plan in December 2024 for the development of a large industrial plant and transport hub close to the observatory. The plant would produce liquid hydrogen and ammonia for green energy.

    Even though it would be formally compliant with the national lighting norm, the fully built operation could scatter enough artificial light into the night sky to degrade the observatory’s currently pristine darkness to a level similar to that of the legacy observatories now near large urban areas.

    A map showing two industrial sites, one large, marked on a map of Chile. Just a few miles to the north are three telescope sites.
    The location of AES Chile’s planned project in relation to the European Southern Observatory’s telescope sites.
    European Southern Observatory, CC BY-ND

    This light pollution could cost the observatory some of its ability to detect and measure the faintest galaxies and stars.

    Light pollution doesn’t only affect observatories. Today, around 80% of the world’s population cannot see the Milky Way at night. Some Asian cities are so bright that the eyes of people walking outdoors never become fully dark-adapted.

    In 2009, the International Astronomical Union declared that there is a universal right to starlight. The dark night sky belongs to all people – its awe-inspiring beauty is something that you don’t have to be an astronomer to appreciate.

    The Conversation

    Richard Green is affiliated with the International Astronomical Union and the American Astronomical Society, as well as DarkSky International.

    ref. Light pollution is encroaching on observatories around the globe – making it harder for astronomers to study the cosmos – https://theconversation.com/light-pollution-is-encroaching-on-observatories-around-the-globe-making-it-harder-for-astronomers-to-study-the-cosmos-260387