Category: Global

  • MIL-OSI Global: Reproducibility may be the key idea students need to balance trust in evidence with healthy skepticism

    Source: The Conversation – USA – By Sarah R. Supp, Associate Professor of Data Analytics, Denison University

    Reproducing results can increase trust in scientific studies. Huntstock via Getty Images

    Many people have been there.

    The dinner party is going well until someone introduces a controversial topic. In today’s world, that could be anything from vaccines to government budget cuts to immigration policy. Conversation starts to get heated. Finally, someone announces with great authority that a scientific study supports their position. The discussion comes to an abrupt halt because the guests disagree about how far scientific evidence can be trusted. Some may believe science always speaks the truth, some may think science can never be trusted, and others may disagree about which of two studies with contradicting claims is “right.”

    How can the dinner party – or society – move beyond this kind of impasse? In today’s world of misinformation and disinformation, healthy skepticism is essential. At the same time, much scientific work is rigorous and trustworthy. How do you reach a healthy balance between trust and skepticism? How can researchers increase the transparency of their work to make it possible to evaluate how much confidence the public should have in any particular study?

    As teachers and scholars, we see these problems in our own classrooms and in our students – and they are mirrored in society.

    The concept of reproducibility may offer important answers to these questions.

    Reproducibility is what it sounds like: reproducing results. In some ways, reproducibility is like a well-written recipe, such as a recipe for an award-winning cake at the county fair. To help others reproduce their cake, the proud prizewinner must clearly document the ingredients used and then describe each step of the process by which the ingredients were transformed into a cake. If others can follow the directions and come up with a cake of the same quality, then the recipe is reproducible.

    Think of the English scholar who claims that Shakespeare did not author a play that has historically been attributed to him. A critical reader will want to know exactly how they arrived at that conclusion. What is the evidence? How was it chosen and interpreted? By parsing the analysis step by step, reproducibility allows a critical reader to gauge the strength of any kind of argument.

    We are a group of researchers and professors from a wide range of disciplines who came together to discuss how we use reproducibility in our teaching and research.

    Based on our expertise and the students we encounter, we collectively see a need for higher-education students to learn about reproducibility in their classes, across all majors. It has the potential to benefit students and, ultimately, to enhance the quality of public discourse.

    The foundation of credibility

    Reproducibility has always been a foundation of good science because it allows researchers to scrutinize each other’s studies for rigor and credibility and expand upon prior work to make new discoveries. Researchers are increasingly paying attention to reproducibility in the natural sciences, such as physics and medicine, and in the social sciences, such as economics and environmental studies. Even researchers in the humanities, such as history and philosophy, are concerned with reproducibility in studies involving analysis of texts and evidence, especially with digital and computational methods. Increased interest in transparency and accessibility has followed the rising importance of computer algorithms and numerical analysis in research. This work should be reproducible, but it often remains opaque.

    Broadly, research is reproducible if it answers the question: “How do you know?” – such that another researcher could theoretically repeat the study and produce consistent results.

    Reproducible research is explicit about the materials and methods that were used in a study to make discoveries and come to conclusions. Materials include everything from scientific instruments such as a tensiometer measuring soil moisture to surveys asking people about their daily diet. They also include digital data such as spreadsheets, digitized historic texts, satellite images and more. Methods include how researchers make observations and analyze data.

    To reproduce a social science study, for example, we would ask: What is the central question or hypothesis? Who was in the study? How many individuals were included? What were they asked? After data was collected, how was it cleaned and prepared for analysis? How exactly was the analysis run?

    Proper documentation of all these steps, plus making available the original data from the study, allows other scientists to redo the research, evaluate the decisions made during the process of gathering and analyzing information, and assess the credibility of the findings.

    This short video, made by the National Academies, explains the key concepts in reproducing scientific findings and notes ways the process can be improved.

    Over the past 20 years, reproducibility has become increasingly important. Scientists have discovered that some published studies are too poorly documented for others to repeat, lack verified data sources, are questionably designed, or are even fraudulent.

    Putting reproducibility to work: An example

    A highly contentious, since-retracted study from 1998 claimed a link between the measles, mumps and rubella (MMR) vaccine and autism. Scientists and journalists used their understanding of reproducibility to uncover the study’s flaws.

    The central question of the study was not about vaccines but aimed to explore a possible relationship between colitis – an inflammation of the large intestine – and developmental disorders. The authors explicitly wrote, “We did not prove an association between measles, mumps, and rubella vaccine and the syndrome described.”

    The study observed just 12 patients who were referred to the authors’ gastroenterology clinic and had histories of recent behavioral disorders, including autism. This sample of children is simply too small and too selective to support definitive conclusions.

    In this study, the researchers translated children’s medical charts into summary tables for comparison. When a journalist attempted to reproduce the published data tables from the children’s medical histories, they found pervasive inconsistencies.

    Reproducibility allows for corrections in research. The article was published in a respected journal, but it lacked transparency with regard to patient recruitment, data analysis and conflicts of interest. Whereas traditional peer review involves critical evaluation of a manuscript, reproducibility also opens the door to evaluating the underlying data and methods. When independent researchers attempted to reproduce this study, they found deep flaws. The article was retracted by the journal and by most of its authors. Independent research teams conducted more robust studies, finding no relationship between vaccines and autism.

    Each research discipline has its own set of best practices for achieving reproducibility. Disciplines in which researchers use computational or statistical analysis require sharing the data and software code for reproducing studies. In other disciplines, researchers interpret nonnumerical qualities of data sources such as interviews, historical texts, social media content and more. These disciplines are working to develop standards for sharing their data and research designs for reproducibility. Across disciplines, the core principles are the same: transparency of the evidence and arguments by which researchers arrived at their conclusions.

    Reproducibility in the classroom

    Colleges and universities are uniquely situated to promote reproducibility in research and public conversations. Critical thinking, effective communication and intellectual integrity, staples of higher-education mission statements, are all served by reproducibility.

    Teaching faculty at colleges and universities have started taking some important steps toward incorporating reproducibility into a wide range of undergraduate and graduate courses. These include assignments to replicate existing studies, training in reproducible methods to conduct and document original research, preregistration of hypotheses and analysis plans, and tools to facilitate open collaboration among peers. A number of initiatives to develop and disseminate resources for teaching reproducibility have been launched.

    Despite some progress, reproducibility still needs a central place in higher education. It can be integrated into any course in which students weigh evidence, read published literature to make claims, or learn to conduct their own research. This change is urgently needed to train the next generation of researchers, but that is not the only reason.

    Reproducibility is fundamental to constructing and communicating claims based on evidence. Through a reproducibility lens, students evaluate claims in published studies as contingent on the transparency and soundness of the evidence and analysis on which the claims are based. When faculty teach reproducibility as a core expectation from the beginning of a curriculum, they encourage students to internalize its principles in how they conduct their own research and engage with the research published by others.

    Institutions of higher education already prioritize cultivating engaged, literate and critical citizens capable of solving the world’s most challenging contemporary problems. Teaching reproducibility equips students, and members of the public, with the skills they need to critically analyze claims in published research, in the media and even at dinner parties.

    Also contributing to this article are participants in the 2024 Reproducibility and Replicability in the Liberal Arts workshop, funded by the Alliance to Advance Liberal Arts Colleges (AALAC) [in alphabetical order]: Ben Gebre-Medhin (Department of Sociology and Anthropology, Mount Holyoke College), Xavier Haro-Carrión (Department of Geography, Macalester College), Emmanuel Kaparakis (Quantitative Analysis Center, Wesleyan University), Scott LaCombe (Statistical and Data Sciences, Smith College), Matthew Lavin (Data Analytics Program, Denison University), Joseph J. Merry (Sociology Department, Furman University), Laurie Tupper (Department of Mathematics and Statistics, Mount Holyoke College).

    Sarah Supp receives funding from the National Science Foundation, awards #1915913, #2120609, and #2227298.

    Joseph Holler receives funding from the National Science Foundation, award #2049837.

    Peter Kedron receives funding from the National Science Foundation, award #2049837 and from Esri.

    Richard Ball has received funding from the Alfred P. Sloan Foundation and the United Kingdom Reproducibility Network.

    Anne M. Nurse and Nicholas J. Horton do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Reproducibility may be the key idea students need to balance trust in evidence with healthy skepticism – https://theconversation.com/reproducibility-may-be-the-key-idea-students-need-to-balance-trust-in-evidence-with-healthy-skepticism-251771

    MIL OSI – Global Reports

  • MIL-OSI Global: How your electric bill may be paying for big data centers’ energy use

    Source: The Conversation – USA – By Ari Peskoe, Lecturer on Law, Harvard University

    Your power bill may be hiding something. photoschmidt/iStock/Getty Images Plus

    In the race to develop artificial intelligence, large technology companies such as Google and Meta are trying to secure massive amounts of electricity to power new data centers. Electric utilities see the prospect of earning large profits by providing electricity to these power-hungry facilities and are competing for their business by offering discounts not available to average consumers.

    In our paper Extracting Profits from the Public, we explain how utilities are forcing regular ratepayers to pay for the discounts enjoyed by some of the nation’s largest companies and identify ways policymakers can limit the costs to the public.

    Shifting costs

    In much of the U.S., utilities are monopolists. Within their service territories, they are the only companies allowed to deliver electricity to consumers. To fund their operations, utilities split the costs of maintaining and expanding their systems among all ratepayers – homeowners, businesses, warehouses, factories and anyone else who uses electricity.

    Historically, a utility expanded its system to meet growing demand for electricity from new factories, businesses and homes. To pay for its expansion – new power plants, new transmission lines and other equipment – the utility would propose to raise electricity rates by different amounts for various types of consumers.

    Public utility commissions are state agencies charged with ensuring that the public gets a fair deal. These commissions monitor how much money the utility spends to provide electric service and how its costs are shared among various types of ratepayers, including residential, commercial and industrial consumers. Ultimately, the public utility commission is supposed to approve any rate increases based on its assessment of what’s fair to consumers.

    Splitting the utility’s costs among all consumers made perfect sense when population growth and economic development across the economy stimulated the need for new infrastructure. But today, in many utility service territories, most of the projected growth in electricity demand is due to new data centers.

    Here’s the problem for consumers: To meet data center demand, utilities are building new power plants and power lines that are needed only because of data center growth. If state regulators allow utilities to follow the standard approach of splitting the costs of new infrastructure among all consumers, the public will end up paying to supply data centers with all that power.

    An artist’s rendering of a proposed Meta data center in Richland Parish, La.
    Meta via Facebook

    A big price tag

    One particularly acute example is in Louisiana. A Meta data center under development in the northeastern corner of the state is projected to use, by our calculations, twice as much energy as the city of New Orleans.

    Entergy, the regional monopoly utility, is proposing to build more than US$3 billion worth of new gas-fired power plants and delivery infrastructure to meet the data center’s energy demand. Rather than billing Meta directly for these costs, Entergy is proposing to include the costs in rates paid by all customers.

    Entergy claims its contract with Meta will cover some portion of the $3 billion price tag, which it says will mitigate any increases in consumers’ bills. But Entergy has asked state regulators to keep key terms of the contract secret, and only a redacted version of its application is available online.

    The public has no idea how much it might pay if the commission approves the contract. And if the Meta data center ends up using much less power than the company anticipates, the public does not know whether it would be on the hook to pay higher electricity rates for longer periods to guarantee Entergy a profit.

    The electronics in data centers consume large amounts of electricity.
    RJ Sangosti/MediaNews Group/The Denver Post via Getty Images

    Secret agreements

    Our research, reviewing nearly 50 public utility commission proceedings about data centers’ power needs across 10 states, uncovered dozens of secretive contracts between utilities and data centers. Unlike Louisiana, most states require utilities to submit to the public utility commission their one-off deals with data centers, but they allow utilities to conceal the pricing terms from the public.

    In normal rate-review cases, numerous parties advocate for their interests in a public proceeding, including members of the public, industry groups and the utility itself. But as our paper finds, utility commission reviews of data center contracts are based on confidential utility filings that are inaccessible to the general public. Few, if any, outsiders participate, and as a result the commission often hears only the utility’s version of the deal.

    Because the pricing terms are secret, it is impossible to know whether the price a utility is offering a data center is too low to cover the utility’s costs of providing it with power – which would mean that the public is subsidizing the deal. Utilities, however, have a long history of exploiting their monopolies to shift costs to the public, including through secret contracts.

    Electric utilities also charge customers for the costs of building and maintaining transmission networks.
    Jay L. Clendenin/Getty Images

    Other public costs

    Our paper also explores other ways that the public pays for data center energy costs. For instance, many high-voltage interstate transmission projects, which connect large power plants to local delivery systems, are developed through regional planning processes run by numerous utilities. These alliances have complex rules for splitting the costs of new transmission lines and equipment among their utility members.

    Once a utility is charged its share, it spreads the costs of new transmission projects among its local ratepayers. Because some regions are building new transmission capacity to accommodate data centers, our analysis finds that the public has been forced to pay billions of dollars for data center growth.

    Data center energy costs can also be shifted when data centers connect directly to existing power plants. Under what are called “co-location” deals, the power plant stops selling energy to the wider public and just sells to the data center. With less supply in the overall market, prices go up and the public faces higher bills as a result.

    Many state legislatures are noticing these problems and working to figure out how to address them. Several recent bills would set new terms and conditions for future data center deals that could help protect the public from data center energy costs.

    Ari Peskoe is the Director of the Electricity Law Initiative at the Harvard Law School Environmental and Energy Law Program (EELP). EELP receives funding from philanthropic foundations that support the clean energy transition.

    Eliza Martin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How your electric bill may be paying for big data centers’ energy use – https://theconversation.com/how-your-electric-bill-may-be-paying-for-big-data-centers-energy-use-257794

  • MIL-OSI Global: 100 years ago, the Supreme Court made a landmark ruling on parents’ rights in education – today, another case raises new questions

    Source: The Conversation – USA – By Charles J. Russo, Joseph Panzer Chair in Education and Research Professor of Law, University of Dayton

    A selection of books that are part of the Supreme Court case Mahmoud v. Taylor are pictured on April 15, 2025, in Washington. AP Photo/Pablo Martinez Monsivais

    A century ago, the Supreme Court handed down one of its most important cases about education. On June 1, 1925, the court struck down an Oregon statute requiring all students to attend public school – a law critics argued was meant to limit faith-based schools, at a time when anti-Catholic bias was still common in parts of the United States.

    The majority opinion in Pierce v. Society of Sisters of the Holy Name of Jesus and Mary included a now-famous dictum about parents’ rights to shape their children’s upbringing. According to the court, “the child is not the mere creature of the state; those who nurture him and direct his destiny have the right, coupled with the high duty, to recognize and prepare him for additional obligations.”

    Soon, the Supreme Court is expected to release another decision around parental beliefs and education: Mahmoud v. Taylor. The plaintiffs are parents who want to excuse their children from public school lessons involving storybooks with LGBTQ+ characters – lessons they assert contradict their religious beliefs.

    As someone who teaches education law, I believe this is perhaps the court’s most significant case on parental rights since Pierce. Mahmoud raises questions not only about religious freedom, but also about educators’ ability to determine curricula, and public education in a pluralistic society.

    Picture-book debate

    Controversy arose during the 2022-23 school year in Montgomery County, Maryland’s largest school district, when officials approved various storybooks with LGBTQ+-inclusive themes to be incorporated into the English language-arts curriculum for preschool and elementary students.

    Some parents challenged the materials, including “Pride Puppy!”, a picture book the board later removed from use. Originally approved for preschool and pre-K, the story portrays a family whose puppy gets lost at an LGBTQ+ Pride parade, devoting a page to each letter of the alphabet. At the end of the book, a long “search and find” list of words for children to go back and look for in the pictures of the parade includes “[drag] queen” and “king,” “leather” and “lip ring.”

    Other materials for older children included stories about same-sex marriage, a transgender child and nonbinary bathroom signs.

    Parents who objected to the use of these materials on religious grounds sought to excuse their children from lessons using them. The parents basically argued that requiring their children to participate compelled or coerced them to go against their families’ religious beliefs.

    A group of parents protest in Rockville, Md., on June 27, 2023, in an effort to opt out of books that feature LGBTQ+ characters in Montgomery County schools.
    Sarah L. Voisin/The Washington Post via Getty Images

    Initially, officials agreed to allow opt-outs for elementary schoolers whose parents objected to the materials. However, a day later they changed their minds. Since then, school officials have cited concerns about absenteeism, the feasibility of accommodating opt-out requests, and a desire to avoid stigmatizing LGBTQ+ students or families as reasons for their policy.

    A group of Muslim, Orthodox Christian and Catholic families challenged the board’s refusal to excuse their children from lessons using the disputed materials.

    The federal trial court, however, rejected the parents’ claim that having no opt-outs violated their right to due process.

    Parents appealed, but the 4th Circuit affirmed the decision in favor of the school board, 2-1. The court added that officials had not violated the parents’ First Amendment rights to freely exercise their faith. “There’s no evidence at present that the Board’s decision not to permit opt-outs compels the Parents or their children to change their religious beliefs or conduct, either at school or elsewhere,” the panel concluded.

    The dissenting judge stridently countered. Officials violated the parents’ free exercise rights by forcing them “to make a choice,” he wrote, between “either adher[ing] to their faith, or receiv[ing] a free public education for their children.” He also noted that the board’s opt-out policy was not neutral toward religion, because under Maryland regulations, children may be excused from sex-ed lessons.

    In January 2025, the Supreme Court agreed to hear the parents’ appeal, addressing whether the schools are burdening parents’ free-exercise rights.

    Court record

    In their brief to the Supreme Court and oral arguments, the parents cited Wisconsin v. Yoder, a Supreme Court ruling from 1972. The court found that Amish parents did not have to send their children to school after the eighth grade, which the families argued would violate their religious beliefs. Amish communities descend from Anabaptist Christians who fled persecution in Europe and emphasize living simply, eschewing many modern technologies.

    In Yoder, the justices agreed with the parents that their children received all the education they needed in their home communities. Under the First Amendment, parents have the right “to guide the religious future and education of their children,” the majority wrote, a matter “established beyond debate.”

    During oral arguments for Mahmoud in April 2025, some justices briefly discussed another precedent: the Supreme Court’s 1943 judgment in West Virginia State Board of Education v. Barnette, resolved at the height of U.S. involvement in World War II. Here, three parents who were Jehovah’s Witnesses refused to have their children participate in public schools’ flag salute and Pledge of Allegiance because they viewed it as a form of idolatry contrary to their religious beliefs. Others objected to the salute as “being too much like Hitler’s.”

    The court reasoned that educators could not compel students to participate, because forcing children – or anyone – to engage in activities inconsistent with their beliefs is contrary to their First Amendment rights to the free exercise of religion and freedom of speech.

    Viewed together, these cases highlight how the court has granted parents significant leeway to exempt their children from educational activities inconsistent with their religious beliefs.

    Questions at court

    During oral arguments, a majority of justices appeared to support the parents’ request to excuse children from lessons involving the books about LGBTQ+ characters.

    The board’s attorney argued that students did not have to agree with the books’ messages, simply to participate in the lesson. Being exposed to an idea “does not burden free exercise,” he said.

    Protesters in support of LGBTQ+ rights and against book bans outside the U.S. Supreme Court building on April 22, 2025, the day the court heard arguments in Mahmoud v. Taylor.
    Anna Moneymaker/Getty Images

    Chief Justice John Roberts, however, queried whether it is realistic for 5-year-olds to understand that distinction. He asked, “Do you want to say you don’t have to follow the teacher’s instructions, you don’t have to agree with the teacher? I mean, that may be a more dangerous message than some of the other things.”

    Other conservative justices also appeared skeptical of the idea that the lessons were merely exposing young children to ideas, but not instilling moral lessons. The storybooks do not simply explain that some people believe something and others do not, Justice Amy Coney Barrett suggested; they inform students that “this is the right view of the world.” Similarly, Justice Neil Gorsuch remarked that telling students that “some people think X, and X is wrong and hurtful and negative” is “more than exposure.”

    “What is the big deal about allowing them to opt out of this?” Justice Samuel Alito asked.

    Conversely, Justice Elena Kagan acknowledged that parents’ concerns were “serious,” but wondered how to draw limits on opt-out policies. Did the parents’ argument suggest that anytime “a religious person confronts anything in a classroom that conflicts with her religious beliefs or her parents’ that – that the parent can then demand an opt-out?”

    Justice Sonia Sotomayor pressed the plaintiffs’ attorney on whether “the mere exposure to things that you object to” really counts as coercion. And Justice Ketanji Brown Jackson questioned why, even if opt-outs are not allowed, public schools teaching “something that the parent disagrees with” is coercive, given that homeschooling and private schools are legal.

    Mahmoud raises challenging questions about curricular content, parental control and free exercise of religion – questions the court will hopefully resolve. A ruling is expected in June or early July 2025.

    Charles J. Russo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. 100 years ago, the Supreme Court made a landmark ruling on parents’ rights in education – today, another case raises new questions – https://theconversation.com/100-years-ago-the-supreme-court-made-a-landmark-ruling-on-parents-rights-in-education-today-another-case-raises-new-questions-257876

  • MIL-Evening Report: Punishment for Te Pāti Māori over Treaty haka stands – but MPs ‘will not be silenced’

    RNZ News

    Aotearoa New Zealand’s Parliament has confirmed the unprecedented punishments proposed for opposition indigenous Te Pāti Māori MPs who performed a haka in protest against the Treaty Principles Bill.

    Te Pāti Māori co-leaders Debbie Ngarewa-Packer and Rawiri Waititi will be suspended for 21 days, and MP Hana-Rawhiti Maipi-Clarke suspended for seven days, taking effect immediately.

    Opposition parties tried to reject the recommendation, but did not have the numbers to vote it down.

    Te Pāti Māori MPs speak after being suspended.  Video: RNZ/Mark Papalii

    The heated debate to consider the proposed punishment came to an end just before Parliament was due to rise.

    Waititi moved to close the debate and no party disagreed, ending the possibility of it carrying on in the next sitting week.

    Leader of the House Chris Bishop — the only National MP who spoke — kicked off the debate earlier in the afternoon saying it was “regrettable” some MPs did not vote on the Budget two weeks ago.

    Bishop had called a vote ahead of Budget Day to suspend the privileges report debate to ensure the Te Pāti Māori MPs could take part in the Budget, but not all of them turned up.

    Robust, rowdy debate
    The debate was robust and rowdy, with both deputy speaker Barbara Kuriger and temporary speaker Tangi Utikere repeatedly having to ask MPs to quieten down.

    Flashback: Te Pāti Māori MP Hana-Rawhiti Maipi-Clarke led a haka in Parliament and tore up a copy of the Treaty Principles Bill at the first reading on 14 November 2024 . . . a haka is traditionally used as an indigenous show of challenge, support or sorrow. Image: RNZ/Samuel Rillstone/APR screenshot

    Tākuta Ferris spoke first for Te Pāti Māori, saying the haka was a “signal of humanity” and a “raw human connection”.

    He said Māori had faced acts of violence for too long and would not be silenced by “ignorance or bigotry”.

    “Is this really us in 2025, Aotearoa New Zealand?” he asked the House.

    “Everyone can see the racism.”

    He said the Privileges Committee’s recommendations were inconsistent, noting that Labour MP Peeni Henare, who also participated in the haka, did not face suspension.

    MP Tākuta Ferris spoke for Te Pāti Māori. Image: RNZ/Samuel Rillstone

    Henare attended the committee and apologised, which contributed to his lesser sanction.

    ‘Finger gun’ gesture
    MP Parmjeet Parmar — a member of the Committee — was first to speak on behalf of ACT, and referenced the hand gesture — or “finger gun” — that Te Pāti Māori co-leader Debbie Ngarewa-Packer made in the direction of ACT MPs during the haka.

    Parmar told the House debate could be used to disagree on ideas and issues, and there was not a place for intimidating physical gestures.

    Greens co-leader Marama Davidson said New Zealand’s Parliament could lead the world in terms of involving the indigenous people.

    She said the Green Party strongly rejected the committee’s recommendations and proposed an amendment to remove the suspensions, asking that the Te Pāti Māori MPs be censured instead.

    Davidson said the House had evolved in the past — such as the inclusion of sign language and breast-feeding in the House.

    She said the Greens were challenging the rules, and did not need an apology from Te Pāti Māori.

    Foreign Minister and NZ First party leader Winston Peters called Te Pāti Māori “a bunch of extremists”. Image: RNZ/Samuel Rillstone

    NZ First leader Winston Peters said the Te Pāti Māori and Green Party speeches so far showed “no sincerity”, saying countless haka had taken place in Parliament but only after first consulting the Speaker.

    “They told the media they were going to do it, but they didn’t tell the Speaker did they?

    ‘Bunch of extremists’
    “The Māori party are a bunch of extremists,” Peters said. “New Zealand has had enough of them.”

    Peters was made to apologise after taking aim at Waititi, calling him “the one in the cowboy hat” with “scribbles on his face” [a reference to his traditional indigenous moko — tattoo]. He continued afterward, describing Waititi as possessing “anti-Western values”.

    Labour’s Willie Jackson congratulated Te Pāti Māori for the “greatest exhibition of our culture in the House in my lifetime”.

    Jackson said the Treaty bill was a great threat, and was met by a great haka performance. He was glad the ACT Party was intimidated, saying that was the whole point of doing the haka.

    He also called for a bit of compromise from Te Pāti Māori — encouraging them to say sorry — but reiterated Labour’s view the sanctions were out of proportion with past indiscretions in the House.

    Green Party co-leader Chlöe Swarbrick said the prime minister was personally responsible if the proposed sanctions went ahead. Image: RNZ/Samuel Rillstone

    Greens co-leader Chlöe Swarbrick said the debate “would be a joke if it wasn’t so serious”.

    “Get an absolute grip,” she said to the House, arguing the prime minister “is personally responsible” if the House proceeds with the committee’s proposed sanctions.

    Eye of the beholder
    She accused National’s James Meager of “pointing a finger gun” at her — the same gesture coalition MPs had criticised Ngarewa-Packer for during her haka. The Speaker accepted he had not intended to; Swarbrick said it was an example where the interpretation could be in the eye of the beholder.

    She said if the government could “pick a punishment out of thin air” that was “not a democracy”, putting New Zealand in very dangerous territory.

    An emotional Maipi-Clarke said she had been silent on the issue for a long time, the party’s voices in haka having sent shockwaves around the world. She questioned whether that was why the MPs were being punished.

    “Since when did being proud of your culture make you racist?”

    “We will never be silenced, and we will never be lost,” she said, calling the Treaty Principles bill a “dishonourable vote”.

    She had apologised to the Speaker and accepted the consequence laid down on the day, but refused to apologise further. She listed other incidents in Parliament that resulted in no punishment.

    NZ Parliament TV: Te Pāti Māori Privileges committee debate.  Video: RNZ

    Maipi-Clarke called for the Treaty of Waitangi to be recognised in the Constitution Act, and for MPs to be required to honour it by law.

    ‘Clear pathway forward’
    “The pathway forward has never been so clear,” she said.

    ACT’s Nicole McKee said there were excuses being made for “bad behaviour”, that the House was for making laws and having discussions, and “this is not about the haka, this is about process”.

    She told the House she had heard no good ideas from Te Pāti Māori, who she said resorted to intimidation when they did not get their way; the MPs needed to “grow up” and learn to debate issues. She hoped 21 days would give them plenty of time to think about their behaviour.

    Labour MP and former Speaker Adrian Rurawhe started by saying there were “no winners in this debate”, and it was clear to him it was the government, not the Parliament, handing out the punishments.

    He said the proposed sanctions set a precedent for future penalties, and governments might use it as a way to punish opposition, imploring National to think twice.

    He also said an apology from Te Pāti Māori would “go a long way”, saying they had a “huge opportunity” to leave a legacy in the House, but it was their choice — and while many would agree with the party, there were rules and “you can’t have it both ways”.

    Te Pāti Māori co-leader Rawiri Waititi speaking to the media after the Privileges Committee debate. Image: RNZ/Mark Papalii

    Te Pāti Māori co-leader Rawiri Waititi said there had been many instances of misinterpretations of the haka in the House, and said it was unclear why they were being punished: “is it about the haka . . . is it about the gun gestures?”

    “Not one committee member has explained to us where 21 days came from,” he said.

    Hat and ‘scribbles’ response
    Waititi took aim at Peters over his comments targeting his hat and “scribbles” on his face.

    He said the haka was an elevation of indigenous voice and the proposed punishment was a “warning shot from the colonial state that cannot stomach” defiance.

    Waititi said that throughout history when Māori did not play ball, the “coloniser government” reached for extreme sanctions, ending with a plea to voters: “Make this a one-term government, enrol, vote”.

    He brought out a noose to represent Māori wrongfully put to death in the past, saying “interpretation is a feeling, it is not a fact . . . you’ve traded a noose for legislation”.

    This article is republished under a community partnership agreement with RNZ.

    MIL OSI AnalysisEveningReport.nz

  • MIL-OSI Global: Why the global tax system needs fixing – podcast

    Source: The Conversation – UK – By Mend Mariwany, Producer, The Conversation Weekly Podcast, The Conversation

    Cagkan Sayin/Shutterstock

    For decades, multinational corporations have used sophisticated strategies to shift profits away from where they do business. As a result, countries around the world lose an estimated US$500 billion annually in unpaid taxes, with developing nations hit particularly hard.

    In the first of two episodes for The Conversation Weekly podcast called The 15% solution, we explore how companies have exploited loopholes in the global tax system. The episode features insights from Annette Alstadsæter, director of the Centre for Tax Research at the Norwegian University of Life Sciences, and Tarcisio Diniz Magalhaes, a professor of tax law at the University of Antwerp in Belgium.

    The problem goes beyond clever accounting. Our international tax rules were built for an industrial age where companies were physically present where they operated. But today’s tech giants can generate billions in revenue from users around the world, without having a single employee or office there, leaving those nations unable to tax those profits at all.

    In 2021, after years of international negotiations, the Organisation for Economic Co-operation and Development unveiled a global tax deal designed to address tax avoidance through a minimum corporate tax rate of 15%. But will this new framework actually work? And what happens when major economies refuse to participate?
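    The mechanics of the 15% minimum can be sketched with a small calculation. The function and figures below are illustrative assumptions, not details from the podcast; broadly, under the OECD framework a “top-up” tax lifts a company’s effective rate in each jurisdiction to the minimum:

    ```python
    # Illustrative sketch of a minimum-tax "top-up" calculation.
    # The function name and all figures here are hypothetical examples.

    MINIMUM_RATE = 0.15  # the 15% global minimum corporate rate

    def top_up_tax(profit: float, taxes_paid: float) -> float:
        """Extra tax needed to bring the effective rate on `profit`
        up to the 15% minimum (zero if already at or above it)."""
        effective_rate = taxes_paid / profit
        if effective_rate >= MINIMUM_RATE:
            return 0.0
        return (MINIMUM_RATE - effective_rate) * profit

    # A firm booking US$1bn of profit in a jurisdiction where it paid
    # only 2% tax would owe a further 13%:
    print(top_up_tax(1_000_000_000, 20_000_000))  # 130000000.0
    ```

    The intended effect is that shifting profit to a low-tax jurisdiction no longer pays: somewhere along the chain, the rate is topped up to at least 15%.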

    Across two episodes, The 15% solution explores why a new global tax regime is needed, whether it can fix a broken system, and what’s at stake if it fails. Part two will be published on June 6.


    This episode of The Conversation Weekly was written and produced by Mend Mariwany. Gemma Ware is the executive producer. Mixing and sound design by Eloise Stevens and theme music by Neeta Sarl.

    Newsclips in this episode from NBC News, France24, BBC News, DW News and TRT World.

    Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here. A transcript of this episode is available on Apple Podcasts.

    Tarcísio Diniz Magalhães has received funding from the University of Antwerp Research Fund, Flanders Research Foundation, Social Sciences and Humanities Research Council in Canada and the Ford Foundation. He is a member of the Antwerp Tax Academy and DigiTax Centre of Excellence and is lead professor on International Taxation, Working Group on Tax Reform, ACMinas – Commercial and Business Association of Minas Gerais. Annette Alstadsæter is the Director of Skatteforsk – Centre for Tax Research which collaborates with the EU Tax Observatory on the Atlas of the Offshore World.

    ref. Why the global tax system needs fixing – podcast – https://theconversation.com/why-the-global-tax-system-needs-fixing-podcast-257672

    MIL OSI – Global Reports

  • MIL-OSI Global: The secret to Ukraine’s battlefield successes against Russia – it knows wars are never won in the past

    Source: The Conversation – Global Perspectives – By Matthew Sussex, Associate Professor (Adj), Griffith Asia Institute; and Fellow, Strategic and Defence Studies Centre, Australian National University

    The iconoclastic American general Douglas MacArthur once said that “wars are never won in the past”.

    That sentiment certainly seemed to ring true following Ukraine’s recent audacious attack on Russia’s strategic bomber fleet, using small, cheap drones housed in wooden pods and transported near Russian airfields in trucks.

    The synchronised operation targeted Russian Air Force planes as far away as Irkutsk – more than 5,000 kilometres from Ukraine. Early reports suggest around a third of Russia’s long-range bombers were either destroyed or badly damaged. Russian military bloggers have put the estimated losses lower, but agree the attack was catastrophic for the Russian Air Force, which has struggled to adapt to Ukrainian tactics.

    This particular attack was reportedly 18 months in the making. To keep it secret was an extraordinary feat. Notably, Kyiv did not inform the United States that the attack was in the offing. The Ukrainians judged – perhaps understandably – that sharing intelligence on their plans could have alerted the Kremlin in relatively short order.

    Ukraine’s success once again demonstrates that its armed forces and intelligence services are the modern masters of battlefield innovation and operational security.

    Finding new solutions

    Western military planners have been carefully studying Ukraine’s successes ever since its forces managed to blunt Russia’s initial onslaught deep into its territory in early 2022, and then launched a stunning counteroffensive that drove the Russian invaders back towards their original starting positions.

    There have been other lessons, too, about how the apparently weak can stand up to the strong. These include:

    • attacks on Russian President Vladimir Putin’s vanity project, the Kerch Bridge, linking the Russian mainland to occupied Crimea (the last assault occurred just days ago)

    • the relentless targeting of Russia’s oil and gas infrastructure with drones

    • attacks against targets in Moscow to remind the Russian populace about the war, and

    • its incursion into the Kursk region, which saw Ukrainian forces capture around 1,000 square kilometres of Russian territory.

    On each occasion, Western defence analysts have questioned the wisdom of Kyiv’s moves.

    Why invade Russia using your best troops when Moscow’s forces continue laying waste to cities in Ukraine?

    Why hit Russia’s energy infrastructure if it doesn’t markedly impede the battlefield mobility of Russian forces?

    And why attack symbolic targets like bridges when it could provoke Putin into dangerous “escalation”?

    The answer to this is the key to effective innovation during wartime. Ukraine’s defence and security planners have interpreted their missions – and their best possible outcomes – far more accurately than conventional wisdom would have thought.

    Above all, they have focused on winning the war they are in, rather than those of the past. This means:

    • using technological advancements to force the Russians to change their tactics

    • shaping the information environment to promote their narratives and keep vital Western aid flowing, and

    • deploying surprise attacks not just as ways to boost public morale, but also to impose disproportionate costs on the Russian state.

    The impact of Ukraine’s drone attack

    In doing so, Ukraine has had an eye for strategic effects. As the smaller nation reliant on international support, this has been the only logical choice.

    Putin has been prepared to commit a virtually inexhaustible supply of expendable cannon fodder to continue his country’s war ad infinitum. Russia has typically won its wars this way – by attrition – albeit at a tremendous human and material cost.

    That said, Ukraine’s most recent surprise attack does not change the overall contours of the war. The only person with the ability to end it is Putin himself.

    That’s why Ukraine is putting as much pressure as possible on his regime, as well as domestic and international perceptions of it. It is key to Ukraine’s theory of victory.

    This is also why the latest drone attack is so significant. Russia needs its long-range bomber fleet, not just to fire conventional cruise missiles at Ukrainian civilian and infrastructure targets, but as aerial delivery systems for its strategic nuclear arsenal.

    The destruction of even a small portion of Russia’s deterrence capability has the potential to affect its nuclear strategy, a strategy it has increasingly relied on to threaten the West.

    A second impact of the attack is psychological. The drone attacks are more likely to enrage Putin than bring him to the bargaining table. However, they reinforce to the Russian military that there are few places – even on its own soil – where its air force can act with operational impunity.

    The surprise attacks also provide a shot in the arm domestically, reminding Ukrainians they remain very much in the fight.

    Finally, the drone attacks send a signal to Western leaders. US President Donald Trump and Vice President JD Vance, for instance, have gone to great lengths to tell the world that Ukraine is weak and has “no cards”. This action shows Kyiv does indeed have some powerful cards to play.

    That may, of course, backfire: after all, Trump is acutely sensitive to being made to look a fool. He may look unkindly on resuming military aid to Ukraine after being shown up for saying Ukrainian President Volodymyr Zelensky would be forced to capitulate without US support.

    But Trump’s own hubris has already done that for him. His regular claims that a peace deal is just weeks away have gone beyond wishful thinking and are now monotonous.

    Unsurprisingly, Trump’s reluctance to put anything approaching serious pressure on Putin has merely incentivised the Russian leader to string the process along.

    Indeed, Putin’s insistence on a maximalist victory, requiring Ukrainian demobilisation and disarmament without any security guarantees for Kyiv, is not diplomacy at all. It is merely the reiteration of the same unworkable demands he has made since even before Russia’s full-scale invasion in February 2022.

    However, Ukraine’s ability to smuggle drones undetected onto an opponent’s territory, and then unleash them all together, will pose headaches for Ukraine’s friends, as well as its enemies.

    That’s because it makes domestic intelligence and policing part of any effective defence posture. It is a contingency democracies will have to plan for, just as much as authoritarian regimes, which are also learning from Ukraine’s lessons.

    In other words, while the attack has shown up Russia’s domestic security services for failing to uncover the plan, Western security elites, as well as authoritarian ones, will now be wondering whether their own security apparatuses would be up to the job.

    The drone strikes will also likely lead to questions about how useful it is to invest in high-end and extraordinarily expensive weapons systems when they can be vulnerable. The Security Service of Ukraine estimates the damage cost Russia US$7 billion (A$10.9 billion). Ukraine’s drones, by comparison, cost a couple of thousand dollars each.

    At the very least, coming up with a suitable response to those challenges will require significant thought and effort. But as Ukraine has repeatedly shown us, you can’t win wars in the past.

    Matthew Sussex has received funding from the Australian Research Council, the Atlantic Council, the Fulbright Foundation, the Carnegie Foundation, the Lowy Institute and various Australian government departments and agencies.

    ref. The secret to Ukraine’s battlefield successes against Russia – it knows wars are never won in the past – https://theconversation.com/the-secret-to-ukraines-battlefield-successes-against-russia-it-knows-wars-are-never-won-in-the-past-258172

    MIL OSI – Global Reports

  • MIL-OSI Global: Unprecedented heat in the North Atlantic Ocean kickstarted Europe’s hellish 2023 summer. Now we know what caused it

    Source: The Conversation – Global Perspectives – By Matthew England, Scientia Professor and Deputy Director of the ARC Australian Centre for Excellence in Antarctic Science, UNSW Sydney

    Westend61/Getty Images

    In June 2023, a record-breaking marine heatwave swept across the North Atlantic Ocean, smashing previous temperature records.

    Soon after, deadly heatwaves broke out across large areas of Europe, and torrential rains and flash flooding devastated parts of Spain and Eastern Europe. That year Switzerland lost more than 4% of its total glacier volume, and severe bushfires broke out around the Mediterranean.

    It wasn’t just Europe that was impacted. The coral reefs of the Caribbean were bleaching under severe heat stress. And hurricanes, fuelled by ocean heat, intensified into disasters. For example, Hurricane Idalia hit Florida in August 2023 – causing 12 deaths and an estimated US$3.6 billion in damages.

    Today, in a paper published in Nature, we uncover what drove this unprecedented marine heatwave.

    A strange discovery

    In a strange twist to the global warming story, there is a region of the North Atlantic Ocean to the southeast of Greenland that has been cooling over the last 50 to 100 years.

    This so-called “cold blob” or “warming hole” has been linked to the weakening of what’s known as the Atlantic Meridional Overturning Circulation – a system of ocean currents that conveys warm water from the equator towards the poles.

    During July 2023 we met as a team to analyse this cold blob – how deep it reaches and how robust it is as a measure of the strength of the Atlantic overturning circulation – when it became clear there was a strong reversal of the historical cooling trend. The cold blob had warmed to 2°C above average.

    But was that a sign the overturning circulation had been reinvigorated? Or was something else going on?

    A layered story

    It soon became clear the anomalous warm temperatures southeast of Greenland were part of an unprecedented marine heatwave that had developed across much of the North Atlantic Ocean. By July, basin-averaged warming in the North Atlantic reached 1.4°C above normal, almost double the previous record set in 2010.

    To uncover what was behind these record breaking temperatures, we combined estimates of the atmospheric conditions that prevailed during the heatwave, such as winds and cloud cover, with ocean observations and model simulations.

    We were especially interested in understanding what was happening in the mixed upper layer of water of the ocean, which is strongly affected by the atmosphere.

    Distinct from the deeper layer of cold water, the ocean’s surface mixed layer warms as it’s exposed to more sunlight during spring and summer. But the rate at which this warming happens depends on its thickness. If it’s thick, it will warm more gradually; if it’s thin, rapid warming can ensue.

    During summer the thickness of this surface mixed layer is largely set by winds. Winds churn up the surface ocean, and the stronger they are, the deeper the mixing penetrates: strong winds create a thick upper layer, while weak winds leave a shallower one.

    Sea surface temperature anomaly (°C) for the month of June 2023, relative to the 1991–2020 reference period.
    Copernicus Climate Change Service/ECMWF

    Thinning at the surface

    Our new research indicates that the primary driver of the marine heatwave was record-breaking weak winds across much of the basin. The winds were at their weakest measured levels during June and July, possibly linked to a developing El Niño in the east Pacific Ocean.

    This led to by far the shallowest upper layer on record. Data from the Argo Program – a global array of nearly 4,000 robotic floats that measure the temperature and salinity in the upper 2,000 metres of the ocean – showed in some areas this layer was only ten metres deep, compared to the usual 20 to 40 metres deep.

    This caused the sun to heat the thin surface layer far more rapidly than usual.
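    The effect of layer thickness on warming rate can be written as a simple heat-budget sketch (the symbols below are standard oceanographic notation, introduced here for illustration rather than taken from the article):

    $$\frac{dT}{dt} \approx \frac{Q_{\text{net}}}{\rho \, c_p \, h}$$

    where $T$ is the mixed-layer temperature, $Q_{\text{net}}$ the net surface heat flux, $\rho$ seawater density, $c_p$ its specific heat capacity, and $h$ the mixed-layer depth. For the same heat flux, shrinking $h$ from 40 metres to 10 metres quadruples the warming rate, consistent with the rapid surface warming observed in 2023.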

    In addition to these short-term changes in 2023, previous research has shown long-term warming associated with anthropogenic climate change is reducing the ability of winds to mix the upper ocean, causing it to gradually thin.

    We also identified a possible secondary driver of more localised warming during the 2023 marine heatwave: above-average solar radiation hitting the ocean. This could be linked in part with the introduction of new international rules in 2020 to reduce sulfate emissions from ships.

    The aim of these rules was to reduce air pollution from ships’ exhaust systems. But sulfate aerosols also reflect solar radiation and can aid cloud formation. The resultant clearer skies can then lead to more ocean warming.

    Early warning signs

    The extreme 2023 heatwave provides a preview of the future. Marine heatwaves are expected to worsen as Earth continues to warm due to greenhouse gas emissions, with devastating impacts on marine ecosystems such as coral reefs and fisheries. This also means more intense hurricanes – and more intense land-based heatwaves.

    Right now, although the “cold blob” to the southeast of Greenland has returned, parts of the North Atlantic remain significantly warmer than the average. There is a particularly warm patch of water off the coast of the United Kingdom, with temperatures up to 4°C above normal. And this is likely priming Europe for extreme land-based heatwaves this summer.

    Global ocean temperatures on June 2 2025. A patch of abnormally warm water is visible off the southern coast of the United Kingdom.
    National Oceanic and Atmospheric Administration

    To better understand, forecast and plan for the impacts of marine heatwaves, long-term ocean and atmospheric data and models, including those provided by the National Oceanic and Atmospheric Administration (NOAA) in the United States, are crucial. In fact, without these data and models, our new study would not have been possible.

    Despite this, NOAA faces an uncertain future. A proposed budget for the 2026 fiscal year released by the White House last month could mean devastating funding cuts of more than US$1.5 billion – mostly targeting climate-based research and data collection.

    This would be a disaster for monitoring our oceans and climate system, right at a time when change is severe, unprecedented, and proving very costly.

    Matthew England receives funding from the Australian Research Council.

    Alex Sen Gupta receives funding from the Australian Research Council.

    Andrew Kiss receives funding from the Australian Research Council.

    Zhi Li receives funding from the Australian Research Council.

    ref. Unprecedented heat in the North Atlantic Ocean kickstarted Europe’s hellish 2023 summer. Now we know what caused it – https://theconversation.com/unprecedented-heat-in-the-north-atlantic-ocean-kickstarted-europes-hellish-2023-summer-now-we-know-what-caused-it-258061

    MIL OSI – Global Reports

  • MIL-OSI Global: Getting away with it … sort of. How a dictator and a fugitive Nazi advanced international human rights law

    Source: The Conversation – Global Perspectives – By Olivera Simic, Associate Professor in Law, Griffith University

    Pinochet and Rauff? They were alike. Each had two faces. One gentle, the other hard. They were joined.

    And they both got away with it … Sort of.

    Philippe Sands loves to tell stories. A master of historical non-fiction, he has become known for his unique blend of deeply personal, legal and historical narratives, which weave together incredible coincidences with moving stories of human courage in the face of mass atrocities and horror.

    Sands is a leading practitioner of international law, a professor at University College London, an author, a playwright, and the recipient of numerous literary awards. He is also someone whose family was murdered in the vortex of the Holocaust in Ukraine.

    With his previous two books, East West Street: On the Origins of Genocide and Crimes Against Humanity (2016) and The Ratline: Love, Lies and Justice on the Trail of a Nazi Fugitive (2020), he demonstrated his unique skill in presenting complex legal cases to avid readers.

    His latest book, 38 Londres Street: On Impunity, Pinochet in England and a Nazi in Patagonia, rounds out the trilogy.

    If it weren’t based on facts, one might think it was a brilliantly crafted thriller.


    Review: 38 Londres Street: On Impunity, Pinochet in England and a Nazi in Patagonia – Philippe Sands (Weidenfeld & Nicolson)


    38 Londres Street weaves together several narratives, but at its heart is the story of the legal attempts to end impunity for two accused criminals. One is Chilean dictator Augusto Pinochet. The other is Walther Rauff, a former SS officer who fled to South America and allegedly worked with Pinochet’s Secret Intelligence Service.

    Sands brings these two men into a single narrative to highlight the legal struggle against impunity for mass atrocities, though he never loses sight of the victims and their human stories of suffering, courage and persistence.

    These were people whose lives were abruptly and violently taken. Sands includes many of their names and tragic fates in his book. He informs his readers that the Cementerio Sara Braun in Punta Arenas, Chile, has a memorial bearing the names of Pinochet’s many victims. He clearly wants these individuals never to be forgotten.

    Universal jurisdiction and the Pinochet precedent

    The building at 38 Londres Street in Santiago was once a site of pain. At this secret interrogation centre, one of many across Santiago and the rest of Chile, Pinochet’s agents imprisoned, tortured, executed and disappeared tens of thousands of people deemed leftists, socialists, communists or “other undesirables”.

    Pinochet came to power on September 11, 1973, overthrowing the democratically elected socialist government of President Salvador Allende in a military coup. He would rule Chile with an iron fist until 1990.

    Chile’s youth became the targets of his murderous regime. Sands notes that most victims were between 21 and 30 years old. The majority of them were workers; the rest mainly comprised academics, professionals and students. The atrocities were committed with impunity.

    Like all dictators, Pinochet believed himself untouchable. But in October 1998, while visiting the UK, he was arrested in London. Spanish judge Baltasar Garzón was seeking Pinochet’s extradition to Spain in order to try him for human rights abuses.

    Garzón was acting under the then-controversial legal principle of universal jurisdiction, which allows courts in one country to prosecute grave human rights violations committed outside its borders, regardless of the nationality of the accused.

    Never before had a former head of state of one country been arrested by, and in, another country for committing international crimes.

    Sands would become involved in one of the most famous cases in international law since the Nuremberg trials more than 50 years earlier. Pinochet’s lawyers offered Sands an opportunity to participate in the case, arguing for the former dictator’s immunity as a former head of state. Sands’ wife threatened to divorce him if he accepted.

    He declined the offer. Instead, Sands represented Human Rights Watch when the Pinochet case was considered by the Law Lords.

    Pinochet had been indicted for crimes against humanity and genocide. At issue was the question of whether Pinochet, as a former head of state, had immunity before the English courts for acts committed in another country while he was in office. Should there be a legal protection for former dictators?

    The proceedings in London were novel and remarkable, writes Sands, because this was an open legal question when Pinochet was arrested. His arrest raised an unprecedented issue: was there an exception to the rule of immunity for a former head of state when a crime in international law was involved? And did the exception apply before a national court, rather than an international one?


    Many believed Pinochet’s immunity should be lifted and extradition proceedings should go ahead, so that he could answer for the deaths of Spanish nationals and others. If that did not happen, it was argued, the travesty of justice would signal that any dictator could get away with genocide. As Sands writes, immunity and impunity often go hand in hand.

    In this landmark case, Pinochet was stripped of the immunity from prosecution he had enjoyed as a former president. He was ordered to stand trial on charges of human rights abuses.

    For the next 16 months, he remained in the UK, awaiting extradition to Spain. But it never happened. The initial judgement on immunity was quashed, due to concerns about possible bias of one of the judges. The case returned to square one. New hearings took place.

    In January 2000, the UK eventually decided not to proceed with extradition, claiming that Pinochet was too ill to stand trial and that “it would not be fair”. He was allowed to return to Chile as a free man, thanks to medical doctors rather than lawyers.

    Political leaders in Europe generally welcomed the ruling. Margaret Thatcher, former British prime minister and Pinochet’s longstanding ally, was adamant that the lengthy legal wrangle had been a waste of public money. Seemingly agitated, she said in front of the cameras:

    Senator Pinochet was a staunch friend of Britain throughout the Falklands War. His reward from this government was to be held prisoner for 16 months. In the meantime, his health has been broken, his reputation tarnished, and vast funds of public money have been squandered on a political vendetta.

    Subsequent attempts to prosecute Pinochet in Chile were unsuccessful. He died in 2006 at the age of 91, without ever being tried for the human rights abuses that occurred while he was in power. Retributive justice, in the end, was not served. But Pinochet’s case opened the gates for efforts to bring other former and serving heads of state to justice.

    Today, 38 Londres Street serves as a place of national memory, where visitors can walk through its halls and learn about its dark past.

    The Nazi who invented the gas chambers

    Running parallel with Pinochet’s story is that of Nazi fugitive Walther Rauff.

    Rauff invented the mobile gas chambers that were precursors to the gas chambers in Nazi concentration camps. At the end of the second world war, he escaped to South America, settling in Chile. Germany made numerous attempts to have Rauff extradited to face charges, but the Chilean government refused these demands. He spent his days in the backwaters of Patagonia, running a king-crab cannery business.

    Sands travels to Patagonia and meets people who remember Rauff, whose identity seems to have been common knowledge among his neighbours and co-workers: “everyone knew rumours and stories of his past”; they knew about “the gas vans” and that he “once killed many people”. But no one seemed to be bothered. They describe Rauff as “cultivated and kind”. To many of Sands’ interlocutors, the stories about Rauff “were long ago and far away”.

    While dealing with the failed attempts at his extradition, Rauff put his energies into “harvesting crabs, making sure the tins were packed tight, [and] managing the workers”. He continued to do so, enjoying the company of his dog Bobby, when Pinochet became Chile’s new leader.

    Pinochet was an old friend. Sands records that the two men met in the 1950s in Quito, Ecuador, where Rauff was staying, having fled an Italian prison camp at the end of the war. The men shared a contempt for communism and an affinity for German culture. Pinochet encouraged Rauff to move to Chile.

    Rauff delighted in Pinochet’s murderous regime. Sands tells us that Pinochet used Rauff’s “expertise” to help with the murder and disappearance of thousands of people. But the controversy over whether Rauff worked for the Chilean military, becoming “chief advisor” to its intelligence services, or perhaps even its “head”, remains unresolved. Definitive and provable evidence about the assistance Rauff may have given to Pinochet was never obtained.

    Holding dictators to account

    One of the many coincidences Sands stumbles upon is that Rauff lived in Punta Arenas in southern Chile on a street called “Jugoslavija”, named after the country where I was born, which disintegrated in the 1990s in a brutal civil war marked by mass atrocities and genocide.

    Former Yugoslavian and Serbian president Slobodan Milošević would become the first-ever serving head of state to be charged with international crimes and extradited to an international court.

    Milošević was extradited to The Hague in 2001, on the orders of the Serbian government, after he was indicted for war crimes committed in Kosovo and Croatia and for genocide in Bosnia and Herzegovina. His trial is widely hailed as a landmark moment in the development of international criminal law, though he died in his cell before it ended, “innocent” like his counterparts Pinochet and Rauff.

    Slobodan Milošević in The Hague, July 2001.
    Robert Goddyn, via Wikimedia Commons, CC BY

    In 38 Londres Street, Sands brings to light the behind-the-scenes struggles to hold Pinochet and Rauff accountable. The book explores the intricacies and politics of international law. Despite its bitter ending, Pinochet’s case remains one of the most far-reaching and important in the field of human rights. It caused other countries to reflect on their own legal immunities.

    As a researcher and academic, I found the book significant because it also offers insight into what it takes to conduct such expansive archival and qualitative research. Over several years, “in between work and life”, Sands travels to different corners of the globe and speaks to informants from all walks of life, including descendants of the perpetrators. He visits the sites of the events he recounts, most of them places marked by pain. He seeks to see and feel a past that still lingers.

    His method requires stamina, passion and unwavering diligence. His strong commitment to neutrality, decency and impartiality makes him stand out not only as a highly skilled writer, but a survivor who continues to unpack and share the legacy of the Holocaust. There is much to respect and learn from in Sands’ account, not least about the intricacies of writing a compelling story.

    Holding dictators to account is hard. Pinochet and Rauff deprived victims of the retributive justice they needed and deserved. Yet justice and reparations have many different meanings. They can be symbolic too, and still profoundly meaningful to victims. As one of the survivors of Pinochet’s regime replied to Sands when asked whether he believed his case was one of total impunity: “Not quite total […] Dawson [an island detention camp] has been recognised as a site of national memory, a protected monument, and that means something.”

    Pinochet and Rauff were never convicted, but they were not free. Pinochet spent years under house arrest, bitter and devastated, unable to walk the streets. Rauff lived in constant fear of being arrested and extradited. They were both haunted. This, after all, may have brought some satisfaction to the victims.

    Sands was once asked: “Do you believe in justice?” He replied: “Sort of.” Sands comes to understand that justice is “uneven in its delivery”. He has learned “to temper expectations”. Maybe we all need to learn that skill from him too. Ultimately, justice remains a work-in-progress, just like the process of learning from a dark past.

    Olivera Simic does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Getting away with it … sort of. How a dictator and a fugitive Nazi advanced international human rights law – https://theconversation.com/getting-away-with-it-sort-of-how-a-dictator-and-a-fugitive-nazi-advanced-international-human-rights-law-257241

    MIL OSI – Global Reports

  • MIL-OSI Global: Taylor Swift now owns all the music she has ever made: a copyright expert breaks it down

    Source: The Conversation – Global Perspectives – By Wellett Potter, Lecturer in Law, University of New England

    On Friday, Taylor Swift announced she now owns all the music she has ever made. This reported US$360 million acquisition includes all the master recordings to her first six albums, music videos, concert films, album art, photos and unreleased material.

    The purchase of this catalogue from private equity firm Shamrock Capital is a profoundly happy event for Swift. She has expressed how personal and difficult it was not to own these works.

    In her announcement, Swift acknowledged that it was her fans’ purchases of her rerecorded music (known as “Taylor’s Version”) and the financial success of the record-breaking Eras Tour that enabled this purchase.

    The story behind “Taylor’s Version”, and why she didn’t own the catalogue to her original six albums, comes down to copyright, music industry practices and contractual terms. Let’s break it down.

    What’s in a music catalogue?

    When it comes to valuing a music catalogue, it largely comes down to two types of rights: master rights and publishing rights.

    Master rights are rights pertaining to the ownership of the actual sound recordings – the final recorded version. These are called “masters” because they’re the original source from which all copies are made.

    Under traditional music industry contracts, record labels usually hold ownership of masters and associated materials. This can be music videos, tour videos, unreleased works, photographs and album covers.

    Through licensing, the label controls the use of this material and retains the majority of the royalties. In return, the label provides the artist with financial backing, recording resources and marketing.

    Publishing rights, on the other hand, relate to the underlying composition – the music and lyrics. The rights to music publishing usually belong to the songwriter, regardless of who performs the song.

    Publishing rights govern how a song can be used and who earns royalties from that use. For example, a song may be played on a streaming platform, covered in a live performance or licensed for a commercial or film.

    Swift’s contracts

    Swift was 15 years old when she signed to Scott Borchetta’s Big Machine record label.

    The agreed contractual terms were typical of the music industry. In exchange for the financial support to make, record and promote her subsequent albums and tours, Big Machine held the rights to Swift’s master recordings and associated materials in her first six albums. Her relationship with the label lasted 13 years.

    As a songwriter, Swift retained separate publishing rights to her songs (the music and lyrics) from her first six albums, which she licensed through Sony/ATV Music Publishing.

    In 2018, Big Machine reportedly offered Swift a deal to re-sign, under which she would “earn” the rights to one original album for each new one she produced.

    Swift did not renew her contract and moved to Republic Records (Universal Music Group), which allows her to own her masters. She also moved to Universal Music Publishing Group for her music publishing.

    Subsequent sales

    In June 2019, Big Machine’s catalogue was sold to Scooter Braun’s Ithaca Holdings, for a reported US$330 million, with US$140 million representing Swift’s catalogue.

    Swift described this as her “worst case scenario”, as she had a tumultuous history of alleged bullying from Braun. She also alleged she found out about the acquisition at the time it was announced to the world, without being given the opportunity to purchase her catalogue.

    Throughout 2019 and 2020 it was reported she attempted to regain ownership, but negotiations fell through.

    In October 2020, Swift’s catalogue was sold to Shamrock Capital, a private equity firm, for an estimated sum of more than US$300 million. In recent years, private equity firms have been purchasing music catalogues as profitable long-term financial assets, rather than for artistic or cultural reasons.

    These events led Swift to rerecord her first six albums, branding them “Taylor’s Version”. Four have been released.

    Swift rerecorded her albums, branding them ‘Taylor’s Version’.
    melissamn/Shutterstock

    She was able to create new versions of her songs, with their own intellectual property rights attached.

    As owner of these new masters, she has control over where these songs are used, and she receives a greater portion of the income from the streams, downloads and licensing.

    The decision was enormously successful. Swift mobilised her fans’ support via social media, and they prioritised purchasing “Taylor’s Version” over the original masters, diluting the value of the originals.

    Successful futures

    Swift has repeatedly emphasised the need for artists to retain control over their work and to receive fair compensation. In a 2020 interview she said she believes artists should always own their master recordings and license them back to the label for a limited period.

    This would mean the label could monetise, control and manage the recordings for a certain time, but the artist retains the ownership. They eventually gain back full control, rather than handing over permanent rights to the label.

    Swift’s experience has sparked conversations within the industry, prompting emerging artists to approach record labels with caution and advocate for fairer deals and ownership rights. Olivia Rodrigo negotiated her contract with Swift’s saga as a cautionary tale.

    Purchasing her catalogue and masters gives Swift autonomy over how the rights to all of her music are used. Her fans are likely to continue to support her and purchase both the originals and “Taylor’s Version”, so the value of her original albums may rise.

    And, in the long-run, her new acquisition will likely make her much wealthier.

    Wellett Potter does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Taylor Swift now owns all the music she has ever made: a copyright expert breaks it down – https://theconversation.com/taylor-swift-now-owns-all-the-music-she-has-ever-made-a-copyright-expert-breaks-it-down-257965

    MIL OSI – Global Reports

  • MIL-OSI Global: How did humans evolve such rotten genetics?

    Source: The Conversation – UK – By Laurence D. Hurst, Professor of Evolutionary Genetics at The Milner Centre for Evolution, University of Bath

    MaksEvs/Shutterstock

    To Shakespeare’s Hamlet we humans are “the paragon of animals”. But recent advances in genetics are suggesting that humans are far from being evolution’s greatest achievement.

    For example, humans have an exceptionally high proportion of fertilised eggs that have the wrong number of chromosomes and one of the highest rates of harmful genetic mutation.

    In my new book The Evolution of Imperfection I suggest that two features of our biology explain why our genetics are in such a poor state. First, we evolved a lot of our human features when our populations were small and second, we feed our young across a placenta.

    Our reproduction is notoriously risky for both mother and embryo. For every child born, another two fertilised eggs never make it.

    Most early human embryos have chromosomal problems. In older mothers especially, embryos tend to have too many or too few chromosomes, owing to errors in the process of making eggs with just one copy of each chromosome. Most chromosomally abnormal embryos don’t make it to week six, so are never a recognised pregnancy.

    About 15% of recognised pregnancies spontaneously miscarry, usually before week 12, rising to 65% in women over 40. About half of miscarriages are because of chromosomal issues.

    Other mammals have similar chromosome-number problems but with an error rate of about 1% per chromosome. Cows should have 30 chromosomes in sperm or egg but about 30% of their fertilised eggs have odd chromosome numbers.

    Humans with 23 chromosomes should have about 23% of fertilised eggs with the wrong number of chromosomes but our rate is higher in part because we presently reproduce late and chromosomal errors escalate with maternal age.
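    The rough arithmetic behind those percentages can be checked with a quick sketch (an illustration, not from the article). It treats each chromosome as carrying an independent ~1% missegregation risk; the compound calculation lands a little below the article's simple per-chromosome sums:

    ```python
    # Quick check of the rough arithmetic above, assuming an independent
    # ~1% missegregation risk for each chromosome in a gamete.
    def fraction_abnormal(n_chromosomes, per_chromosome_error=0.01):
        """Probability that at least one chromosome ends up missegregated."""
        return 1 - (1 - per_chromosome_error) ** n_chromosomes

    # Cows (30 chromosomes per gamete): about 26%, close to the ~30% observed.
    # Humans (23 chromosomes per gamete): about 21%, close to the ~23% estimate.
    print(f"cow:   {fraction_abnormal(30):.0%}")
    print(f"human: {fraction_abnormal(23):.0%}")
    ```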

    Survive that, and gestational diabetes and high blood pressure issues await, most notably pre-eclampsia, which is potentially lethal to mother and child and affects about 5% of pregnancies. It is unique to humans.

    Historically, up until about 1800, childbirth was remarkably dangerous, with about a 1% maternal mortality risk, largely owing to pre-eclampsia, bleeding and infection. In Japanese macaques, by contrast, despite offspring also having large heads, maternal mortality isn’t seen. Advances in maternal care have seen current UK maternal mortality rates plummet to 0.01%.

    Many of these problems are contingent on the placenta. Compare us to a kiwi bird that loads its large egg with resources and sits on it, even if it is dead: time and energy wasted. In mammals, if the embryo is not viable, the mother may not even know she had conceived.

    The high rate of chromosomal issues in our early embryos is a mammalian trait connected to the fact that early termination of a pregnancy lessens its costs: less time is wasted holding onto a dead embryo, and the resources a viable embryo needs to grow into a baby are not given up.

    But reduced costs are not enough to explain why chromosomal problems are so common in mammals.

    During the process of making a fertilisable egg with one copy of each chromosome, a sister cell is produced, called the polar body. It’s there to discard half of the chromosomes. It can “pay”, in evolutionary terms, for a chromosome to stay behind in the soon-to-be-fertilised egg when it should have gone to the polar body.

    This forces a redirection of resources to viable offspring. It can explain why chromosomal errors are mostly maternal and why other vertebrates, which lack the ability to redirect the saved energy, don’t seem to have embryonic chromosome problems.

    Our problems with gestational diabetes are a consequence of foetuses releasing chemicals from the placenta into the mother’s blood to keep glucose available. The problems with pre-eclampsia are associated with malfunctioning placentas, in part owing to maternal immune rejection of the foetus.

    Regular unprotected sex can protect women against pre-eclampsia by helping the mother become used to paternal proteins. The fact that pre-eclampsia is human-specific may be related to our exceptionally invasive placenta that burrows deep into the uterine lining, possibly required to build our unusually large brains.

    Our other peculiarities are predicted by the most influential evolutionary theory of the last 50 years, the nearly-neutral theory. It states that natural selection is less efficient when a species has few individuals.

    A slightly harmful mutation can be removed from a population if that population is large, but can increase in frequency, by chance, if the population is small. Most human-specific features evolved when our population size was around 10,000, in Africa, prior to its recent expansion over the last 20,000 years. That is minuscule compared to, for example, bacterial populations.
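    That claim can be illustrated with a toy Wright-Fisher simulation (an illustration, not from the article; the population sizes, fitness cost and trial count are arbitrary choices). A mutation with a 1% fitness cost is purged almost every time in a larger population, yet occasionally drifts all the way to fixation in a small one:

    ```python
    import random

    # Toy Wright-Fisher model: one copy of a slightly harmful allele
    # (1% fitness cost) either drifts to fixation or is lost.
    def fixation_rate(pop_size, s=0.01, trials=1000, seed=1):
        """Fraction of trials in which the harmful allele takes over
        the whole population instead of being eliminated."""
        rng = random.Random(seed)
        fixed = 0
        for _ in range(trials):
            count = 1  # start with a single mutant copy
            while 0 < count < pop_size:
                # Selection: mutant copies weighted by fitness 1 - s.
                p = count * (1 - s) / (count * (1 - s) + (pop_size - count))
                # Drift: binomial sampling of the next generation.
                count = sum(rng.random() < p for _ in range(pop_size))
            fixed += count == pop_size
        return fixed / trials

    # Selection is far less efficient in the small population: the harmful
    # allele fixes a few percent of the time there, almost never in the
    # larger one.
    print(fixation_rate(pop_size=20), fixation_rate(pop_size=200))
    ```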

    This explains why we have such a bloated genome. The main job of DNA is to give instructions to our cells about how to make the proteins vital for life.

    That is done by just 1% of our DNA, but by 85% of the DNA of our gut-dwelling bacterium Escherichia coli. Some of our DNA is required for other reasons, such as controlling which genes get activated and when. Yet only about 10% of our DNA shows any signs of being useful.

    If you have a small population size, you also have more problems stopping genetic errors like mutations. Although DNA mutations can be beneficial, they are more commonly a curse. They are the basis of genetic diseases, be they complex (such as Crohn’s disease and predispositions to cancer) or owing to single-gene effects (like cystic fibrosis and Huntington’s disease).

    We have one of the highest mutation rates of all species. Other species with massive populations have mutation rates over three orders of magnitude lower, another prediction of the nearly-neutral theory.

    A consequence of our high mutation rate is that around 5% of us suffer a “rare” genetic disease.

    Modern medicine may help cure our many ailments, but if we can’t do anything about our mutation rate, we will still get ill.

    Laurence D. Hurst is the author of The Evolution of Imperfection, published by Princeton University Press. This was enabled by funding from The Humboldt Foundation and the European Research Council.

    ref. How did humans evolve such rotten genetics? – https://theconversation.com/how-did-humans-evolve-such-rotten-genetics-255473

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump’s Middle East pivot aims to counter China’s rising influence

    Source: The Conversation – UK – By Maria Papageorgiou, Leverhulme Early Career Researcher, School of Geography, Politics, and Sociology, Newcastle University

    The US president, Donald Trump, claimed he was able to secure deals totalling more than US$2 trillion (£1.5 trillion) for the US on his tour of the Gulf states in May. Trump said “there has never been anything like” the amount of jobs and money these agreements will bring to the US.

    However, providing a lift for the US economy wasn’t the only thing on Trump’s mind. China’s influence in the wider Middle East region is growing fast – so much so that it was even able to mediate a detente between bitter regional rivals Saudi Arabia and Iran in 2023.

    Trump’s attempt to strengthen ties with countries in the Middle East is probably also a deliberate attempt to contain China’s growing regional ambitions.

    China has spent the past two decades building up its economic and political relations with the Middle East. In 2020, it replaced the EU as the largest trading partner to the Gulf Cooperation Council, which includes Bahrain, Kuwait, Oman, Qatar, Saudi Arabia and the United Arab Emirates (UAE). Bilateral trade between them was valued at over US$161 billion (£119 billion).

    The Middle East has also become an important partner to China’s sprawling Belt and Road Initiative (BRI). Massive infrastructure projects in the region, such as high-speed railway lines in Saudi Arabia, have provided lucrative opportunities for Chinese companies.

    The total value of Chinese construction and investment deals in the Middle East reached US$39 billion in 2024, the most of any region in the world. That year, the three countries with the highest volume of BRI-related construction contracts and investment were all in the Middle East: Saudi Arabia, Iraq and the UAE.

    China has also strengthened its financial cooperation with Middle Eastern countries, particularly the UAE and Saudi Arabia. As part of China’s efforts to reduce global reliance on the US dollar for trade, it has arranged cross-border trade settlements, currency swap agreements, and is engaging in digital currency collaboration initiatives with these countries.

    American security guarantees have historically fostered an alignment between the Gulf states and the west. The string of agreements Trump signed with countries there reflects an attempt to draw them away from China and back towards Washington’s orbit.

    Countering China

    One of the more significant developments from Trump’s trip was an agreement to deepen US technological cooperation with the UAE, Saudi Arabia and Qatar. The US and UAE announced they would work together to construct the largest AI data centre outside of the US in Abu Dhabi.

    Technology is one of the key areas where China has been trying to assert its influence in the region. Through Beijing’s so-called “Digital Silk Road” initiative, which aims to develop a global digital ecosystem with China at its centre, Chinese firms have secured deals with Middle Eastern countries to provide 5G mobile network technology.

    Chinese tech giants Huawei and Alibaba are also in the process of signing partnerships with telecommunications providers in the region for collaboration and research in cloud computing. These companies have gained traction by aligning closely with national government priorities, such as Saudi Arabia’s initiative to diversify its economy through tech development.

    American companies, including Amazon, Microsoft and Google, have spent years building regional tech ecosystems across the Gulf. Trump is looking to recover this momentum. He was joined in the Middle East by more than 30 leaders of top American companies, who also secured commercial deals with their peers from the Gulf.

    US quantum computing company Quantinuum and Qatari investment firm Al Rabban Capital finalised a joint venture worth up to US$1 billion. The agreement will see investment in quantum technologies and workforce development in the US and Qatar.

    There are two other areas where Trump is trying to cut China off. American companies and Abu Dhabi’s state-run oil firm agreed a US$60 billion energy partnership. China is heavily dependent on the Middle East for energy, with almost half of the oil it uses coming from the region. Greater alignment with the US could hamper Beijing’s ability to secure the resources it needs.

    Trump also signed a raft of defence deals with Qatar and Saudi Arabia. These included a US$1 billion deal for Qatar to acquire drone defence technology from American aerospace conglomerate RTX (formerly Raytheon), and a US$142 billion agreement for the Saudis to buy military equipment from US firms.

    These moves underscore Washington’s intention to limit China’s influence in key defence sectors. China is a key player in the global market for commercial and military drones, providing Saudi Arabia and the UAE with a large share of their combat drones.

    One final aspect of Trump’s trip was his brief meeting with Syria’s interim president Ahmed al-Sharaa. Trump signalled possible sanctions relief, which has since come into effect. This constituted more than a diplomatic thaw.

    With China positioning itself as a regional mediator and Russia struggling with a diminished role following the fall of Bashar al-Assad in Syria, the US is looking to reassert itself as the primary power broker in the region.

    Dr Maria (Mary) Papageorgiou receives funding from the Leverhulme Trust.

    ref. Trump’s Middle East pivot aims to counter China’s rising influence – https://theconversation.com/trumps-middle-east-pivot-aims-to-counter-chinas-rising-influence-257366

    MIL OSI – Global Reports

  • MIL-OSI Global: Gen Z and the sustainability paradox: Why ideals and shopping habits don’t always align

    Source: The Conversation – Canada – By Melise Panetta, Lecturer of Marketing in the Lazaridis School of Business and Economics, Wilfrid Laurier University

    Often praised as the ‘sustainability generation,’ Gen Z has been at the forefront of calls for ethical production, environmental accountability and climate-conscious living. (Shutterstock)

    As the summer shopping season kicks off, all eyes are on Gen Z — those born between 1997 and 2012 and whose purchasing power wields significant influence over market trends.

    Though often lauded as the “sustainability generation”, Gen Z reveals a complex internal struggle on closer inspection: despite their strong desire for eco-conscious living, many Gen Z consumers find themselves drawn to the allure of fast, affordable, trend-driven consumption.

    This discrepancy between belief and action, known as the “attitude-behaviour gap,” is a defining characteristic of Gen Z consumerism. While it’s not unique to Gen Z, it’s particularly pronounced due to their vocal environmentalism and their immersion in a hyper-consumerist digital world.

    Understanding consumer behaviour at a deeper level means looking past stated preferences and focusing instead on the economic, technological and cultural forces that shape real-world decisions.

    The rise of the eco-conscious Gen Z consumer

    There’s no denying Gen Z’s pronounced environmental awareness compared to other generations.

    Raised in the era of climate crisis and corporate responsibility, they gravitate toward brands that reflect their values. Over 75 per cent say sustainability matters more than brand name, and 81 per cent are willing to pay more for eco-friendly products.

    This isn’t merely performative — Gen Z actively integrates sustainability into their lives. They’re more likely than any other generation to research a brand’s ethics and environmental impact before buying, often using social media to guide decisions.

    More than 70 per cent discover sustainable products via platforms like Instagram and TikTok, fuelling social movements like Who Made My Clothes and supporting businesses like LastObject, a company that uses digital crowdfunding to engage environmentally conscious consumers.

    They’re also behind the rise of the second-hand market, which is expected to hit US$329 billion globally by 2029. With 40 per cent of Gen Z — the highest rate of any age group — shopping resale, platforms like Depop and ThredUp have seen explosive growth.

    Gen Z’s consumer behaviour is also influencing the spending habits of older generations. According to the World Economic Forum, increased spending on sustainable brands by groups like Generation X is being driven, in part, by Gen Z’s values, behaviours and expectations.

    Gen Z’s push for sustainable consumption is shifting the market and everyone in it.

    When values clash with spending habits

    Fast fashion, frictionless e-commerce and the constant churn of social media trends have created a marketplace where sustainable intentions are easily sidelined.

    Viral phenomena like Shein hauls — videos where social media influencers flaunt dozens of ultra-cheap outfits — spotlight the contradiction.

    In the first 19 weeks of 2025 alone, Shein’s app amassed over 54 million downloads, a staggering number that underscores how affordability and instant gratification often win out over sustainability. Built on rapid production and ultra-low prices, Shein’s model encourages frequent, high-volume purchases — the antithesis of the “buy less, buy better” ethos that underpins sustainable consumption.

    And this pattern extends far beyond fashion. The wider consumer landscape rewards speed and low cost at every turn. Gen Z came of age with one-click ordering and next-day delivery — conveniences that are now baseline expectations for shoppers. These days, nearly half of Gen Z consumers prioritize fast shipping, despite its high environmental cost.

    Meanwhile, the social media platforms where they discover new eco-conscious brands are the same ones pushing relentless trend cycles that encourage over-consumption, from gadgets to clothing and lifestyle products.

    Sustainability often comes with a steep price tag, one many young Gen Z consumers simply can’t afford. Brands like Patagonia or Allbirds are aspirational, but in the context of the cost-of-living crisis, fast-fashion giants like Zara, H&M and TJX Companies offer more budget-friendly options.

    Navigating the ‘attitude-behaviour’ gap

    The disconnect between Gen Z’s values and their consumption patterns isn’t about hypocrisy. Rather, it’s about navigating a system where sustainable choices are harder, more expensive and often less visible.

    Gen Z’s struggle shows that living sustainably in a world designed for speed, savings and social validation is an uphill battle — even for the generation most determined to make a difference.

    Bridging this gap demands action on several fronts. For businesses, it means innovating to make sustainable options more affordable and accessible. Transparency in supply chain practices and clear communication about environmental impact are also key to building trust with consumers.

    For Gen Z themselves, transparency about the true cost of consumption is vital. Fostering critical thinking about marketing messages and the impact of social media trends can empower them to make choices that more consistently align with their values.

    As the summer unfolds and consumer spending rises, the choices made by Gen Z will be a significant indicator of our collective path towards a more sustainable economy. Their ideals are a powerful force for change, but translating those ideals into consistent action remains the critical challenge.

    Melise Panetta does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Gen Z and the sustainability paradox: Why ideals and shopping habits don’t always align – https://theconversation.com/gen-z-and-the-sustainability-paradox-why-ideals-and-shopping-habits-dont-always-align-257601

    MIL OSI – Global Reports

  • MIL-OSI Global: A First Nations power authority could transform electricity generation for Indigenous nations

    Source: The Conversation – Canada – By Christina E. Hoicka, Canada Research Chair in Urban Planning for Climate Change, Associate Professor of Geography and Civil Engineering, University of Victoria

    First Nations across British Columbia have developed renewable electricity projects for decades. Yet they’ve experienced significant barriers to implementing, owning and managing their own electricity supply. That’s because there have been few procurement policies in place that require their involvement.

    While municipalities are allowed to own and operate electricity utilities in B.C., First Nations are not. The Declaration on the Rights of Indigenous Peoples Act (DRIPA) in B.C. requires that First Nations are provided with opportunities for economic development without discrimination.

    Many First Nations in B.C. view the development of renewable electricity projects on their lands (like hydro power, solar panels, wind turbines and transmission lines) as a way to achieve social, environmental and economic goals that are important to their community.

    These goals may include powering buildings in the community, creating economic development and local jobs, earning revenue, improving access to affordable and reliable electricity or using less diesel.

    Our new study shares the story of a coalition of First Nations and organizations that advocated for changes to electricity regulations and laws to give Indigenous communities more control to develop renewable electricity projects. Our interviews with knowledge holders from 14 First Nations offer insight into motivations behind their calls for regulatory changes.

    The coalition includes the Clean Energy Association of B.C., New Relationship Trust, Pembina Institute, First Nations Power Authority, Nuu-Chah-Nulth Tribal Council, and the First Nations Clean Energy Working Group.

    Models for a First Nations power authority

    Almost all electricity customers in B.C. are served by BC Hydro, the electric utility owned by the provincial government.

    The coalition argues that applying DRIPA to the electricity sector should allow First Nations to form a First Nations power authority. Such an organization would provide them with control over the development of electricity infrastructure that aligns with their values and would also help B.C. meet its greenhouse gas reduction targets.

    In the Re-Imagining Social Energy Transitions CoLaboratory (ReSET CoLab) at the University of Victoria, we analyzed regulatory documents from the B.C. Utilities Commission, and advocacy documents and presentations for discussion developed by the coalition.

    We identified six proposed First Nations power authority (Indigenous Utility) models:

    A capacity building point-of-contact model streamlines the development of renewable electricity projects to sell power to the provincial utility. For example, the First Nations Power Authority in Saskatchewan was formed for this purpose by SaskPower.

    This would be the most conformative model. It would provide vital networks and connections to First Nations while allowing BC Hydro and the British Columbia Utilities Commission to maintain full control over the electricity sector.

    In the second model, called a “put” contract, a B.C. First Nations Power Authority represents First Nations wishing to develop renewable electricity projects. Whenever the province needs to build new electricity generation projects to meet growing electricity demand, a portion of the new generation is developed by the First Nations authority.

    In the third model, First Nations build and operate electricity transmission and distribution lines to allow remote industrial facilities and communities to connect to the electricity grid. This is called “Industrial Interconnection.”

    For example, the Wataynikaneyap Power Transmission line in Ontario is a 1,800-kilometre line that provides an electricity grid connection for 17 previously remote nations. Twenty-four First Nations own 51 per cent of the line, while private investors own 49 per cent.

    In the fourth model, the B.C. First Nation Power Authority acts as the designated body for various opportunities in the electricity sector, such as the development of electricity transmission, distribution, generation or customer services. This model is referred to as “local or regional ‘ticket’ opportunities.”

    Fifth, the First Nation Power Authority develops renewable electricity projects and distributes electricity from these projects to customers as a retailer, or under an agreement through the BC Hydro electricity grid. For example, Nova Scotia Power’s Green Choice program procures renewable electricity from independent power producers to supply to electricity customers.

    Sixth, a new utility is formed in B.C., owned by First Nations, that owns and operates electricity generation, transmission and distribution services and offers standard customer services in a specific region of B.C. (called a “Regional Vertically-Integrated Power Authority”).

    Most of these models would require changes to regulations. The sixth and most transformative model would provide First Nations with full decision-making control over electricity generation, transmission and distribution. It would also give them the ability to sell to customers and require extensive changes in electricity regulation.

    Improving living standards

    First Nations knowledge-holders told us that a lack of reliable power, high electricity rates, lack of control over projects on their traditional lands and the need for resilience in the face of climate events were motivations for taking electricity planning into their own hands.

    They also expressed that varied factors motivate community interest in renewable energy: improving the quality of life for community members; financial independence; mitigating climate change; protecting the environment; reducing diesel use and providing stable and safe power for current and future generations.

    First Nations are already seeking to capitalize on the benefits of renewable energy by developing their own projects within the current regulatory system.

    Most of those we spoke to see a First Nations power authority in B.C. as a means to provide opportunities for economic development without discrimination — and to achieve self-determination, self-reliance and reconciliation by addressing the root causes of some of the colonial injustices they face by obtaining control over the electricity sector on their lands.

    This article was co-authored by David Benton, an adopted member and Clean Energy Project Lead of Gitga’at First Nation and Kayla Klym, a BSc student in Geography at the University of Victoria.

    For this research project, Dr. Christina E. Hoicka received funding from Natural Resources Canada Clean Energy for Rural and Remote Communities Program (CERRC), Capacity Building Stream funding program. The research was conducted in partnership with the Clean Energy Association of British Columbia and the New Relationship Trust. This work was also supported by the New Frontiers in Research Fund Global NFRFG-2020-00339 and the Canada Research Chair Secretariat CRC-2020-00055.

    Anna Berka is affiliated with Community Power Agency, a not-for-profit workers co-operative working to ensure a fair and accessible energy transition for all.

    Adam J. Regier and Sara Chitsaz do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. A First Nations power authority could transform electricity generation for Indigenous nations – https://theconversation.com/a-first-nations-power-authority-could-transform-electricity-generation-for-indigenous-nations-254982

    MIL OSI – Global Reports

  • MIL-OSI Global: We mapped 18,000 children’s playgrounds and revealed inequality across England

    Source: The Conversation – UK – By Paul Brindley, Senior Lecturer, Department of Landscape Architecture, University of Sheffield

    Daxiao Productions / shutterstock

    Outside of the home, public playgrounds are the most common places for children to play, and every child’s fundamental right to play is even recognised in a UN convention. Despite this, there has been very limited research exploring inequality in the provision of playgrounds.

    To help address this, we have analysed data from almost 34,000 playgrounds in England – the largest national dataset on playgrounds yet. In particular, we looked at England’s largest 534 settlements with populations over 15,000 and mapped patterns from the 18,077 children’s playgrounds within them.

    We found substantial inequalities. For example, of two places broadly comparable in population size, one might have five times as many children per playground as the other.




    With the exception of London, deprived settlements in England tend to have fewer, smaller and further-away playgrounds – a serious social justice issue. In London, however, the relationship was the opposite, with deprived areas tending to have more playgrounds in close proximity.

    There are many different ways to measure the provision of playgrounds, but we used 21 indicators across three domains: the number of playgrounds per child, the size of playgrounds, and their closeness to where children live.

    This ensured our results were not heavily influenced by a single variable, since some settlements excelled in one domain but were lacking in others.

    Winners and losers

    The graph below shows children’s playground provision for major settlements in England:

    More deprived settlements tend to have fewer, smaller playgrounds.
    Brindley & Martin (2025)

    Places on the left of the graph have smaller playgrounds, while in places towards the bottom of the graph kids have to travel further to a playground. Circle size indicates how many playgrounds there are per child.

    Here’s the same graph for boroughs of London, where the relationship is reversed:

    In London, kids in more deprived inner city boroughs have better access to playgrounds.
    Brindley & Martin (2025)

    These are the top settlements in each category:


    Brindley & Martin (2025), CC BY-SA

    And these are the bottom:


    Brindley & Martin (2025), CC BY-SA

    Comparing major settlements, Liverpool has nearly five times more children under 16 per playground than Norwich (1,104 compared to 236). In London, the difference is even greater: the borough of Redbridge has nearly eight times more children per playground than Islington (1,567 v 204).

    In terms of playground size, Leicester dedicates four times more of its urban area to playgrounds than Leeds (0.30% v 0.07%), while Norwich offers seven times more playground space per child than Birmingham (4.2 square metres v 0.7 square metres). In London, Islington has five times the playground area of Barnet (0.64% of total urban area v 0.13%), and three times more space per child than Redbridge (2.8 square metres v 0.9 square metres).
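    As a quick arithmetic check, the multiples quoted in these comparisons can be reproduced directly from the figures given in the text (an illustrative sketch only; the city names and numbers are taken verbatim from the paragraphs above):

    ```python
    # Verify the stated ratios against the raw figures quoted in the article.
    comparisons = {
        # label: (larger figure, smaller figure)
        "Liverpool v Norwich, children per playground": (1104, 236),
        "Redbridge v Islington, children per playground": (1567, 204),
        "Leicester v Leeds, % of urban area": (0.30, 0.07),
    }

    for label, (a, b) in comparisons.items():
        # A ratio near 5, 8 and 4 respectively matches the claims
        # "nearly five times", "nearly eight times" and "four times".
        print(f"{label}: {a / b:.1f}x")
    ```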

    Liverpool has the lowest percentage of children within 100, 300 and 500 metres of playgrounds, with Coventry having the lowest percentage at 800 metres. In contrast, Southampton, Plymouth and Reading have the highest percentages of children living close to playgrounds.

    In London, Redbridge and Kingston upon Thames had the lowest percentages of children living close to playgrounds, while Islington, Tower Hamlets and Hackney had the highest levels of provision. These distance measures will be heavily influenced by population density, especially in London (Redbridge is suburban; Islington is inner city). However, patterns outside of London appear more complex.

    Different solutions for different places

    Places like Norwich, Islington and Milton Keynes fared well across all three domains, while places like Liverpool, Leeds or Stockton-on-Tees did comparably poorly in all three. But most areas fell somewhere in between.

    For example, places such as Portsmouth or Nottingham have good scores for distance but have poor provision in terms of size. They would, therefore, benefit most from expanding existing playgrounds.

    In contrast, playgrounds in Brighton and Lincoln are bigger but tend to be further away. Places like these would benefit from a few new strategically positioned playgrounds to fill in the gaps.

    As with any dataset, there are constraints. In future, we want to incorporate additional data on accessibility for disabled children, and we recognise that playgrounds are just one element across the wider spectrum of places where children play. For instance, children in outer London boroughs with few playgrounds might live nearer to woods or sports fields.

    We also acknowledge that we have no data to monitor the quality of playgrounds. Is a 100 square metre playground filled with interesting and safe features? Or a single worn out slide surrounded by fencing? Ultimately, playground use rather than provision is the most important measure. After all, a bad playground will not make children more active.

    Following the launch of the first all-party parliamentary group on play in May 2025, our work is helping campaigners lobby for a “play sufficiency duty” in England (similar to Scotland and Wales) and a new national play strategy.

    Our hope is that, as people become more aware of the problem, we’ll see new policies and better placemaking for children. Already we are working with Play England (England’s national charity for play) on a “digital dashboard” capable of supporting councils to plan more strategically for play in their local areas.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. We mapped 18,000 children’s playgrounds and revealed inequality across England – https://theconversation.com/we-mapped-18-000-childrens-playgrounds-and-revealed-inequality-across-england-252239

    MIL OSI – Global Reports

  • MIL-OSI Global: Dry mouth, bad breath and tooth damage: the effects Ozempic and Wegovy can have on your mouth

    Source: The Conversation – UK – By Adam Taylor, Professor of Anatomy, Lancaster University

    Bad breath is a commonly reported side-effect of weight loss drugs. antoniodiaz/ Shutterstock

    Ozempic and Wegovy have been hailed as wonder drugs when it comes to weight loss. But as these drugs have become more widely used, a number of unintended side-effects have become apparent – with the medication affecting the appearance of everything from your butt to your feet.

    “Ozempic face” is another commonly reported consequence of using these popular weight loss drugs. This is a sunken or hollowed-out appearance the face can take on in people taking weight loss drugs. It can also increase signs of ageing – including lines, wrinkles and sagging skin.

    This happens because the action of semaglutide (the active ingredient in both Ozempic and Wegovy) isn’t localised to act just on the fat in places we don’t want it. Instead, it acts on fat across the whole body – including in the face.

    But it isn’t just the appearance of your face that semaglutide affects. These drugs may also affect the mouth and teeth, too. And these side-effects could potentially lead to lasting damage.




    Dry mouth

    Semaglutide affects the salivary glands in the mouth. It does this by reducing saliva production (hyposalivation), which can in turn lead to dry mouth (xerostomia). This means there isn’t enough saliva to keep the mouth wet.

    It isn’t exactly clear why semaglutide has this effect on the salivary glands. But in animal studies of the drug, it appears the drug makes saliva stickier. This means there’s less fluid to moisten the mouth, causing it to dry out.

    GLP-1 receptor agonist drugs (such as semaglutide) can also reduce water intake by affecting areas in the brain responsible for thirst. Low fluid intake further reduces saliva production, and may even cause the saliva to become thick and frothy and the tongue to become sticky.

    Bad breath

    One other unwanted effect commonly reported by semaglutide users is bad breath (halitosis).

    When there’s less saliva flowing through the mouth, this encourages bacteria that contribute to bad breath and the formation of cavities to thrive. These bacterial species include Streptococcus mutans and some strains of Lactobacillus.

    Another species that has been shown to thrive in conditions where saliva is reduced is Porphyromonas gingivalis. This bacterium is a significant contributor to the production of volatile sulphur compounds, which cause the foul odours characteristic of halitosis.

    Another factor that might explain why semaglutide causes bad breath is that reduced saliva production means the tongue isn’t cleaned as effectively. This is the same reason your “morning breath” is so bad: we naturally produce less saliva at night, which allows bacteria to grow and produce odours. Case report images show some people taking semaglutide have a “furry”-like or coated appearance to their tongue. This indicates a build-up of the bacteria that contribute to bad breath.

    Some people taking the weight loss drug experience a bacterial build-up on their tongue.
    sruilk/ Shutterstock

    Tooth damage

    One of the major side-effects of Ozempic is vomiting. Semaglutide slows how quickly the stomach empties, delaying digestion which can lead to bloating, nausea and vomiting.

    Repeated vomiting can damage the teeth. This is because stomach acid, composed primarily of hydrochloric acid, erodes the enamel of the teeth. The longer vomiting continues – over months and years – the more damage occurs. The back surfaces of the teeth (the palatal surfaces), closest to the tongue, are more likely to be damaged – and this damage may not be obvious to the sufferer.

    Vomiting also reduces the amount of fluid in the body. When combined with reduced saliva production, this puts the teeth at even greater risk of damage. This is because saliva helps neutralise the acid that causes dental damage.

    Saliva also contributes to the dental pellicle – a thin, protective layer that the saliva forms on the surface of the teeth. It’s thickest on the tongue-facing surface of the bottom row of teeth. In people who produce less saliva, the dental pellicle contains fewer mucins – a type of mucus which helps saliva stick to the teeth.

    Reducing the risk of damage

    If you’re taking semaglutide, there are many things you can do to keep your mouth healthy.

    Drinking water regularly during the day can help to keep the oral surfaces from drying out. This helps maintain your natural oral microbiome, which can reduce the risk of an overgrowth of the bacteria that cause bad breath and tooth damage.

    Drinking plenty of water – ideally the recommended daily amount of six to eight glasses – also enables the body to produce the saliva needed to prevent dry mouth. Chewing sugar-free gum is also a sensible option, as it helps to encourage saliva production. Swallowing this saliva keeps the valuable fluid within the body. Gums containing eucalyptus may help to prevent halitosis, too.

    There’s some evidence that probiotics may help to alleviate bad breath, at least in the short term. Using a probiotic supplement or consuming probiotic-rich foods (such as yoghurt or kefir) may be a good idea.

    Practising good basic oral hygiene, tooth brushing, reducing acidic foods and sugary drinks and using a mouthwash all help to protect your teeth as well.

    Women are twice as likely to have side-effects when taking GLP-1 receptor agonists – including gastrointestinal symptoms such as vomiting. This may be due to the sex hormones oestrogen and progesterone, which can alter the gut’s sensitivity. To avoid vomiting, try eating smaller meals since the stomach stays fuller for longer while taking semaglutide.

    If you are sick, don’t immediately brush your teeth as this will spread the stomach’s acid over the surface of the teeth and increase the risk of damage. Instead, rinse your mouth out with water or mouthwash to reduce the strength of the acid and wait at least 30 minutes before brushing.

    It isn’t clear how long these side-effects last. They’ll likely disappear when the medication is stopped, but any damage to the teeth is permanent. Gastrointestinal side-effects can last a few weeks but usually resolve on their own unless a higher dose is taken.

    Adam Taylor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Dry mouth, bad breath and tooth damage: the effects Ozempic and Wegovy can have on your mouth – https://theconversation.com/dry-mouth-bad-breath-and-tooth-damage-the-effects-ozempic-and-wegovy-can-have-on-your-mouth-257859

    MIL OSI – Global Reports

  • MIL-OSI Global: Russia has been working on creating drones that ‘call home’, go undercover and start fires. Here’s how they work

    Source: The Conversation – UK – By Marcel Plichta, PhD Candidate in the School of International Relations, University of St Andrews

    Russia launched its largest single drone attack of the war against Ukraine’s cities on June 1. The Ukrainian Air Force reported that they faced 472 unmanned one-way attack (OWA) drones overnight.

    The record may not stand for long. The previous record was set on May 26, when Moscow launched some 355 drones. The day before, Russia had set a record with 298 Shaheds, which itself surpassed the May 18 tally.

    Russia’s enormous OWA drone attacks came as a surprise to politicians and the general public, but they are the culmination of years of work by the Russian military. Having initially purchased the drones from Iran, Russia began building factories in 2023 to assemble and then manufacture Shaheds (Iranian-designed unmanned drones) in Russia. Greater control over production gave Russia the opportunity to expand the number of Shaheds quickly.




    It also helps Russia gradually upgrade its drones. Investigations into downed Shaheds show that Russia has been coating the drones in carbon, which resists radar detection by absorbing incoming waves instead of reflecting them back. Russia has also been adding SIM cards to transmit data back through mobile networks.

    Shaheds have also had their warheads upgraded. On May 20, Ukrainian media reported that Shaheds were using newer incendiary and fragmentation warheads – which start fires and spread large volumes of shrapnel, respectively – to increase their effectiveness.

    Russia hit Kyiv with its biggest ever drone strike a few days ago.

    These upgrades were kept deliberately simple in order to keep the drone’s cost – its major advantage over a missile – under control. These drones are both inexpensive and long-range.

    This means that an attacker such as Russia can launch hundreds every month at targets across Ukraine with little concern about how many are lost along the way. Meanwhile, the defender is stuck figuring out how to shoot all incoming drones down at a reasonable cost indefinitely.

    The problem is made even more complicated by the fact that air defence systems are sorely needed at the front line to shoot down hostile aircraft, making it a difficult trade-off.

    Adding to the problem is the recent production of decoy Shaheds. While they carry no warhead and pose little threat by themselves, Ukrainian air defence cannot always tell the decoys from the real thing and still needs to shoot them down. In late May, Ukrainian officials told the media that up to 40% of incoming Shaheds were decoys.

    Consequently, Russia’s 472-drone attack reflects all of Russia’s innovations so far: increasing the number of drones that survive, improving their lethality, and using decoys alongside armed drones to ensure as many as possible reach their targets.

    What are the challenges for Ukraine?

    Ukraine shoots most incoming Shaheds down. Even the 472-drone attack still had 382 claimed interceptions, a rate of 81%. However, the relatively high interception rate disguises the Shahed’s benefits for Russia.
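    The claimed interception rate follows directly from the two figures reported above (a small illustrative check only, using the numbers as quoted):

    ```python
    # Interception rate implied by the reported figures:
    # 472 drones launched, 382 claimed interceptions.
    launched = 472
    intercepted = 382

    rate = intercepted / launched
    print(f"{rate:.0%}")  # rounds to the 81% cited in the text
    ```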

    Shaheds are cheap by military standards, so launching constant attacks is a disproportionate burden for Ukrainian air defence units. Kyiv has mobilised an enormous amount of resources to protect its cities, from mobile units in trucks to counter-Shahed drones that function like a cheaper anti-aircraft missile.

    That said, these systems often have short ranges, which means that the savings per interception are somewhat offset by the need to maintain many hundreds of systems across a country as large as Ukraine. Ukraine also has the option of trying to strike Russia’s Shahed factories, which they have attempted a few times.

    Despite Ukraine’s evolving air defence, Russia still sees military benefits to constant Shahed attacks. In a study I contributed to last year, we found that Russia’s initial OWA drone strategy in 2022 and 2023 did little to force Ukraine to negotiate an end to the war on terms favourable to Russia.

    That may still be the case now, but the volume of drones and the high tempo of attacks means that Russian strategy could well be aimed at systematically exhausting Ukrainian air defence.

    As Ukraine grapples with unpredictable US military support, Kyiv is more vulnerable to running out of ammunition for its more advanced air defence systems. This means that constant Shahed attacks make it more difficult for Ukraine to stop incoming missiles, which carry much larger warheads.

    Ukraine’s drone strike this week.

    Of course, Ukraine has its own versions of the Shahed, which it uses to routinely launch strikes against Russian military and oil facilities. Less is known about Ukraine’s OWA drones, but they often use many similar features to Shaheds such as satellite navigation.






    For Russia’s Vladimir Putin, using Shaheds is not all about military benefit. Politically, he has increasingly used Shahed attacks to project a sense of power to his domestic audiences. On May 9, Russia paraded Shaheds through Moscow’s streets as part of its annual Victory Day celebrations, which had not been done in years past.

    Ukraine has begun employing its own OWA drones as part of the “Spiderweb” operation to attack military and oil infrastructure across Russia.

    Russia’s 472-drone attack is unlikely to remain its largest attack for long. Putin has shown a determination to expand the scale and tempo of his drone campaign and to resist Ukraine’s calls for a permanent “ceasefire in the sky”, but this week Ukraine’s drone strategy has shown that prolonging the drone war can also have serious and unexpected effects for Moscow.

    So long as the conflict continues, Ukraine’s defenders will find themselves facing more, and better, drones aimed at their cities. But increasingly it looks like Russia must worry about Ukraine’s drone capabilities too.

    Marcel Plichta works for Grey Dynamics Ltd. as an intelligence instructor.

    ref. Russia has been working on creating drones that ‘call home’, go undercover and start fires. Here’s how they work – https://theconversation.com/russia-has-been-working-on-creating-drones-that-call-home-go-undercover-and-start-fires-heres-how-they-work-257699

    MIL OSI – Global Reports

  • MIL-OSI Global: Children need the freedom to play on driveways and streets again – here’s how to make it happen

    Source: The Conversation – UK – By Debbie Watson, Professor In Child and Family Welfare, University of Bristol

    BearFotos/Shutterstock

    Children no longer play freely in driveways, on their streets or in urban parks and courtyards. In many places, children’s freedom to roam has been diminishing for generations, but the pandemic has hastened the decline of this free play.

    Since the pandemic, children’s physical activity has become ever more structured. It now mostly happens in after-school or sports clubs, while informal, child-led play continues to decline.

    In many cases, children don’t have easy access to purpose-built spaces like playgrounds. They need adults to get them there. Without access to more informal spaces where they can spend time with other children, they often lack daily opportunities for play.

    Unstructured play happens when children are given the opportunity to behave freely in spaces with other children. They will often need support from adults – such as through supervision – to help them play safely.




    Play – and especially unstructured opportunities for play – is essential for children. Beyond providing opportunities for physical activity, play is good for children’s development. It helps them to push boundaries, find ways of exploring friendships and resolving conflicts, and to stretch their imagination and creativity.

    Schools are important for encouraging play. They can, for instance, combine play with potential benefits for physical activity levels, and with compassion for the environment and an interest in climate change and biodiversity.

    But they are not the sole solution. Supporting play needs to reach beyond the school gates.

    Urban play

    The charity Playing Out has been working in Bristol, where we are based, and in many other cities across the UK to champion community-led “play streets”. Residents apply to their local council for temporary road closures, which allow them to let their children play on the street without fearing passing cars. Parents and carers supervise resident children as they play outside their houses.

    Finding ways to encourage children to play in places such as driveways, courtyards, and on their streets can also help with their independence in the outdoors. The three of us have worked on a variety of research projects on children’s interaction with the urban environment.

    Lydia is involved with children and families living in an urban area of Bristol, exploring how to get children to play in these urban pockets of space. The “OK to play” project intends to create a toolkit to help families turn these small threshold areas, such as driveways, into play spaces.

    The experience of COVID lockdowns worldwide emphasised the importance of green spaces and nature for all of us in maintaining good levels of physical and mental health. This was often particularly challenging for children who lived in cities without easy access to gardens or green spaces.

    Debbie has worked with artists and primary-aged children on the “What does nature mean to me” project. The children explored green spaces in Bristol, collecting natural materials for collages as well as painting, drawing and taking photographs.

    The children were fascinated to see that nature resides even in the most urban places. Making art as well as spending time freely in natural spaces gave the children opportunities to explore big ideas: their hopes and fears for the future and what their role might be in the climate crisis.

    Helping play happen

    Adults have a crucial role in making being outside safer for children’s play. What the projects we’ve worked on have in common is willing adults who see the value of unstructured play, who can enthuse children, put in place structures to make being outside safer and support each other in enabling more children to engage in their right to play.

    Unstructured play is important for children’s development.
    MPH Photos/Shutterstock

    If you’re a parent or carer, you can take action. You could start by considering how you prioritise how your children spend their time. This might mean signing up to one less activity class, and instead using that regular time to supervise your children – and perhaps offering to supervise friends or neighbours’ children, too – as they play freely in your driveway, courtyard or other urban pocket.

    Perhaps you could

    ref. Children need the freedom to play on driveways and streets again – here’s how to make it happen – https://theconversation.com/children-need-the-freedom-to-play-on-driveways-and-streets-again-heres-how-to-make-it-happen-254543

    MIL OSI – Global Reports

  • MIL-OSI Global: Damien Hirst at 60: a genius who never stops stretching our understanding of art and life – or a tired trickster ruined by his riches?

    Source: The Conversation – UK – By Daisy Dixon, Lecturer in Philosophy, Cardiff University

    “I’m an artist, I have no idea about money.”

    Damien Hirst is never far from scandal. Perhaps best known for immersing animal corpses in formaldehyde and selling them as art, the “enfant terrible” of the 1990s Young British Artists (YBA) movement seems to court controversy for a living – and has made an extraordinary amount of money in the process. Reputedly worth around £700 million, this working-class lad “easily” topped a recent list of the world’s richest artists.

    Money is at the root of a lot of the questions that hover around Hirst’s legacy to the art world as he reaches his 60th birthday. Few artists have stress-tested the question of artistic value (and price) more than him – not least in his 2007 work For The Love of God: a platinum cast of a human skull encrusted with thousands of flawless diamonds.

    It cost £14 million to produce and had an asking price of £50 million. Guardian art critic Jonathan Jones praised it as “the most honest work of art” in its shameless reflection of capitalist consumption, while Observer columnist Nick Cohen accused it of not being ironic at all in its supposed critique of the art market – but rather, “rolling in it and loving it”. Hirst himself said of the skull piece: “It’s iconic and ironic. It has the two meanings.”

    Last year, Hirst’s money-related motives were called into question again in an investigation by the Guardian which revealed he had backdated three formaldehyde sculptures to the 1990s when they were, in fact, made in 2017. The report also found he had backdated some of the 10,000 original spot paintings from his NFT project The Currency to 2016, despite them being made between 2018 and 2019.

    Hirst’s company, Science Ltd, defended the artist by reminding critics that his art is conceptual – and that he has always been clear that what matters is “not the physical making of the object or the renewal of its parts, but rather the intention and the idea behind the artwork”. His lawyers pointed out:

    The dating of artworks, and particularly conceptual artworks, is not controlled by any industry standard. Artists are perfectly entitled to be (and often are) inconsistent in their dating of works.

    But some of the art world did not respond kindly to this approach. Writing about Hirst’s “backdating scandal”, New York’s Rehs Galleries asked not only if Hirst could be sued by buyers and investors, but whether he was in creative decline. And Jones accused Hirst of being stuck in the past, calling the Guardian’s findings a “betrayal” for the artist’s admirers which could “threaten to poison Hirst’s whole artistic biography”.




    Ever since Hirst burst onto the art scene in the 1990s with his macabre readymades (or “objets trouvés”) of dead animals in vitrines, he has divided art critics and the public alike. He has faced – and denied – multiple allegations of plagiarism and been censured by animal rights activists, while also being acclaimed as a “genius” and one of the leading global artists of the 20th and 21st centuries. Amid all the eye-watering auction sales, he has donated artworks to numerous charities throughout his career.

    So, was the backdating incident another instance of Hirst mastering the art of the concept – and even offering a sly critique of consumerism and the art world machine, of which he is such a large cog? Or was it really just a big lie by a multi-millionaire artist seeking even more financial gain?

    As philosophers of art, we think our discipline can shed light on these complex questions by exploring the nature of conceptual art, aesthetic deception and the ethics of the art market. As we contemplate the legacy of Hirst at 60, we ask: must artists always be truthful?

    What only the best art can attain

    Hirst had a humble upbringing. Born in the English port city of Bristol in 1965, he was raised in Leeds by his Irish mother, who encouraged him to draw. He never met his father and got into trouble with the police on a few occasions in his youth. His early artistic education was rocky too: he got an E grade in A-level art and was rejected a handful of times by art schools.

    But as a teenager, he had fallen in love with Francis Bacon’s paintings, later explaining that he admired their visceral expressions of the horror of the fragile body, and that he “went into sculpture directly in reaction … to Bacon’s work”. Hirst would also use his work experience in a morgue to hone his anatomical drawing skills.

    His love of conceptual art blossomed when he began studying fine art at Goldsmiths, University of London, in 1986 – taught by art world legends such as Michael Craig-Martin and catching the attention of collector and businessman Charles Saatchi. Craig-Martin had risen to fame for his conceptual artwork An Oak Tree (1973), consisting of a glass of water on a pristine shelf with a text asserting that the glass was, in fact, an oak tree. Hirst has described this artwork as “the greatest piece of conceptual sculpture – I still can’t get it out of my head”.

    In 1990, Saatchi attended one of Hirst’s co-curated shows. He reportedly stood staring, mouth agape, at Hirst’s piece consisting of a rotting cow head being engulfed by maggots, before buying it. It seems a rather apt beginning to their stormy relationship.

    Hirst’s fascination with death culminated in his most notorious work of art, The Physical Impossibility of Death in the Mind of Someone Living (1991) – a dead tiger shark, caught off the coast of Queensland in Australia, preserved in formaldehyde in a glass vitrine.

    We encountered the work, separately and ten years apart, in London and New York. We both felt inclined to dislike and dismiss it. Instead, we were simply overwhelmed. By forcing us to stare death in the face, literally, the work set everything on edge – awe-inspiring and horrifying, life-affirming and fatal, in your face yet somehow apart and absent.

    Like it or not, Hirst’s shark achieved what only the best art can: jolting us out of our everyday registers – making us confront mortality, the value of life, and the human condition.

    Video: Khan Academy.

    Not everyone agreed, of course. After it was exhibited in the first YBA show at the Saatchi Gallery in 1992, there was a swarm of hate. According to the Stuckist Art Group (an anti-conceptual art movement), a dead shark isn’t art. Of Hirst’s entire oeuvre, the group’s co-founders have said: “They’re bright and they’re zany – but there’s fuck all there at the end of the day.”

    After Hirst won the Turner Prize in 1995 for Mother and Child, Divided (a bisected cow and calf in glass tanks) Conservative politician Norman Tebbit asked whether the art world had “gone stark raving mad”. Art critic Brian Sewell exclaimed that Hirst’s work is “no more interesting than a stuffed pike over a pub door”.

    But Hirst never seemed to care about such criticism as he tackled controversial themes ranging from death, science and religion to the unrelenting power of capitalism. Along the way, he has used his power to criticise the very art world of which he forms such an important part, and from which he has gained such enormous riches.

    You might say his art reached a logical endpoint with The Currency in 2021 – a conceptual experiment in which 10,000 unique, hand-painted spot paintings were reduced to money itself, as they corresponded to 10,000 non-fungible tokens (NFTs). Buyers were given the choice of keeping either the physical or the digital version, while the other would be destroyed. Speaking to the actor and art enthusiast Stephen Fry, Hirst said of these paintings:

    What if I made these and treated them like money? … I’ve never really understood money. All these things – art, money, commerce – they’re all ethereal. It relies not on notebooks or pieces of paper but belief, trust.

    How Hirst makes his art

    It’s not just what Hirst’s art supposedly means that sometimes rocks the boat, but how he makes it.

    While he began his career by personally making and manipulating his chosen artistic materials – from paint and canvas to flies and maggots – he now unapologetically relies on a studio populated by numerous assistants to produce the works that bear his name. It is largely these studio workers who pour the paint on spinning canvases, handle the formaldehyde, construct the glass boxes, and source the dead animals.

    Hirst has fully endorsed the conceptual artist’s mantra of “the art is the idea”. If the artwork is the idea rather than the material object, then it should suffice merely for the artist to think or conceptualise the objects for them to count as his works of art. According to this perspective, exactly who makes the objects which are exhibited, sold and debated in the media is entirely unimportant.

    But to some, this adds to the ways in which they feel deceived or “had” by Hirst. After all, at least in the western artistic tradition, the connection between artist and artwork has for hundreds of years been considered unique, sacred even. If an artist doesn’t actually make the art any more, to what extent can they really be said to be an artist at all?

    Except that, in this respect, Hirst is not particularly unusual. Outsourcing the physical act of making an artwork is almost standard among contemporary artists such as Anish Kapoor, Rachel Whiteread and Jeff Koons – all of whom have long relied on trainee artists, engineers, architects, constructors and more to build their large structural works.

    And while Andy Warhol was the trendsetter in this regard from the early 1960s – calling his studio The Factory for its assembly line-style of production – the practice predates even him by hundreds of years. The great masters of the 16th, 17th and 18th centuries, having acquired sufficient fame and fortune, were rarely the sole creators of their masterpieces.

    The 17th-century Flemish artist Rubens, for example, would often leave the painting of less central or prominent features in his works to his studio assistants – many of whom, including Anthony van Dyck and Jacob Jordaens, went on to highly successful artistic careers of their own. Even 14-year-old Leonardo da Vinci started out as a studio apprentice in the workshop of the Italian sculptor and painter Andrea del Verrocchio.

    Unlike Rubens, however, Hirst now only rarely makes any kind of material contribution to his works, beyond adding his signature. The Currency series involved Hirst merely adding a watermark and signature to the thousands of handmade spot paintings.

    Video: HENI.

    Also, Hirst’s works make no formal recognition of this studio input, whereas for Rubens, the arrangement was fairly transparent. Indeed, the division of labour was sometimes even negotiated with the painting’s buyer – the more a buyer was willing to pay, the more Rubens would paint himself.

    But Hirst makes no secret of his lack of physical involvement in the material process, explaining:

    You have to look at it as if the artist is an architect – we don’t have a problem that great architects don’t actually build the houses … Every single spot painting contains my eye, my hand and my heart.

    Hirst’s social media pages often show the artist arriving at his studio while his team are busy at work. And clearly, not all potential buyers care about his “hands-off approach” – a large part of what they value is, precisely, the signature. In 2020, Hirst told The Idler magazine’s editor Tom Hodgkinson:

    If I couldn’t delegate, I wouldn’t make any work … If I want to paint a spot painting but don’t know how I want it to look, I can go to an assistant … When they ask how you want it to look, you can say: ‘I don’t know, just do it.’ It gives you something to kick against or work against.

    In the past decade, though, Hirst says he has scaled back his studio, admitting his art life felt like it was out of control:

    You start by thinking you’ll get one assistant and before you know it, you’ve got biographers, fire eaters, jugglers, fucking minstrels and lyre players all wandering around.

    The product of a specific place and time

    Hirst disrupts our beliefs about art to an extent matched by few of his contemporaries. Always in the business of fragmenting the already vague expectations of the art market – and wider general public – he continues the trajectory outlined by fellow experimental conceptual artists such as Marcel Duchamp, Joseph Beuys, Adrian Piper, Sol LeWitt, Joseph Kosuth and Yoko Ono – now well over 50 years ago.

    When the making of art moves into this level of abstraction, a historical fact like the precise inception date seems harder to pin down – and it becomes much less clear which aspects of the creative process should determine when the work was “made”.

    Of course, the same question arises outside the confines of this artistic genre. How should we deal with performative arts such as theatre, jazz or opera? Is it all that important to date John Coltrane’s Blue Train to its first recording in 1957, rather than any of the other dates on which the American jazz legend performed it? Surely some aesthetic and artistic qualities are added on each occasion?

    However, art in general, be it Blue Train or one of Hirst’s spot paintings, is always the product of a specific place and time. It is undoubtedly a significant fact about Hirst’s Cain and Abel (1994) – one of the artworks highlighted by the Guardian misdating investigation – that it was “made” in the YBA boom of the 1990s.

    Can we engage with these pieces without bringing knowledge of this fact into our experience of them? Yes. Can we grasp at least some of their wider meaning? Almost certainly. But can we fully appreciate them as cultural objects – defining a precise moment in the evolution of art and society at large, perhaps foreseeing a certain shift in our larger value systems including what art means to us? Maybe not.

    Hirst may well believe he is following a robust and historical line of artistic reasoning, and therefore telling the truth as he sees it. This is certainly the line his lawyers took in their public statement in response to the backdating allegations.

    But there is another possibility we need to consider – one that touches on the worries of some of Hirst’s critics. What if Hirst intentionally misled the public for financial and commercial gain, and that the dating debacle has nothing to do with his cunning conceptual practice?

    Jon Sharples, senior associate at London-based law firm Howard Kennedy – one of the first UK practices to advise on art and cultural property law – outlined a few reasons why an artist might deliberately fudge the origins of their art:

    The potential for commercial pressure to do so is obvious. If works from a certain period achieve higher market prices than works from other periods, there is a clear incentive to increase the supply of such works to meet the demand for them.

    Kazimir Malevich’s Black Square.
    State Russian Museum/Wikimedia Commons

    Another reason Sharples offered is an art-historical one – to make the artist appear more radical: “In the linear, western conception of art history – in which ‘originality’ is often elevated above all other artistic virtues, and great store is placed in being the ‘first’ artist to arrive at a particular development – artists have sometimes been given to tampering with the historical record.”

    Here, Sharples referenced the famous example of “the father of abstraction”, Russian artist Kazimir Malevich, backdating the first version of his Black Square by two years.

    So, has Hirst just told a big fib about the origins of some of his art?

    Philosophers largely agree that lying involves asserting something you believe to be untrue; speaking seriously but not telling the truth. And most of the time, we all assume that people around us abide by the norm that everyone ought to speak truthfully to each other. If we didn’t believe this, we would barely be able to communicate with one another. Lying involves violating this “truth norm”.

    Yet, the case of art seems to stand in stark contrast to this. When we ask whether an artist has lied as part of their artistic practice, it is often not clear that there is a straightforward truth norm in the art world to be violated: it’s not clear that the artist is speaking ‘seriously’ in the first place.

    I (Daisy) have researched in depth the reasons why lying in the art world is such a tricky business. In many exhibitions, it is the aesthetic experience that is of primary value. If what matters is creating beauty, then straightforward truth is not the point.

    Moreover, even in cases where the art is designed to convey a specific message, it’s tricky to say in what sense they ought to tell “the truth”. Many artworks represent fictional scenarios which needn’t be fully accurate.

    For instance, it was quite acceptable in the 16th century for painters of religious paintings to give central biblical figures inaccurate clothing – and for portrait artists not to paint their sitter’s flaws and blemishes. And in the perplexing art world of the 21st century, many post-1960 artforms are designed to challenge and critique the very nature of truth itself.

    All of which means straightforward “truth games” do not operate as smoothly in the art world as they do in the ordinary world. With its self-reflective and self-critical structure, the art world of today offers a space to think open-endedly and creatively. Do you expect everything you see in an art gallery, or even speeches by conceptual artists, to be straightforwardly “true”? We don’t think so.

    The art world is hardly renowned for its straightforwardly communicated messages. To accuse Hirst of lying assumes he is playing the truth game that the rest of us are signed up to in the first place. And it’s not clear he is.

    Hirst might be closer to a novelist or actor who plays with and explores the very nature of truth and falsehood. In this way, he’s maybe at most a “bullshitter” who doesn’t play – or care for – the truth game at all.

    The real problem?

    But this fascination with Hirst’s dating practices may overlook the more important – if equally complex – problem of how his artworks were made, rather than when. Are the ethical concerns about the production of Hirst’s enormous oeuvre the real issue in assessing his legacy as an artist?

    For instance, Hirst has been criticised for treating his staff as “disposable”. During the peak of the COVID pandemic, he laid off 63 of his studio assistants even though his company had reportedly received £15 million of emergency loans from the UK government.

    And while Hirst’s lawyers insist his studios always adhere to health-and-safety regulations, some of the “factory line” workers producing artworks for The Currency were allegedly left with repetitive strain injuries. One artist described their year-long toil as “very, very tedious”. Another commented on the work tables being at a low level, forcing them to constantly bend down.

    Hirst has publicly praised assistants such as the artist Rachel Howard, who he described as “the best person who ever painted spots for me”. Likewise, Howard described working with Hirst as “a very good symbiotic” relationship.

    Another area of enduring controversy is Hirst’s use of animals. In 2017, Artnet magazine estimated that nearly 1 million animals had been killed for his artworks over the years, including 36 farm animals, 685 sea creatures, and 912,005 birds and insects. The same year, Italian animal rights group 100% Animalisti summarised the concerns about animal ethics in Hirst’s art:

    Hirst is famous for exhibiting slain animals … and for the use of thousands of butterflies whose wings are torn and glued on various objects. Death and the taste of the macabre serve to attract attention. Then wealthy collectors such as Saatchi and even the prestigious Sotheby’s artificially inflate the prices of Hirst’s junk. It’s a squalid commercial operation based on death and contempt for living and sentient beings.

    Video: Channel 4 News.

    Indeed, some of Hirst’s macabre formaldehyde pieces are known for rotting a little too much. The Physical Impossibility of Death in the Mind of Someone Living originally deteriorated due to an improper preservation technique, and had to be replaced by another shark caught off the same Australian coast. It’s not clear how many sharks have now been killed – or will need to be killed in the future – to preserve this masterpiece.

    Further concerns have been raised about the environmental ethics of Hirst’s art, including that The Currency project incurred a hefty carbon footprint because of its reliance on blockchain technology. While Hirst used a more environmentally-friendly sidechain to release his NFTs, he still received payment via bitcoin, which has a far higher energy consumption.

    All of this raises wider questions about the art world’s role, for both good and bad, in modern life – from the treatment of workers in the gig economy to the climate emergency, biodiversity and animal rights.

    Traditionally, art historians, critics and investors have championed an artwork’s meaning over any of its moral flaws in its production. But the ethics of artmaking are now being questioned by philosophers such as ourselves, as well as by many influential figures in the art world. Artworks that incur large carbon footprints, cause damage to ecosystems, or use and kill animals, are now considered morally flawed in these ways.

    Philosophers such as Ted Nannicelli argue that these ethical defects can actually diminish the artistic value of the work of art. Meanwhile, artists such as Angela Singer, Ben Rubin and Jen Thorp use their art for animal and eco-activism, while doing no harm to creatures or ecosystems in the process.

    As we both acknowledge, Hirst’s shark expressed a laudable meaning in an arresting way. But is this enough to excuse the (repeated) killing of this awesome animal? Do we become complicit in its death by praising it as art? It is a question anybody who was impressed by its sheer aesthetic presence all those years ago should ask themselves.

    In this and many other ways, Hirst’s work continues to raise fundamental questions about art – long after it was created, or dated. If nothing else, surely this confirms his enduring position in the British art establishment.

    Damien Hirst’s representatives were contacted about the criticisms of Hirst that are highlighted in this article, but they did not respond by the time of publication.



    Elisabeth Schellekens has received funding from Vetenskapsrådet (Swedish Funding Council) as Principal Investigator for research into Aesthetic Perception and Aesthetic Cognition (2019-22), and an AHRC Innovation Award on Perception and Conceptual Art with Peter Goldie (2003).

    Daisy Dixon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Damien Hirst at 60: a genius who never stops stretching our understanding of art and life – or a tired trickster ruined by his riches? – https://theconversation.com/damien-hirst-at-60-a-genius-who-never-stops-stretching-our-understanding-of-art-and-life-or-a-tired-trickster-ruined-by-his-riches-257921

    MIL OSI – Global Reports

  • MIL-OSI Global: For Haitian migrants in the Dominican Republic, ‘reproduction is like a death sentence’

    Source: The Conversation – Canada – By Masaya Llavaneras Blanco, Assistant Professor of Development Studies, Huron University College, Western University

    On May 9, Lourdia Jean-Pierre, a 32-year-old Haitian migrant woman, died after giving birth in her rural home in El Ceibo, Dominican Republic. The cause of death was a postpartum hemorrhage, according to a news report in The Haitian Times.

    Despite needing medical attention, Jean-Pierre was reportedly afraid to go to the hospital. Why? She feared being deported.

    Jean-Pierre was not wrong to be afraid. Soon after her death, paramedics arrived with police officers to check on the newborn and detain her husband, Ronald Jean. Jean left the newborn with a relative as he waited to be deported.

    Between April 21 and the end of May this year, 900 lactating or pregnant women were deported from the Dominican Republic to Haiti. These deportations are part of the new, extreme tough-on-immigration policies in the Dominican Republic. In May alone, 22,778 Haitians were deported to Haiti.

    A new wave of mass deportations

    Last October, the Dominican government initiated a new wave of mass deportations as President Luis Abinader ordered a quota of 10,000 Haitians deported per week. On April 6, he announced new extraordinary measures to control immigration.

    The rollout of this policy began on April 21. Migration officials were assigned to work in hospitals and required migrants to show their documents before receiving medical care or face deportation.

    The new protocol does not explicitly mention pregnant and breastfeeding women. However, it effectively targets them in hospitals: the policy was immediately implemented in the 33 hospitals “that report the largest number of pregnant migrant women — mainly those of Haitian origin.”

    The targeting of pregnant women is not new

    The targeting of pregnant migrants in the DR isn’t new. In September 2021, the Ministry of the Interior and Police announced a protocol to limit pregnant migrant women’s access to health care in the DR.

    Dozens of deportation raids were carried out in maternity wards in the capital and other large urban centres. According to immigration officials, attendance at pre-natal appointments fell by 80 per cent by the end of 2021.

    Deportation raids in maternity wards slowed down between 2022 and 2024, but women were still afraid to go for their check-up appointments. Pre-natal care is essential in preventing maternal deaths.

    According to a media report, the Dominican Republic’s National Health System estimates that Haitian women accounted for 56 per cent of maternal deaths in the first half of 2022.

    No documents, no health care

    There are almost no ways for Haitians in the Dominican Republic to apply for or renew visas. And Dominican consulates in Haiti have been closed since September 2022.

    There is a long history of a lack of documentation among Dominicans of Haitian ancestry, exacerbated by the denationalization of up to 200,000 Dominicans of Haitian ancestry in 2013. That means Dominican-Haitians are also at risk of being deported when accessing health care.

    This happened to Mirryam Ferdinad who, according to community reports, went to a hospital for a scheduled Caesarean section and was instead detained in Haina, the country’s largest migrant detention centre. She was released a week later, on Saturday, May 31.

    Deportations are expected to occur after people recover from their ailments. But human rights organizations report that deportations regularly take place in unsanitary and unsafe conditions, in trucks filled beyond capacity.

    Structural racism

    Elena Lorac, co-founder of Reconocido, an advocacy group of denationalized Dominicans of Haitian descent, said the situation is exacerbated by structural racism.

    Anti-Black racism and anti-Haitianism run through the politics of the Dominican Republic, whereby Blackness is associated with undesirable cultural and physical traits and linked to neighbouring Haiti.

    In contrast, DR’s nationalist groups, such as the Antigua Orden Dominicana, emphasize their colonial Spanish roots.

    Reproductive health rights under attack

    Haitian pregnant women are between a rock and a hard place. Hemorrhages and unsafe abortions are among the main causes of maternal mortality. Most of these cases are preventable if pregnant people have access to health services.

    Haiti has the highest maternal mortality in the Western hemisphere.

    Maternal mortality in the DR is lower. But its mistreatment of pregnant migrants, and its criminalization of abortion in all circumstances, pose significant risks for women.

    Haiti: A country in humanitarian crisis

    Deported migrants usually have no family or social networks in the places they are deported to, and limited or no access to health and social services.

    Dominican-Haitians also get deported because they have no legal documents despite having lived there their whole lives. They often have never been to Haiti, and barely speak Haitian Creole.

    In Haiti, about 40 per cent of primary health care was funded by the now almost completely defunded United States Agency for International Development (USAID).

    Though there are some groups supporting deportees, global cuts to humanitarian agencies like the United Nations High Commissioner for Refugees and International Organization for Migration are affecting personnel on the ground. The humanitarian conditions in Haiti are increasingly challenging.

    Financial cuts worsen the extremely precarious living conditions. Nine per cent of the population is internally displaced. More than half the population is expected to experience acute food insecurity by June.

    Protesting violence

    On May 28, 13 organizations led a demonstration in front of the Dominican Republic Health Ministry. Peasant women, domestic workers, artists and feminists demanded an end to deportation raids in maternity wards and the removal of immigration officials from hospitals.

    Sirana Dolis, co-founder of the Movement of Dominican-Haitian Women (MUDHA), said of the situation:

    “Haitian women and women of Haitian descent are a people who love life, but under these circumstances, reproduction is like a death sentence.”

    Masaya Llavaneras Blanco receives funding from the Social Sciences and Humanities Research Council (SSHRC).

    ref. For Haitian migrants in the Dominican Republic, ‘reproduction is like a death sentence’ – https://theconversation.com/for-haitian-migrants-in-the-dominican-republic-reproduction-is-like-a-death-sentence-257427

    MIL OSI – Global Reports

  • MIL-OSI Global: The atmosphere is getting thirstier and it’s making droughts worse – new study

    Source: The Conversation – UK – By Solomon Gebrechorkos, Research Fellow in Climate Change Attribution, University of Oxford

    luchschenF/Shutterstock

    Droughts are becoming more severe and widespread across the globe. But it’s not just changing rainfall patterns that are to blame. The atmosphere is also getting thirstier.

    In a new study published in Nature, my colleagues and I show that this rising “atmospheric thirst” – also known as atmospheric evaporative demand (AED) – is responsible for about 40% of the increase in drought severity over the last four decades (1981-2022).

    Imagine rainfall as income and AED as spending. Even if your income (rainfall) stays the same, your balance goes into deficit if your spending (AED) increases. That’s exactly what’s happening with drought: the atmosphere is demanding more water than the land can afford to lose.

    As the planet warms, this demand grows – drawing more moisture from soils, rivers, lakes, and even plants. With this growing thirst, droughts are getting more severe even where rain hasn’t significantly declined.




    The process of AED describes how much water the atmosphere wants from the surface. The hotter, sunnier, windier and drier the air is, the more water it requires – even if there isn’t less rain.

    So even in places where rainfall hasn’t changed much, we’re still seeing worsening droughts. This thirstier atmosphere dries the land out faster and more intensely, adding stress wherever water is scarce.

    Our new analysis reveals that AED doesn’t just make existing droughts worse – it expands the areas affected by drought. From 2018 to 2022, the global land area experiencing drought rose by 74%, and 58% of that expansion was due to increased AED.

    Our study highlights that the year 2022 stood out as the most drought-stricken year in over four decades. More than 30% of the world’s land experienced moderate to extreme drought conditions. In both Europe and east Africa, the drought was especially severe in 2022 – this was driven largely by a sharp increase in AED, which intensified drying even where rainfall hadn’t dropped significantly.

    Crop yields are severely affected by water stress.
    Scott Book/Shutterstock

    In Europe alone, widespread drying had major consequences: reduced river flows hindered hydropower generation, crop yields suffered due to water stress, and many cities faced water shortages. This put unprecedented pressure on the water supply, agriculture and energy sectors, threatening livelihoods and economic stability.

    My team’s new research brings clarity to the dynamics of drought. We used high-quality global climate data, including temperature, wind speed, humidity and solar radiation – these are the key meteorological variables that influence how much water the atmosphere can draw from the land and vegetation. The team combined all these ingredients to measure AED – essentially, how “thirsty” the air is.

    Then, using a widely recognised drought index that includes both rainfall and this atmospheric thirst, we could track when, where and why droughts are getting more severe. With this metric, we can calculate how much of that worsening is due to the atmosphere’s growing thirst.

    The future implications of this increasing atmospheric thirst are huge, especially for regions already vulnerable to drought such as western and eastern Africa, western and southern Australia, and the southwestern US, where AED was responsible for more than 60% of drought severity over the past two decades.

    Without factoring in AED during drought monitoring and planning, governments and communities may underestimate the true risk they face. With global temperatures expected to rise further, we can expect even more frequent and severe droughts. We need to prepare. That involves understanding and planning for this growing atmospheric thirst.

    Driving drought

    Knowing what is causing droughts in each specific location enables smarter climate adaptation. AED must be a central part of how we monitor, model and plan for drought.

    Identifying the specific drivers of drought is essential for tailoring effective ways to cope with drought. If droughts are mainly due to declining rainfall, then the focus should be on water storage and conservation. But if AED is the main driver – as it is in many places now – then strategies must address evaporative loss (i.e. the amount of water lost from the surface and plants to the atmosphere) and plant water stress. This might involve planting drought-resistant crops, constructing irrigation systems that use water more efficiently, improving soil health or restoring habitats to keep moisture in the land.

    As our research shows, rising AED – driven by global warming – is intensifying drought severity even where rainfall hasn’t declined. Ignoring it means underestimating risk.




    Solomon Gebrechorkos receives funding from the UK Foreign, Commonwealth and Development Office (FCDO; grant no. 201880) and the UK Natural Environment Research Council (NERC; grant no. NE/S017380/1).

    ref. The atmosphere is getting thirstier and it’s making droughts worse – new study – https://theconversation.com/the-atmosphere-is-getting-thirstier-and-its-making-droughts-worse-new-study-258022


  • MIL-OSI Global: From sovereignty to sustainability: a brief history of ocean governance

    Source: The Conversation – France – By Kevin Parthenay, Professeur des Universités en science politique, membre de l’Institut Universitaire de France (IUF), Université de Tours

    The United Nations Ocean Conference (UNOC 3) will open in Nice, France, on June 9, 2025. It is the third conference of its kind, following events in New York in 2017 and Lisbon in 2022. Co-hosted by France and Costa Rica, the conference will bring together 150 countries and nearly 30,000 individuals to discuss the sustainable management of our planet’s oceans.

    This event is presented as a pivotal moment, but it is actually part of a significant shift in marine governance that has been going on for decades. While ocean governance was once designed to protect the marine interests of states, nowadays it must also address the numerous climate and environmental challenges facing the oceans.

    Media coverage of this “political moment”, however, should not overshadow the urgent need to reform the international law applicable to the oceans. Failing that, the summit risks being nothing more than another platform for vacuous rhetoric.

    To understand what is at stake, it is helpful to begin with a brief historical overview of marine governance.

    The meaning of ocean governance

    Ocean governance has changed radically over the past few decades. The focus shifted from the interests of states and the corresponding body of international law, solidified in the 1980s, to a multilateral approach initiated at the end of the Cold War, involving a wide range of actors (international organizations, NGOs, businesses, etc.).

    This governance has gradually moved from a system of obligations pertaining to different marine areas and the regimes of sovereignty associated with them (territorial seas, exclusive economic zones (EEZs), and the high seas) to a system that takes into consideration the “health of the oceans.” The aim of this new system is to manage the oceans in line with the sustainable development goals.

    Understanding how this shift occurred can help us grasp what is at stake in Nice. The 1990s were marked by declarations, summits and other global initiatives. However, as evidenced below, the success of these numerous initiatives has so far been limited. This explains why we are now seeing a return to an approach more firmly rooted in international law, as evidenced by the negotiations on the international treaty on plastic pollution, for example.

    The “Constitution of the Seas”

    The law of the sea emerged from the Hague Conference in 1930. However, the structure of marine governance gradually came to be defined in the 1980s, with the adoption of the United Nations Convention on the Law of the Sea (UNCLOS) in 1982.

    UNOC 3 is a direct offshoot of this convention: discussions on sustainable ocean management stem from the limitations of this founding text, often referred to as the “Constitution of the Seas”.

    UNCLOS was adopted in December 1982 at Montego Bay, Jamaica, and came into force in November 1994, after a lengthy process of international negotiations culminating in ratification by 60 states. The discussions, which took place amid a crisis in multilateralism, initially focused on the interests of developing countries, especially coastal states. The United States managed to exert its influence in this arena without ever officially adopting the Convention. Since then, the convention has been a pillar of marine governance.

    It established new institutions, including the International Seabed Authority, entrusted with the responsibility of regulating the exploitation of mineral resources on the seabed in areas that fall outside the scope of national jurisdiction. UNCLOS is the source of nearly all international case law on the subject.

    Although the convention did define maritime areas and regulate their exploitation, new challenges quickly emerged: on the one hand, the Convention was essentially rendered meaningless by the eleven-year delay between its adoption and implementation. On the other hand, the text also became obsolete due to new developments in the use of the seas, particularly technological advances in fishing and seabed exploitation.

    The early 1990s marked a turning point in the traditional maritime legal order. The management of the seas and oceans came to be viewed within an environmental perspective, a process that was driven by major international conferences and declarations such as the Rio Declaration (1992), the Millennium Declaration (2000), and the Rio+20 Summit (2012). These resulted in the 2030 Agenda and the Sustainable Development Goals (SDGs), the UN’s 17 goals aimed at protecting the planet (with SDG 14, “Life Below Water”, directly addressing issues related to the oceans) and the world’s population by 2030.





    The United Nations Conference on Environment and Development (UNCED, or Earth Summit), held in Rio de Janeiro, Brazil, in 1992, ushered in the era of “sustainable development” and, thanks to scientific discoveries made in the previous decade, helped link environmental and maritime issues.

    From 2008 to 2015, environmental issues became more important as evidenced by the regular adoption of environmental and climate resolutions.

    A shift in UN language

    Biodiversity and the sustainable use of the oceans (SDG 14) are the two core themes that have become recurring topics on the international agenda since 2015, with ocean-related issues now including items like acidification, plastic pollution and the decline of marine biodiversity.

    The United Nations General Assembly resolution on oceans and the law of the sea (LOS) is a particularly useful tool for tracking this evolution: drafted annually since 1984, the resolution has covered all aspects of the United Nations maritime regime while reflecting new issues and concerns.

    Some environmental terms were initially absent from the text but have become more prevalent since the 2000s.

    This evolution is also reflected in the choice of words.

    While LOS resolutions from 1984 to 1995 focused mainly on the implementation of the treaty and the economic exploitation of marine resources, more recent resolutions have used terms related to sustainability, ecosystems, and maritime issues.

    Toward a new law of the oceans?

    As awareness of the issues surrounding the oceans and their link to climate change has grown, the oceans gradually became a global “final frontier” in terms of knowledge.

    The types of stakeholders involved in ocean issues have also changed. The expansion of the ocean agenda has been driven by a more “environmentalist” orientation, with scientific communities and environmental NGOs standing at the forefront of this battle. This approach, which represents a shift away from a monopoly held by international law and legal practitioners, clearly is a positive development.

    However, marine governance has so far relied mainly on non-binding declaratory measures (such as the SDGs) and remains ineffective. A cycle of legal consolidation toward a “new law of the oceans” therefore appears to be underway, and the challenge is now to supplement international maritime law with a new set of binding agreements.

    Of these agreements, the BBNJ is arguably the most ambitious: since 2004, negotiators have been working toward filling the gaps of the United Nations Convention on the Law of the Sea (UNCLOS) by creating an instrument on marine biodiversity in areas beyond national jurisdiction.

    The agreement addresses two major concerns for states: sovereignty and the equitable distribution of resources.

    Adopted in 2023, this historic agreement has yet to enter into force. Sixty ratifications are required; to date, only 29 states have ratified the treaty (including France in February 2025, editor’s note).

    The BBNJ process is therefore at a crossroads. The priority today is not to make new commitments or waste time on complicated high-level declarations, but to address concrete and urgent issues of ocean management – such as the frantic quest for critical minerals launched in the context of the Sino-American rivalry, exemplified by Donald Trump’s signing of a presidential decree in April 2025 allowing seabed mining, a decision that violates the International Seabed Authority’s well-established rules on the exploitation of these deep-sea resources.

    At a time when U.S. unilateralism is leading to a policy of fait accompli, the UNOC 3 should, more than anything and within the framework of multilateralism, consolidate the existing obligations regarding the protection and sustainability of the oceans.

    Kevin Parthenay is a member of the Institut Universitaire de France (IUF).

    Rafael Mesquita does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.

    ref. From sovereignty to sustainability: a brief history of ocean governance – https://theconversation.com/from-sovereignty-to-sustainability-a-brief-history-of-ocean-governance-258200


  • MIL-OSI Global: Development finance in a post-aid world: the case for country platforms

    Source: The Conversation – Africa – By Richard Calland, Emeritus Associate Professor in Public Law, UCT. Visiting Adjunct Professor, WITS School of Governance; Director, Africa Programme, University of Cambridge Institute for Sustainability Leadership, University of Cambridge

    With the Trump administration slashing US Agency for International Development budgets and European nations shifting overseas development aid budgets to bolster defence spending, the world has entered a “post-aid era”.

    But there is an opportunity to recast development finance as strategic investment: “country platforms”.

    Country platforms are government-led, nationally owned mechanisms that bring together a country’s climate priorities, investment needs and reform agenda, and align them with the interests of development partners, private investors and implementing agencies. They function as a strategic hub: convening actors, coordinating funding, and curating pipelines of projects for investment.

    Think of them as the opposite of donor-driven fragmentation. Instead of dozens of disconnected projects driven by external priorities, a country platform enables governments to set the agenda and direct finance to where it is needed most. That could be renewable energy, climate-smart agriculture, resilient infrastructure, or nature-based solutions.

    Country platforms are a current fad. They were the talk of the town at the 2025 Spring meetings of multilateral development banks in Washington DC. Will they quickly fade as the next big new idea comes into view? Or can they escape the limitations and failings of the finance and development aid ecosystem?

    The Independent High Level Expert Group on Climate Finance, on which I serve, is striving to find new ways to ramp up finance – both public and private – in quality and quantity. I agree with those who argue that country platforms could be the innovation that unlocks the capital urgently needed to tackle climate overshoot and buttress economic development.

    The model is already being tested. More than ten countries have launched their platforms, and more are in the pipeline.

    For African countries, the opportunity could not be more timely. African governments are racing to deliver their Nationally Determined Contributions. These are the commitments they’ve made to reduce their greenhouse gas emissions as part of climate change mitigation targets set out in the Paris Agreement. Implementing these plans is often being done under severe fiscal constraints.

    At the same time global capital is looking for investment opportunities. But it needs to be convinced that the rewards will outweigh the risks.

    Where it’s being tested

    In Africa, South Africa’s Just Energy Transition Partnership has demonstrated both the potential and the complexity of a country platform. Egypt and Senegal also have country platforms at different stages of implementation. Kenya and Nigeria are exploring similar mechanisms. The African Union’s Climate Change and Resilient Development Strategy calls for country platforms across the continent.

    New entrants can learn from countries that started first.

    But country platforms come in different shapes and sizes according to the context.

    Another promising example is emerging through Mission 300, an initiative of the World Bank and African Development Bank, working with partners like The Rockefeller Foundation, Global Energy Alliance for People and Planet, and Sustainable Energy for All. It aims to connect 300 million people to clean electricity by 2030.

    Central to this initiative are Compact Delivery and Monitoring Units. These are essentially country platforms anchored in electrification. They reflect how a well-structured country platform can make an impact. Twelve African countries are already moving in this direction. All announced their Mission 300 compacts at the Africa Heads of State Summit in Tanzania.

    This growing cohort reflects a continental commitment to putting energy-driven country platforms at the heart of Africa’s development architecture.

    Why now – and why Africa?

    A well-functioning country platform can help in a number of ways.

    Firstly, it can give the political and economic leadership a clear goal. The platform can survive elections and show stability, certainty and transparency to the investment world.

    Secondly, national ownership and strategic alignment can reduce risk and build confidence. That would encourage investment.

    Thirdly, it builds trust among development partners and investors through clear priorities, transparency, and national ownership.

    Fourthly, it moves beyond isolated pilot projects to system-level transformation – meaning structural change. The transition in one sector, energy for example, creates new value chains that create more, better and safer jobs. Country platforms put African governments in charge of their own economic development, not as passive recipients of climate finance.

    The country sets its investment priorities and then the match-making with international climate finance can begin.

    Making it work: what’s needed

    Developing the data on which a country bases its investment and development plans, and blending it with fiscal, climate and nature data, is complex. For this reason, country platforms require investment in institutional capacity, cross-ministerial collaboration and strong coordination between finance ministries, environment agencies and economic planners – and, especially, in leadership capability.

    African countries must take charge of this capacity and capability acceleration.

    Second, development partners can respond by providing money as well as supporting African leadership, aligning with national strategies, and being willing to co-design mechanisms that meet both investor expectations and local realities.

    Capacity is especially crucial given the scale of Africa’s needs. According to the African Development Bank, Africa will require over US$200 billion annually by 2030 to meet its climate goals. Donor aid will provide only a fraction of this. It will require smart, coordinated investment and careful debt management. Country platforms provide the structure to govern the process.

    Seizing the opportunity

    Country platforms represent one of the most promising innovations in climate and development finance architecture. Properly designed and led, they offer African countries the opportunity to take ownership of their climate and development futures – on their own terms.

    Country platforms could be the “buckle” that finally enables the supply and demand sides of climate finance to come together. It will require commitment, strategic and technical capability, and, above all, smart leadership.

    Richard Calland works for the University of Cambridge Institute for Sustainability Leadership. He is also an Emeritus Associate Professor at the University of Cape Town and an Adjunct Visiting Professor at the University of the Witwatersrand School of Governance. He serves on the Advisory Council of the Council for the Advancement of the South African Constitution, chairs the Board of Sustainability Education and is a member of the Board of Chapter Zero Southern Africa.

    ref. Development finance in a post-aid world: the case for country platforms – https://theconversation.com/development-finance-in-a-post-aid-world-the-case-for-country-platforms-257994


  • MIL-OSI Global: Extreme weather’s true damage cost is often a mystery – that’s a problem for understanding storm risk, but it can be fixed

    Source: The Conversation – USA – By John Nielsen-Gammon, Regents Professor of Atmospheric Sciences, Texas A&M University

    Hail can be destructive, yet the cost of the damage often isn’t publicly tracked. NOAA/NSSL

    On Jan. 5, 2025, at about 2:35 in the afternoon, the first severe hailstorm of the season dropped quarter-size hail in Chatham, Mississippi. According to the federal storm events database, there were no injuries, but it caused $10,000 in property damage.

    How do we know the storm caused $10,000 in damage? We don’t.

    That estimate is probably a best guess from someone whose primary job is weather forecasting. Yet these guesses, and thousands like them, form the foundation for publicly available tallies of the costs of severe weather.

    If the damage estimates from hailstorms are consistently lower in one county than the next, potential property buyers might think it’s because there’s less risk of hailstorms. Instead, it might just be because different people are making the estimates.

    Hail damage in Dallas in June 2012.
    Rondo Estrello/Flickr, CC BY-SA

    We are atmospheric scientists at Texas A&M University who lead the Office of the Texas State Climatologist. Through our involvement in state-level planning for weather-related disasters, we have seen county-scale patterns of storm damage over the past 20 years that just didn’t make sense. So, we decided to dig deeper.

    We looked at storm event reports for a mix of seven urban and rural counties in southeast Texas, with populations ranging from 50,000 to 5 million. We included all reported types of extreme weather. We also talked with people from the two National Weather Service offices that cover the area.

    Storm damage investigations vary widely

    Typically, two specific types of extreme weather receive special attention.

    After a tornado, the National Weather Service conducts an on-site damage survey, examining its track and destruction. That survey forms the basis for the official estimate of a tornado’s strength on the enhanced Fujita scale. Weather Service staff are able to make decent damage cost estimates from knowledge of home values in the area.

    They also investigate flash flood damage in detail, and loss information is available from the National Flood Insurance Program, the main source of flood insurance for U.S. homes.

    Tornadoes in May 2025 destroyed homes in communities in several states, including London, Ky.
    AP Photo/Timothy D. Easley

    Most other losses from extreme weather are privately insured, if they’re insured at all.

    Insured loss information is collected by reinsurance companies – the companies that insure the insurance companies – and gets tabulated for major events. Insurance companies use their own detailed information to try to make better decisions on rates than their competitors do, so event-based loss data by county from insurance companies isn’t readily available.

    Losing billion-dollar disaster data

    There’s one big window into how disaster damage has changed over the years in the U.S.

    The National Oceanic and Atmospheric Administration, or NOAA, compiled information for major disasters, including insured losses by state. Bulk data won’t tell communities or counties about their specific risk, but it enabled NOAA to calculate overall damage estimates, which it released as its billion-dollar disasters list.

    From that program, we know that the number and cost of billion-dollar disasters in the United States has increased dramatically in recent years. News articles and even scientific papers often point to climate change as the primary culprit, but a much larger driver has been the increasing number and value of buildings and other types of infrastructure, particularly along hurricane-prone coasts.

    Critics in the past year called for more transparency and vetting of the procedures used to estimate billion-dollar disasters. But that’s not going to happen, because NOAA in May 2025 stopped making billion-dollar disaster estimates and retired its user interface.

    Previous estimates can still be retrieved from NOAA’s online data archive, but by shutting down that program, the window into current and future disaster losses and insurance claims is now closed.

    Emergency managers at the county level also make local damage estimates, but the resources they have available vary widely. They may estimate damages only when the total might be large enough to trigger a disaster declaration that makes relief funds available from the federal government.

    Patching together very rough estimates

    Without insurance data or county estimates, the local offices of the National Weather Service are on their own to estimate losses.

    There is no standard operating procedure that every office must follow. One office might choose to simply not provide damage estimates for any hailstorms because the staff doesn’t see how it could come up with accurate values. Others may make estimates, but with varying methods.

    The result is a patchwork of damage estimates. Accurate values are more likely for rare events that cause extensive damage. Loss estimates from more frequent events that don’t reach a high damage threshold are generally far less reliable.

    The number of severe hail reports in southeast Texas listed in the National Centers for Environmental Information’s storm events database is strongly correlated with population. The county with the most reports and greatest detail in those reports is home to Houston. Hailstorms in the three easternmost counties are rarely associated with damage estimates.
    John Nielsen-Gammon and B.J. Baule

    Do you want to look at local damage trends? Forget about it. For most extreme weather events, estimation methods vary over time and are not documented.

    Do you want to direct funding to help communities improve resilience to natural disasters where the need is greatest? Forget about it. The places experiencing the largest per capita damages depend not just on actual damages but on the different practices of local National Weather Service offices.

    Are you moving to a location that might be vulnerable to extreme weather? Companies are starting to provide localized risk estimates through real estate websites, but the algorithms tend to be proprietary, and there’s no independent validation.

    4 steps to improve disaster data

    We believe a few fixes could make NOAA’s storm events database and the corresponding values in the larger SHELDUS database, managed by Arizona State University, more reliable. Both databases include county-level disasters and loss estimates for some of those disasters.

    First, the National Weather Service could develop standard procedures for local offices for estimating disaster damages.

    Second, additional state support could encourage local emergency managers to make concrete damage estimates from individual events and share them with the National Weather Service. The local emergency manager generally knows the extent of damage much better than a forecaster sitting in an office a few counties away.

    Third, state or federal governments and insurance companies can agree to make public the aggregate loss information at the county level or other scale that doesn’t jeopardize the privacy of their policyholders. If all companies provide this data, there is no competitive disadvantage for doing so.

    Fourth, NOAA could create a small “tiger team” of damage specialists to make well-informed, consistent damage estimates of larger events and train local offices on how to handle the smaller stuff.

    With these processes in place, the U.S. wouldn’t need a billion-dollar disasters program anymore. We’d have reliable information on all the disasters.

    John Nielsen-Gammon receives funding from the National Oceanic and Atmospheric Administration and the State of Texas.

    William Baule receives funding from NOAA, the State of Texas, & the Austin Community Foundation.

    ref. Extreme weather’s true damage cost is often a mystery – that’s a problem for understanding storm risk, but it can be fixed – https://theconversation.com/extreme-weathers-true-damage-cost-is-often-a-mystery-thats-a-problem-for-understanding-storm-risk-but-it-can-be-fixed-257105

    MIL OSI – Global Reports

  • MIL-OSI Global: ‘Loyal to the oil’ – how religion and striking it rich shape Canada’s hockey fandom

    Source: The Conversation – USA – By Cody Musselman, Preceptor, College Writing Program, Harvard University

    Some Edmonton Oilers fans are pinning their Stanley Cup hopes on captain Connor McDavid. AP Photo/Rebecca Blackwell

    Déjà vu is a common occurrence in the world of sports, and the Edmonton Oilers are no strangers to repeat matchups. The Canadian team faced off against the New York Islanders in both 1983 and ’84 for hockey’s biggest prize, the Stanley Cup. In this year’s National Hockey League finals, the Oilers will try to avenge their Game 7 loss to the Florida Panthers in 2024.

    Edmontonians who have been “loyal to the oil,” as fans say, have been waiting for redemption ever since. The Trump administration’s threats toward its northern neighbor have fueled a wave of nationalism, making even more fans eager for a Canadian team to win the Stanley Cup – which has not happened since 1993. With hopes pinned to Edmonton, the finals also bring renewed attention to two of Canada’s biggest exports: hockey and oil.

    Novelist Leslie McFarlane once observed that for Canadians, “hockey is more than a game; it is almost a religion.” Prayers and superstitions abound, from wearing special clothing to fans averting their eyes during penalty shots.

    The Oilers also evoke another aspect of Canadian society that, for some, has almost religious importance: resource extraction. In American and Canadian culture, oil has long been entangled with religion. It’s a national blessing from God, in some people’s eyes, and a means to the “good life” for those who persevere to find it. For many people in communities whose economies center around resource extraction, the possibility of success is valued above its environmental risks.

    We are scholars of religion who study sports and how oil shapes society, or petro-cultures. The Edmonton Oilers showcase a worldview in which triumph, luck and rugged work pay off – beliefs at home on the ice or in the oil field. The Stanley Cup Final offers a glimpse into how the oil industry has helped shape the religious fervor around Canada’s favorite sport.

    Edmonton Oilers fan Dale Steil’s boots before the team’s playoff game against the Los Angeles Kings on April 26, 2024.
    AP Photo/Tony Gutierrez

    Boomtown

    Edmonton is the capital of Alberta, a province known for its massive oil, gas and oil sands reserves. With five refineries producing an average of 3.8 million barrels a day, oil and gas is Alberta’s biggest industry – and a way of life.

    This is especially true in Edmonton, known as the “Oil Capital of Canada.” Here, oil not only structures the local economy, but it also shapes identities, architecture and everyday experiences.

    Visit the West Edmonton Mall, for example, and you’ll see a statue of three oil workers drilling, reminding shoppers that petroleum is the bedrock of their commerce. Visit the Canadian Energy Museum to learn how oil and gas have remade the region since the late 1940s, and glimpse items such as engraved hard hats and the “Oil Patch Kid,” a spin on the iconic “Cabbage Patch Kids” toys. Tour the Greater Edmonton area and see how pump jacks dot the horizon. Oil is everywhere, shaping futures, fortunes and possibility.

    Pump jacks near Acme, Alberta – a regular sight.
    Michael Interisano/Design Pics Editorial/Universal Images Group via Getty Images

    Set against this backdrop, the Oilers’ name is unsurprising. It is not uncommon, after all, to name teams after local industries. Football’s Pittsburgh Steelers pay homage to the steel mills that once employed much of the team’s fan base. The Tennessee Oilers were originally the Houston Oilers, prompting other Texas teams such as the XFL’s Roughnecks to follow suit. Further north, the name of basketball’s Detroit Pistons references car manufacturing.

    Teams with industry-inspired names play double duty, venerating both a place and a trade. Some fans are not only cheering for the home team, but also cheering for themselves – affirming that their industry and their labor matter.

    Ales Hemsky of the Edmonton Oilers skates out from under the oil derrick for a game at Rexall Place in 2008 in Edmonton, Alberta.
    Andy Devlin/NHLI via Getty Images

    In a TikTok video from last year’s Stanley Cup playoffs, a man overcome with joy at the Oilers’ victory over the Dallas Stars claps his hands and hops around his living room. The caption reads, “My first-generation immigrant oil rig working Filipino father who has never played a second of hockey in his life … happily cheering for the Oilers advancing in the playoffs. Better Bring that cup home for him oily boys.” He appears to be cheering for the Oilers not because they are a hockey team, but because they are an oil team.

    And indeed, the Oilers are an oily team. The Oilers’ Oilfield Network, for example, describes itself as “exclusively promot[ing] companies in the Oil and Gas industry,” allowing leaders to connect “through the power of Oilers hockey.”

    The Oilers’ connection with industry is further underscored by their logos. The current one features a simple drop of oil, but past designs featured machinery gears and an oil worker pulling a lever shaped like a hockey stick.

    Simply put, “Edmonton is all oil,” Oilers goaltender Stuart Skinner shared after defeating the Dallas Stars to win the 2025 Western Conference Final.

    Liquid gold

    There is a long tradition of pairing hockey with oil – and with Canada itself.

    After the British North America Act founded Canada in 1867, the new nation searched for a distinctive identity through sport and other cultural forms.

    Enter hockey. The winter game evolved in Canada from the Gaelic game of “shinty” and the First Nations’ game of lacrosse and soon became part of the glue holding the nation together.

    Ever since, media, politicians, sports groups and major industries have helped fuel fan fervor and promoted hockey as integral to Canada’s rugged frontiersman character.

    The Montreal Amateur Athletic Association posing with the first Stanley Cup in 1893.
    Bruce Bennett Studios via Getty Images Studios/Getty Images

    In 1936, Imperial Oil, one of Canada’s largest petroleum companies, began sponsoring Hockey Night in Canada, a national radio show that reached millions each week. Several years later, Imperial Oil played a major role in bringing the show to television, where the Imperial Oil Choir sang the theme song. Imperial Oil and its gas stations, Esso, also sponsored youth hockey programs across the nation. In 2019, Imperial inked a deal to be the NHL’s “official retail fuel” in Canada.

    Striking it rich

    Connections between hockey and industry in Alberta’s oil country aren’t just about sponsorships. Central to both cultures is the idea of luck – historically, one of the many things it takes to extract fossil fuels. “Striking it rich” in the oil fields has become entangled with the idea of divine providence, especially among the many Christian laborers.

    Philosopher Terra Schwerin Rowe has written about North America’s “petro-theology,” explaining how many perceive oil as a free-flowing gift from God meant to be taken from the Earth – if you can find it.

    A Canadian oil worker kisses his wife and daughter goodbye as he sets off to work in northern Alberta in the 1950s.
    John Chillingworth/Getty Images

    Oil represents fortune, and who wouldn’t want to borrow a bit of that for their team? Sports are thrilling because sometimes talent, team chemistry and the home-field advantage still lose to a stroke of good luck. Oil culture pairs the idea of divine favor with an insistence on rough-and-tumble endurance, similar to hockey.

    Sometimes if you don’t strike it rich the first time, you have to keep on drilling. The next well may be the one to bring wealth. Oil prospectors know this, but so do sports fans who maintain hope season to season.

    Soon fans from around the world will join Edmonton locals in rooting for the Oilers. They’ll throw their hands up in despair if captain Connor McDavid enters the “sin bin” – the penalty box – or dance in celebration to the Oilers’ theme, “La Bamba.” Some of them will be cheering, too, for oil.

    This is an updated version of an article originally published on June 19, 2024.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. ‘Loyal to the oil’ – how religion and striking it rich shape Canada’s hockey fandom – https://theconversation.com/loyal-to-the-oil-how-religion-and-striking-it-rich-shape-canadas-hockey-fandom-258024


  • MIL-OSI Global: What a sunny van Gogh painting of ‘The Sower’ tells us about Pope Leo’s message of hope

    Source: The Conversation – USA – By Virginia Raguin, Distinguished Professor of Humanities Emerita, College of the Holy Cross

    Vincent van Gogh’s ‘Sower at Sunset’ painting. Vincent van Gogh/ Kröller-Müller Museum via Wikimedia Commons, CC BY-NC-SA

    In his first general audience in Rome, Pope Leo XIV referred to Vincent van Gogh’s painting “Sower at Sunset” and called it a symbol of hope. A brilliant setting sun illuminates a field as a farmer walks toward the right, sowing seeds.

    Leo referred to Christ’s Parable of the Sower, a story in the Gospel that speaks to the need to do good works. “Every word of the Gospel is like a seed sown in the soil of our lives,” he said, and highlighted that the soil is not only our heart, “but also the world, the community, the church.”

    He noted that “behind the sower, van Gogh painted the grain already ripe,” and Leo called it an image of hope which shows that somehow the seed has borne fruit.

    Van Gogh painted “Sower at Sunset” in 1888, when he was living in Arles in southern France. At the time, he was creating art alongside his friend Paul Gauguin and feeling very happy about the future. The painting reflects his optimism.

    Van Gogh’s inspiration

    In November 1888, van Gogh wrote to his brother Theo, in whom he frequently confided, about “Sower at Sunset.” He described its beautiful colors: “Immense lemon-yellow disc for the sun. Green-yellow sky with pink clouds. The field is violet, the sower and the tree Prussian Blue.”

    ‘The Sower,’ by Jean-François Millet.
    Museum of Fine Arts, Boston via Wikimedia Commons

    Van Gogh’s painting was inspired by French artist Jean-François Millet’s 1860 painting, “The Sower.” But he transformed Millet’s composition, in which a dark, isolated figure dominates, and deliberately set the sower in the midst of a landscape transformed by the sun.

    Other artists, including the Norwegian Emanuel Vigeland, explicitly depicted the Parable of the Sower. Vigeland’s series of stained-glass windows in an Oslo church explains each passage’s meaning. As the sower works, some seeds fall by the wayside and the birds immediately eat them, indicating those who hear the word of God but do not listen.

    Norwegian artist Emanuel Vigeland’s ‘Parable of the Sower,’ 1917-19, Lutheran church of Borgestad, near Oslo, Norway.
    Virginia Raguin

    Some seeds fall on stony ground and cannot take root, a symbol of those with little tenacity. Others fall among thorns and are choked. Vigeland juxtaposed a dramatic image of a miser counting piles of money, indicating how the man’s life has become choked by desire for material gain.

    The final passage of the parable states that some seeds fell on good ground and yielded a hundredfold. Vigeland’s depiction shows an image of an abundant harvest of grain next to a man seated on the ground and cradling a child in his lap.

    What it says about Leo

    Van Gogh’s painting corresponds to many of the ideas the new pope expressed in the first days of his papacy. Leo observed: “In the center of the painting is the sun, not the sower, [which reminds us that] it is God who moves history, even if he sometimes seems absent or distant. It is the sun that warms the clods of the earth and ripens the seed.”

    The theme of the dignity of labor is also inherent in the image of the sower deeply engrossed in physical work, which relates to the pope’s choice of his name. The pope stated that he took the name Leo XIV “mainly because Pope Leo XIII in his historic encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution.” Leo XIII was addressing the economic injustice of meager rewards for workers even as owners reaped great profits from the Industrial Revolution.

    The pope saw van Gogh’s image of the sower, like Vigeland’s, as a message of hope. That message, to him, fits with the theme of hope of the Jubilee Year proclaimed by Leo’s predecessor, Francis. Leo also expressed hope that people listening to God would embrace service to others.

    Virginia Raguin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What a sunny van Gogh painting of ‘The Sower’ tells us about Pope Leo’s message of hope – https://theconversation.com/what-a-sunny-van-gogh-painting-of-the-sower-tells-us-about-pope-leos-message-of-hope-258040


  • MIL-OSI Global: 1 in 4 children suffers from chronic pain − school nurses could be key to helping them manage it

    Source: The Conversation – USA – By Natoshia R. Cunningham, Associate Professor of Family Medicine, Michigan State University

    Mental health approaches beat medication in treating children’s chronic pain. andresr/E+ via Getty Images

    Joint pain, headaches, stomachaches, fibromyalgia – the list sounds like an inventory of ailments that might plague people as they age. Yet these are chronic, painful conditions that frequently affect children.

    People often imagine childhood as a time when the body functions at its best, but about 25% of children experience chronic pain. I was one of them: Starting in elementary school, migraines incapacitated me for hours at a stretch with excruciating pain that made it impossible to go to school, much less talk to friends or have fun.

    As a licensed pediatric pain psychologist, I develop and test psychological care strategies for children who experience chronic pain. Effective treatments exist, but they are often not accessible, particularly for families that don’t live near major medical centers or have adequate health insurance. My colleagues and I are working to change that by training school nurses and other community health providers to deliver such care.

    More than growing pains

    Chronic pain in children is not only widespread but also persistent. Many continue to experience symptoms for years on end. For example, one-third of children with abdominal pain experience symptoms that last into adulthood. Children with chronic pain are also more likely to come from families that have less income, have greater health care barriers, report more safety concerns about their environment and experience greater exposure to violence than those without chronic pain.

    These conditions interfere with daily life. Children with chronic pain miss about 1 in 5 days of school. Consequently, their academics suffer and they are less likely to graduate from high school. Mental health conditions such as anxiety and depression are common.

    Experiencing chronic pain in childhood also puts people at an increased risk for opioid use in adulthood, signaling a major public health concern.

    Chronic pain can derail a child’s daily life.

    Behavioral therapy for pain

    Many adults think nothing of taking medicines such as ibuprofen or acetaminophen for minor aches and pains, but there’s little evidence that pharmacologic treatments work best for children’s chronic pain. Research suggests that such medicines are insufficient for helping children get back to their routines and activities, such as school, sports and hanging out with friends.

    The most studied and perhaps most effective approach for treating chronic pain in children is cognitive behavioral therapy. This modality involves teaching children how pain works in the brain, and also training them on problem solving, relaxation methods such as deep breathing, challenging negative thoughts about pain, and pacing activities to avoid pain flares. Unlike pain medications, which wear off after a few hours, research suggests that cognitive behavioral therapy can have a lasting effect. Kids can get back to doing things they need and want to do, and they often feel better too over the long term.

    My colleagues and I – along with other researchers – have developed and tested cognitive behavioral approaches for children with chronic painful conditions such as functional abdominal pain and childhood-onset lupus. These interventions not only get kids back to their daily lives but also reduce symptoms of anxiety and depression that often accompany children’s pain syndromes.

    To be sure, providing interventions in the form of web-based tools or apps can improve access for children who can’t see a provider. However, we have found that children and their families are more likely to complete the course of treatment with a provider, and that automated self-management tools can complement but not replace care delivered by a provider. In fact, when cognitive behavioral therapy for children’s chronic pain is delivered exclusively through an online tool, only a third of children complete treatment.

    How community providers can fill the gap

    Despite the proven benefits of psychological therapies for children’s pain, few providers are trained to use them. That’s one of the most common barriers to care.

    One potentially untapped resource is school nurses and other specialists who are often the first point of contact for a child with chronic pain, such as social workers and school counselors. Programs already exist to train school providers, including school nurses, in managing children’s mental health, but few of them address chronic pain.

    To fill this gap, my colleagues and I have developed a program to train school nurses and other community health experts to teach children cognitive and behavioral strategies to manage their chronic pain. So far, we have trained approximately 100 school providers across Michigan, who report that the training improves pain symptoms and helps keep children in school. We are also expanding the project to address trauma and other mental health symptoms that commonly occur with chronic pain, and to support providers in discouraging substance use to manage pain in these children.

    Our work suggests that this approach can empower providers to reach children in rural communities and other settings that lack access to care. By training more boots on the ground, we hope to provide children with the pain management tools they need to grow into healthy and thriving adults.

    Natoshia R. Cunningham receives grant funding from the US Department of Defense, the Michigan Health Endowment Fund, and the Childhood Arthritis and Rheumatology Research Alliance-Arthritis Foundation. She was previously funded by the National Institutes of Health, and the Blue Cross Blue Shield Foundation of Michigan.

    ref. 1 in 4 children suffers from chronic pain − school nurses could be key to helping them manage it – https://theconversation.com/1-in-4-children-suffers-from-chronic-pain-school-nurses-could-be-key-to-helping-them-manage-it-251220


  • MIL-OSI Global: What is vibe coding? A computer scientist explains what it means to have AI write computer code − and what risks that can entail

    Source: The Conversation – USA – By Chetan Jaiswal, Associate Professor of Computer Science, Quinnipiac University

    Large language model AIs can generate software code based on your prompts. J Studios/DigitalVision via Getty Images

    Whether you’re streaming a show, paying bills online or sending an email, each of these actions relies on computer programs that run behind the scenes. The process of writing computer programs is known as coding. Until recently, most computer code was written, at least originally, by human beings. But with the advent of generative artificial intelligence, that has begun to change.

    Now, just as you can ask ChatGPT to spin up a recipe for a favorite dish or write a sonnet in the style of Lord Byron, you can ask generative AI tools to write computer code for you. Andrej Karpathy, an OpenAI co-founder who previously led AI efforts at Tesla, recently termed this “vibe coding.”

    For complete beginners or nontechnical dreamers, writing code based on vibes – feelings rather than explicitly defined information – could feel like a superpower. You don’t need to master programming languages or complex data structures. A simple natural language prompt will do the trick.

    How it works

    Vibe coding leans on standard patterns of technical language, which AI systems use to piece together original code from their training data. Any beginner can use an AI assistant such as GitHub Copilot or Cursor Chat, put in a few prompts, and let the system get to work. Here’s an example:

    “Create a lively and interactive visual experience that reacts to music, user interaction or real-time data. Your animation should include smooth transitions and colorful and lively visuals with an engaging flow in the experience. The animation should feel organic and responsive to the music, user interaction or live data and facilitate an experience that is immersive and captivating. Complete this project using JavaScript or React, and allow for easy customization to set the mood for other experiences.”

    But AI tools do this without any real grasp of specific rules, edge cases or security requirements for the software in question. This is a far cry from the processes behind developing production-grade software, which must balance trade-offs between product requirements, speed, scalability, sustainability and security. Skilled engineers write and review the code, run tests and establish safety barriers before going live.

    While the lack of a structured process saves time and lowers the skill required to code, there are trade-offs. With vibe coding, most of these stress-testing practices go out the window, leaving systems vulnerable to malicious attacks and leaks of personal data.

    And there’s no easy fix: If you don’t understand every – or any – line of code that your AI agent writes, you can’t repair the code when it breaks. Or worse, as some experts have pointed out, you won’t notice when it’s silently failing.
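    The silent-failure risk described above can be made concrete with a small, hypothetical sketch. The function names and scenario below are illustrative only – they are not from the article or any real AI assistant’s output – but they show the kind of code that “works” on clean input while quietly producing wrong answers on edge cases a reviewer would catch:

    ```javascript
    // Hypothetical AI-generated helper: sums order totals from a list of records.
    // On clean input it behaves correctly, but a record missing its "amount"
    // field yields undefined + number -> NaN, which poisons the sum without
    // ever throwing an error -- a silent failure.
    function sumOrderTotals(orders) {
      let total = 0;
      for (const order of orders) {
        total += order.amount; // no validation: undefined becomes NaN silently
      }
      return total;
    }

    // A defensive version a reviewer might insist on: validate each amount
    // and fail loudly instead of returning a quietly wrong result.
    function sumOrderTotalsSafe(orders) {
      return orders.reduce((total, order) => {
        const amount = Number(order.amount);
        if (!Number.isFinite(amount)) {
          throw new TypeError(`Invalid amount in order: ${JSON.stringify(order)}`);
        }
        return total + amount;
      }, 0);
    }

    const clean = [{ amount: 10 }, { amount: 5 }];
    const dirty = [{ amount: 10 }, {}]; // one record is missing its amount

    console.log(sumOrderTotals(clean));     // 15
    console.log(sumOrderTotals(dirty));     // NaN -- no crash, just wrong
    console.log(sumOrderTotalsSafe(clean)); // 15
    ```

    The point is not the arithmetic but the failure mode: a user who cannot read the code has no way to notice that the second result is wrong, which is exactly the gap between generated code that looks right and production code that has been reviewed and tested.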

    The AI itself is not equipped to carry out this analysis either. It recognizes what “working” code usually looks like, but it cannot necessarily diagnose or fix deeper problems that the code might cause or exacerbate.

    IBM computer scientist Martin Keen explains the difference between AI programming and traditional programming.

    Why it matters

    Vibe coding could be just a flash-in-the-pan phenomenon that will fizzle before long, but it may also find deeper applications with seasoned programmers. The practice could help skilled software engineers and developers more quickly turn an idea into a viable prototype. It could also enable novice programmers or even amateur coders to experience the power of AI, perhaps motivating them to pursue the discipline more deeply.

    Vibe coding also may signal a shift that could make natural language a more viable tool for developing some computer programs. If so, it would echo early website editing systems known as WYSIWYG editors that promised designers “what you see is what you get,” or “drag-and-drop” website builders that made it easy for anyone with basic computer skills to launch a blog.

    For now, I don’t believe that vibe coding will replace experienced software engineers, developers or computer scientists. The discipline and the art are much more nuanced than what AI can handle, and the risks of passing off “vibe code” as legitimate software are too great.

    But as AI models improve and become more adept at incorporating context and accounting for risk, practices like vibe coding might cause the boundary between AI and human programmer to blur further.

    Chetan Jaiswal does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What is vibe coding? A computer scientist explains what it means to have AI write computer code − and what risks that can entail – https://theconversation.com/what-is-vibe-coding-a-computer-scientist-explains-what-it-means-to-have-ai-write-computer-code-and-what-risks-that-can-entail-257172


  • MIL-OSI Global: Your left and right brain hear language differently − a neuroscientist explains how

    Source: The Conversation – USA – By Hysell V. Oviedo, Assistant Professor of Biomedical Research, Washington University in St. Louis

    How you process language is influenced by how each side of your brain developed in early life. Peter Dazeley/The Image Bank via Getty Images

    Some of the most complex cognitive functions are possible because different sides of your brain control them. Chief among them is speech perception, the ability to interpret language. In people, the speech perception process is typically dominated by the left hemisphere.

    Your brain breaks apart fleeting streams of acoustic information into parallel channels – linguistic, emotional and musical – and acts as a biological multicore processor. Although scientists have recognized this division of cognitive labor for over 160 years, the mechanisms underpinning it remain poorly understood.

    Researchers know that distinct subgroups of neurons must be tuned to different frequencies and timing of sound. In recent decades, studies on animal models, especially in rodents, have confirmed that splitting sound processing across the brain is not uniquely human, opening the door to more closely dissecting how this occurs.

    Yet a central puzzle persists: What makes near-identical regions in opposite hemispheres of the brain process different types of information?

    Answering that question promises broader insight into how experience sculpts neural circuits during critical periods of early development, and why that process is disrupted in neurodevelopmental disorders.

    Timing is everything

    Sensory processing of sounds begins in the cochlea, a part of the inner ear where sound frequencies are converted into electricity and forwarded to the auditory cortex of the brain. Researchers believe that the division of labor across brain hemispheres required to recognize sound patterns begins in this region.

    For more than a decade, my work as a neuroscientist has focused on the auditory cortex. My lab has shown that mice process sound differently in the left and right hemispheres of their brains, and we have worked to tease apart the underlying circuitry.

    For example, we’ve found the left side of the brain has more focused, specialized connections that may help detect key features of speech, such as distinguishing one word from another. Meanwhile, the right side is more broadly connected, suited for processing melodies and the intonation of speech.

    Sound information moves through the cochlea to the brain.
    Jonathan E. Peelle, CC BY-SA

    We tackled the question of how these left-right differences in hearing develop in our latest work, and our results underscore the adage that timing is everything.

    We tracked how neural circuits in the left and right auditory cortex develop from early life to adulthood. To do this, we recorded electrical signals in mouse brains to observe how the auditory cortex matures and to see how sound experiences shape its structure.

    Surprisingly, we found that the right hemisphere consistently outpaced the left in development, showing more rapid growth and refinement. This suggests there are critical windows of development – brief periods when the brain is especially adaptive and sensitive to environmental sound – specific to each hemisphere that occur at different times.

    To test the consequences of this asynchrony, we exposed young mice to specific tones during these sensitive periods. In adulthood, we found that where sound is processed in their brains was permanently skewed. Animals that heard tones during the right hemisphere’s earlier critical window had an overrepresentation of those frequencies mapped in the right auditory cortex.

    Adding yet another layer of complexity, we found that these critical windows vary by sex. The right hemisphere critical window opens earlier in female mice, and the left hemisphere window opens just days later. In contrast, male mice had a very sensitive right hemisphere critical window, but no detectable window on the left. This points to the elusive role sex may play in brain plasticity.

    Our findings provide a new way to understand how different hemispheres of the brain process sound and why this might vary for different people. They also provide evidence that parallel areas of the brain are not interchangeable: the brain can encode the same sound in radically different ways, depending on when it occurs and which hemisphere is primed to receive it.

    Speech and neurodevelopment

    The division of labor between brain hemispheres is a hallmark of many human cognitive functions, especially language. This is often disrupted in neuropsychiatric conditions such as autism and schizophrenia.

    Reduced language information encoding in the left hemisphere is a strong indication of auditory hallucinations in schizophrenia. And a shift from left- to right-hemisphere language processing is characteristic of autism, where language development is often impaired.

    Children with certain neurodevelopmental conditions may have trouble processing speech.
    Towfiqu Ahamed/iStock via Getty Images Plus

    Strikingly, the right hemisphere of people with autism seems to respond earlier to sound than the left hemisphere, echoing the accelerated right-side maturation we saw in our study on mice. Our findings suggest that this early dominance of the right hemisphere in encoding sound information might amplify its control of auditory processing, deepening the imbalance between hemispheres.

    These insights deepen our understanding of how language-related areas in the brain typically develop and can help scientists design earlier and more targeted treatments to support early speech, especially for children with neurodevelopmental language disorders.

    Hysell V. Oviedo receives funding from the National Institutes of Health.

    ref. Your left and right brain hear language differently − a neuroscientist explains how – https://theconversation.com/your-left-and-right-brain-hear-language-differently-a-neuroscientist-explains-how-257436


  • MIL-OSI Global: Memories of the good parts of using drugs can keep people hooked − altering the neurons that store them could help treat addiction

    Source: The Conversation – USA – By Ana Clara Bobadilla, Assistant Professor of Biomedical Sciences, Colorado State University

    Your memories are likely stored in ensembles of neurons that fire together. PASIEKA/Science Photo Library via Getty Images

    Everyday human behavior is guided and shaped by the search for rewards. This includes eating tasty meals, drinking something refreshing, sexual activity and nurturing children. Many of these behaviors are needed for survival. But in some instances, this search for rewards can pose a significant threat to survival.

    People rely on memories of rewards to function and survive. Associated with positive experiences, these memories provide context for evaluating present and future choices. For example, if foods high in sugar are associated with a positive experience, this can reinforce the behavior of eating the food that provided the reward. Similarly, a flavorful meal at a specific restaurant increases the likelihood you’ll become a returning customer.

    A deeper understanding of how reward memories work and interact with each other is critical to informing the choices you make and to treating disorders where seeking rewards has become problematic. Eliminating all reward seeking would negatively affect behaviors essential for survival, such as eating and reproducing. But if you can specifically target reward memories linked to different drugs, this could help reduce their abuse.

    I am a behavioral neuroscientist studying addiction, and my team is interested in how reward memories are formed and processed in the brain. We study how memories linked to natural rewards such as food, water and sex differ from those linked with rewards from drugs such as fentanyl and cocaine.

    Understanding the differences between these types of rewards and how memories of different drugs interact may lead to more effective treatments for addiction.

    What is memory?

    To study reward memories, it is important to understand the neurobiology of memory, or how the brain remembers things.

    In 1904, evolutionary zoologist Richard Semon introduced the term engram to describe the physical representation of a memory – also called its trace – that forms in the brain after an experience. Later, psychologist Donald Hebb hypothesized that interconnected brain cells that are active at the same time during an experience form a physical ensemble that makes up a memory.

    In the past decade, neuroscientists have developed new tools that support the idea that neuronal ensembles, or small populations of brain cells that are activated at the same time, are likely the physical representation of memory. How new memories recruit neurons into ensembles is not fully understood, but the plasticity of neurons – their ability to change their connections with each other – seems to play a major role.

    Memories are physically stored in your brain.

    Research on neuronal ensembles has transformed how scientists understand learning and memory. Researchers can now create artificial memories, activate positive memories to counteract negative feelings, and alter how memories are linked. All these experiments on altering memory have been conducted on animal models, since the technology required to apply these techniques to humans is not yet available.

    To create artificial memories, for instance, researchers marked a neuronal ensemble associated with a specific environment A in genetically modified mice. They then activated those neurons while exposing the mice to a foot shock in a different environment B. Later, the mice showed increased freezing behavior in environment A, though they had never received a shock in that space. By activating the memory of environment A during the foot shock, the researchers created a false memory in the mice that the shock was associated with that space.

    Treating substance use disorders

    Neuronal ensembles hold untapped promise for the study and treatment of substance use disorders and other reward-related disorders. These include conditions that involve a reduced ability to experience reward, such as gambling disorder, eating disorders and depression.

    Natural rewards – food, water, sex and nurturing – induce pleasurable feelings that reinforce the behavior that elicits that reward. This is known as positive reinforcement, a strategy often used in everyday life; think training a dog with treats, or using sticker charts for potty training.

    Research has linked neuronal ensembles with both positive and negative experiences, including exploratory and social behaviors, fear, and feeding. In substance use disorders, a drug can induce both pleasant and unpleasant feelings. For example, cocaine induces an intense rush or high, but the crash as the drug wears off causes irritability and lethargy. These feelings reinforce drug use at the expense of essential behaviors that ensure survival, such as eating, sleeping or maintaining social networks and relationships.

    Neuronal ensembles may also play a causal role in the development of multiple aspects of substance use disorders, including drug taking, drug craving and seeking behaviors, increased sensitivity to certain drugs and relapse.

    How drug memory changes the brain

    Just as any memory is stored in the brain as a neuronal ensemble, drug memories are carried in specific neuronal ensembles and activated during drug-related behaviors.

    Fundamental questions remain about how neuronal ensembles encode drug-related memories. Because the processing centers for drug rewards and natural rewards mostly overlap in the brain, it is challenging to develop treatments that target only drug reward seeking. Emerging treatments for addiction, such as certain types of brain stimulation, are not specific enough to differentiate between drug or natural reward pathways.

    Discovering how particular drugs of abuse affect genes, cells and neuronal circuits can help researchers develop new treatments for substance use disorders without altering the natural reward-seeking behaviors essential for survival.

    The reward of drug use can be hard to disentangle from the rewards of eating, drinking and other activities necessary for survival.
    Kseniya Ovchinnikova/Moment via Getty Images

    For example, about 72% of people suffering from substance use disorders report using multiple substances, frequently together. To better understand how polysubstance use affects the brain, my team tags neurons active during drug-related behaviors in genetically modified mice. This allows us to map and compare the neurons carrying reward-related memories for one drug with the neurons associated with another drug. In this way, we can study how the brain represents and stores memories when mice are exposed to cocaine and fentanyl – two substances people in the U.S. are increasingly taking together – and how different brain regions communicate this information with each other.

    To dissect exactly how drugs of abuse hijack the brain’s natural reward system, my team is comparing how seeking different types of rewards changes the neurons carrying reward memories. For example, we have previously shown that the network of cells carrying the memory of seeking cocaine is mostly distinct from the one linked to seeking sugar.

    Based on this work, we are currently using fruit fly models to analyze the genetic activity of the neuronal ensemble linked to seeking cocaine. This will allow us to better identify which genes could be potential targets for reducing the activity of that neuronal ensemble and treating substance use disorders.

    Psychedelics and addiction

    Drug-related intrusive thoughts and fixed behavioral patterns – meaning actions that are repeatedly taken regardless of negative consequences – are common symptoms of substance abuse that lead to the formation of harmful neural pathways in the brain. Psychedelics may be able to help reform these pathways by triggering an overall “system reboot” of the brain.

    Several clinical trials point to the potential of psychedelics to treat tobacco, alcohol and opioid use disorders, with early results showing increased abstinence and reduced drug cravings.

    My lab is currently examining how psilocin – the active metabolite of the psychedelic psilocybin – affects the drug-related memories of mice. Our research focuses on two questions. First, can psilocin alter drug seeking and intake in fentanyl addiction? And second, what type of memory does psilocin create in the brain, and could it alter prior cocaine memories?

    Reward memories both help people survive and lead to substance use disorders. Delving into the intricate mechanisms of how the brain remembers rewards at the cellular and genetic levels can help researchers and doctors better treat addiction without altering the reward pathways needed for survival.

    Ana Clara Bobadilla receives funding from the National Institute on Drug Abuse and the Brain & Behavior Research Foundation.

    ref. Memories of the good parts of using drugs can keep people hooked − altering the neurons that store them could help treat addiction – https://theconversation.com/memories-of-the-good-parts-of-using-drugs-can-keep-people-hooked-altering-the-neurons-that-store-them-could-help-treat-addiction-245529


  • MIL-OSI Global: Google searches for information about cancer lead to targeted ads from alternative clinics

    Source: The Conversation – Canada – By Alessandro Marcon, Senior Research Associate at the Health Law Institute, University of Alberta

    Online searches for health information can pull up misleading ads. (S. Ghassimi), CC BY

    More than 80 per cent of online searches are now performed with Google. But there’s an insidious element to the world’s most popular search engine. As companies compete for the advertising spaces that accompany search query results, users seeking critical health information can be exposed to dangerous and exploitative misinformation.

    Read more: Why we fall for fake health information – and how it spreads faster than facts

    In 2024, North Americans overwhelmingly used Google for news and information on politics, celebrities, entertainment and topical events like natural disasters. Health-related queries are also popular: nearly 70 per cent of the Canadian public use online searches for health information.

    Google is the world’s most popular search engine.
    (Shutterstock)

    Online searches

    The phrases or questions contained in online searches serve as valuable data. They can inform epidemiological surveillance and provide insight into popular global and regional trends.

    These data also hold immense value for online marketing teams, tracking who is searching for what, where and when. Beyond search tracking, however, queries are now used to target online advertising, a reality that raises serious ethical, regulatory and public health issues.

    Before the internet, key advertising spaces existed in magazines and newspapers, on highway billboards and time slots between radio and television programming. Advertising is so lucrative that a 30-second time slot during the Super Bowl now costs upwards of US$8 million.

    Online, fixed advertising slots have been replaced by targeted advertisements that accompany search results, determined by the queries users enter.

    Highly coveted spots

    Like a Super Bowl ad, advertising on Google’s first page results is highly coveted.

    Obtaining these spaces requires companies to outbid one another for the ad spaces tied to particular search terms: an advertiser can purchase ad space from Google associated with a specific phrase or keyword.

    Companies with snack products, for example, may compete for their sponsored content to appear when individuals search for “Super Bowl party snacks,” “new chip flavours” or “chip and dip ideas.”

    As harmless and obvious — and perhaps even inevitable — as this marketing approach may seem, the practice is problematic when industry targets personal, sensitive and critical health terms — which is exactly what our research uncovered.

    Searches for cancer, exploitative ads

    Using the AI-driven marketing platform SemRush, we analyzed the search terms purchased for advertising by notorious alternative cancer clinics in Tijuana, Mexico, and in Arizona. We determined which queries were targeted and how much was spent on acquiring the advertising space matching those queries.

    We also assessed whether this spending increased traffic to their clinic websites. Our results showed that over roughly one decade, these clinics paid over an estimated US$15 million to purchase the ad spaces for thousands of search words and phrases.

    These search queries related to cancer prognosis and diagnosis, treatment options (including alternative treatments) and cancer types (including late-stage cancer). In sum, the advertising strategy generated more than 6.5 million website visits for these alternative cancer clinics.

    Alternative cancer treatments can interfere with the success of medical treatments.
    (Shutterstock)

    Negative health impacts

    Unfortunately, the success of these alternative clinics’ marketing strategies is nothing short of a disaster for the public’s health and well-being. Alternative cancer treatments are associated with an increased risk of death and offer false hope for those suffering from end-stage cancer.

    These ineffective and oftentimes dangerous treatments can financially exploit patients, disrupt end-of-life planning and interfere with evidence-based cancer or palliative treatments.

    Google is therefore enabling an advertising mechanism that spreads inaccurate and damaging cancer misinformation, which can directly lead to harmful health decisions.

    Protection from deception

    Our research focused entirely on the cancer context and analyzed the targeted search query approach of problematic clinics in two specific locations. It is imaginable — indeed very probable — that this approach is deployed in other health contexts and beyond.

    Google does have and enforce policies to protect users from deceptive advertising content. But there is little oversight regarding how advertisers may exploit its keyword ad matching features.

    It’s imperative that Google take action to restrict its ads mechanism from being used in this exploitative manner. Search results could give prominence only to websites supported by accurate scientific evidence. Google could also prohibit the purchase of advertising tied to sensitive or controversial search terms, including personal queries from vulnerable groups such as patients suffering from cancer and other life-threatening ailments.

    Google and other social media platforms benefit financially from misinformation. It is up to these companies to decide whether human health and well-being are more valuable than these financial gains. It is up to all of us to advocate for those harmed by dangerous misinformation.

    Alessandro Marcon works at the University of Alberta’s Health Law Institute, which has received funding related to this project from CIHR.

    Marco Zenone is the recipient of the Banting Postdoctoral Fellowship from the Canadian Institutes of Health Research.

    ref. Google searches for information about cancer lead to targeted ads from alternative clinics – https://theconversation.com/google-searches-for-information-about-cancer-lead-to-targeted-ads-from-alternative-clinics-255372
