Category: Global

  • MIL-OSI Global: Tory MPs have accidentally knocked out their own man – and reminded voters why they lost the last election

    Source: The Conversation – UK – By Ben Worthy, Lecturer in Politics, Birkbeck, University of London

    The Conservative party leadership ballot is a private affair. The MPs don’t have to reveal who they voted for if they don’t want to. And given how badly they appear to have bungled their final round of voting in this contest, it seems unlikely we’ll ever know what really happened.

James Cleverly was the firm favourite among MPs. Yet his supporters' attempt to manoeuvre him into the final two alongside the candidate they felt most sure of beating in the final run-off, when party members vote, seems to have backfired.

It would appear Cleverly and his supporters forgot Lyndon B. Johnson’s first rule of politics – learn to count. As a result, party members now have a choice between two rightwing candidates, Robert Jenrick and Kemi Badenoch. Both are popular among members but less electable, and less palatable to the wider public. The debacle has exposed (not for the first time) the problems with the electoral system.

    Cleverly was seen as the unifier of the party, with the ministerial experience and communication skills to help with a transformation. He had wowed party conference with a well-calibrated speech hinting that the party needed to “normalise” to regain trust. Yet his record leaves questions as to exactly how good his communication skills are in reality. He had made several “jokes”, which were not jokes at all – just offensive comments – and reportedly described his own government’s immigration policy as “batshit”.

    A Telegraph article just before his shock loss in the parliamentary party vote feared he would “sign the death warrant” of the party as a “middle-of-the-road bluffer who tickles the tummies of members of the parliamentary party by flattering them that their historic defeat was not so bad after all”. Yet judging by the audible gasps when the result was announced, Tory MPs were shocked at how they had messed the vote up. Both the Liberal Democrats and Labour reacted with glee at the news.

    Tory MPs react to the news that they’ve inadvertently knocked out their favourite candidate.

    The final two

Badenoch has less ministerial experience than Cleverly but is loved by the Tory party as a battler and is now the favourite to win. The same “death warrant” article called Badenoch a “Warrior Queen”, but that cuts both ways. Badenoch, by channelling her inner Thatcher, is pitching herself as a fighter taking on the forces of reaction within and without. But, to quote another Tory, the Duke of Wellington, Thatcher would only fight battles she knew she could win. Badenoch’s battles seem rather less focused, and her war on the forces of woke now includes new mothers and civil servants (10% of whom, in her view, should be in prison).

Another recent article, this time in the Guardian, spoke of how “she often finds it hard to get through an interview without patronising or arguing with the presenter in a manner that reinforces claims she’s divisive and abrasive”. At the same time, her attempt to tell “hard truths” saw her publishing a lengthy pamphlet featuring some triangles – seemingly explaining electoral realignment – which no one could understand. Not ideal attributes for a leader.

So far in this contest, Jenrick’s most notable interventions have been to grandstand about the European Court of Human Rights (ECHR), compete to be toughest on immigration, and (we need to follow the logic slowly here) argue that the ECHR is causing UK special forces to kill instead of capture terrorists. Jenrick is the living embodiment of the old Groucho Marx joke: “those are my principles, and if you don’t like them…well, I have others”. He has made either a Damascene or cynical journey from squishy centre to hard right just ahead of this contest. What does he really believe? No one is sure.

    The reasons for the Tories’ recent catastrophic election loss are in plain sight. Voters saw the Conservative governments as a toxic combination of poor delivery, scandals and being out of touch. The 2024 defeat was a combination of Boris Johnson’s immorality and Liz Truss’s incompetence. Rishi Sunak then finally fractured his own coalition with a self-defeating immigration policy. None of the candidates have addressed the reasons for the loss and the final two are evidently still in denial.

But it is the Tory members who are voting here. Their version of events is that disunity and a failure to deliver on immigration lost them power. Members may well be torn, as political scientist Tim Bale points out, between values and electability – though with Cleverly out, the latter may be a problem.

Peering through the fog of the contest, two things are very likely. First, Johnson’s shifting of the party to the right, and his closer alignment of the Tory party with the remnants of UKIP, are now more evident, and will be further deepened by whoever wins. While Badenoch and Jenrick differ on whether they should beat or join Reform, the Tory party is now on the latter’s territory. There are unlikely to be any Tory “hard truths” to address the electorate’s loss of trust in the party; instead the talking points will be culture wars, immigration, and leaving the ECHR.

Second, as a result, the party will move further from the centre ground, and away from the average voter and their concerns. The mess the parliamentary party has made of the contest and the long shadow of dysfunctional leadership have served only to remind voters of the reasons why the party was thrown out of office in July. Peering through his snazzy new glasses, Starmer can see his bad week just got a lot better.

The author does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Tory MPs have accidentally knocked out their own man – and reminded voters why they lost the last election – https://theconversation.com/tory-mps-have-accidentally-knocked-out-their-own-man-and-reminded-voters-why-they-lost-the-last-election-240983

    MIL OSI – Global Reports

  • MIL-OSI Global: Dark energy: could the mysterious force seen as constant actually vary over cosmic time?

    Source: The Conversation – UK – By Robert Nichol, Pro Vice-Chancellor and Executive Dean, University of Surrey

    Globular cluster NGC 2005. ESA/Hubble & Nasa, F. Niederhofer, L. Girardi, CC BY-SA

As I finished my PhD in 1992, the universe was full of mystery – we didn’t even know exactly what it was made of. One could argue that cosmologists had made little progress in our understanding of these basic facts since the discovery of the cosmic microwave background (CMB), the afterglow of the Big Bang, in the 1960s.

    I left the UK after my doctoral studies to begin a research career in the US, where I was lucky to be recruited to work on a new experiment called the Sloan Digital Sky Survey (SDSS). This new survey embraced advances in digital technologies with the ambition of measuring the “redshifts” (how light becomes more red if a source appears to move away from you) of a million galaxies.

    These redshifts were then used to measure distances, and allowed cosmologists to map the three-dimensional structure of the universe.
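As a rough illustration of the quantity being measured (the wavelengths below are hypothetical examples, not SDSS data), the textbook definition of redshift is simply the fractional shift in wavelength:

```python
def redshift(observed_nm: float, emitted_nm: float) -> float:
    """z = (lambda_observed - lambda_emitted) / lambda_emitted:
    how much a spectral line has been stretched by cosmic expansion."""
    return (observed_nm - emitted_nm) / emitted_nm

# Hydrogen-alpha is emitted at 656.3 nm; suppose a galaxy's spectrum
# shows that line at 721.9 nm (a hypothetical observation):
print(round(redshift(721.9, 656.3), 3))  # ~0.1
```

Larger redshifts correspond to greater distances, which is how a catalogue of redshifts becomes a three-dimensional map.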

One cosmic puzzle in the 1980s, based on the pioneering CfA Redshift Survey of Margaret Geller and John Huchra, was the significant lumpiness of galaxies, and therefore matter, in our cosmic neighbourhood. Galaxies were clustered together across a wide range of scales, with evidence for coherent “superclusters” of galaxies spanning more than 30 million light years.


    This article is part of our series Cosmology in crisis? which uncovers the greatest problems facing cosmologists today – and discusses the implications of solving them.


    It was important to know how such superclusters could have formed from the smooth CMB, as it would tell us the total amount of matter in the universe and, more intriguingly, what that matter was made of. That was assuming the only force in play was gravity.

    By the end of the first phase of the SDSS, we had achieved our goal of a million redshifts. This data was used to discover many superclusters across the universe, including the amazing “Sloan Great Wall”, which remains one of the largest known coherent structures in the universe, over a billion light years in length.

Type Ia supernova remnant.
    Nasa/CXC/U.Texas

    I am lucky to have lived through this amazing era of cosmic discovery around the turn of the century. Surveys like SDSS, combined with new observations of the CMB and searches for distant exploding stars known as Type Ia Supernovae (SNeIa), coincided to deliver an emphatic answer to the question: “What is the universe made of?”

    The discovery of dark energy

    From 1999 to 2004, the cosmological community came together to agree that the universe was 5% normal (baryonic) matter, 25% dark matter (unknown, invisible matter), and 70% “dark energy” (an expansive force) – essentially a cosmological constant, which was first postulated by Einstein. The discovery that the universe was dominated by this constant energy shocked everyone, especially as Einstein had called the cosmological constant his “biggest blunder”.

    Today, cosmologists still agree this is the most likely make-up of our universe. But observational cosmologists like me have refined our measurements of these cosmic variables significantly – reducing the errors on these quantities.

The latest numbers from the Dark Energy Survey (DES) indicate that 31.5% of the universe is matter (a combination of dark and normal), with the remainder being dark energy, assuming a cosmological constant. The error on this measurement is just 3%.

    Knowing these numbers to higher precision will hopefully help cosmologists understand why the universe is like this. Why would we expect to have 70% of the universe today as “dark” (can’t be seen via electromagnetic radiation) and not associated with “matter” like everything else in the universe?

    The origin of this dark energy remains the biggest challenge to physics, even after 20 years of intense study.

    Intriguing measurements

    Like me, a few cosmologists have become distracted by other problems over the last two decades. However, 2024 could be the start of a new era of discovery. This year, cosmologists published new results based on two of our best cosmological probes.

    The first probe consists of exploding stars dubbed “SNeIa”. As these stars have a narrow range of masses, their explosions can be well calibrated, giving cosmologists a predictable brightness that can be seen far away. By comparing the known brightness of these SNeIa to their redshifts, we can determine the expansion history of the universe. These objects were, in fact, critical for discovering that the expansion of our universe is accelerating.
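The "known brightness" comparison works through what astronomers call the distance modulus. As a simplified sketch (the magnitudes below are illustrative; real SNeIa analyses involve careful light-curve calibration):

```python
# Distance modulus: m - M = 5 * log10(d / 10 pc), so a standard candle's
# apparent brightness directly yields its distance.

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Invert the distance modulus to get distance in parsecs."""
    return 10 ** ((apparent_mag - absolute_mag) / 5 + 1)

# Calibrated SNeIa peak near absolute magnitude -19.3; suppose one is
# observed at apparent magnitude 16.7 (a hypothetical supernova):
d = distance_parsecs(16.7, -19.3)
print(f"{d / 1e6:.0f} Mpc")
```

Pairing distances like this with redshifts, point by point, traces out the universe's expansion history.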

    The second probe works by looking at Baryon Acoustic Oscillations (BAO) – relics of predictable sound waves in the plasma (charged gas) of the early universe, before the CMB. These are now frozen into the large-scale structure of galaxies around us. Like SNeIa, their predictable size can be compared with their observed size today to measure the expansion history of the universe.

    Recently, DES reported its final SNeIa results from over a decade of work, detecting and characterising many thousands of supernova events. While these SNeIa results are consistent with the orthodox view that the universe is dominated by a cosmological constant, they do leave open the tantalising possibility of new physics – namely, that the dark energy could be varying with cosmic time.

    That said, scientists are trained to be sceptical, and there are many reasons to distrust a single experiment, single observation, or even a single set of cosmologists!

    Cosmologists now go to extraordinary lengths to “blind” their results from themselves during analysis of the data, only revealing the answer at the last moment. This blinding is done to avoid unconscious human biases affecting the work, which could possibly encourage people to get the answer they believe they should see.

    This is why repeatability of results is at the heart of all science. In cosmology, we cherish the need for multiple experiments checking and challenging each other.

    The second result to turn heads was the first BAO measurements from the Dark Energy Spectroscopic Instrument (DESI), successor to the SDSS. The first DESI map of the cosmos is deeper and denser than the original SDSS. Its first BAO results are intriguing – the data alone is still consistent with a cosmological constant, but with hints of a possible time-varying dark energy when combined with other data sources.

    DESI in the dome of the Nicholas U. Mayall 4-meter Telescope at the Kitt Peak National Observatory.
    wikipedia, CC BY-SA

    In particular, when DESI analyses the combination of its BAO results with the final DES SNeIa data, the significance of a time-varying dark energy increases to 3.9 sigma (a measure of how unusual the data would be if a hypothesis were true) – roughly a one-in-10,000 chance of being a statistical fluke.

    Most of us would take such odds, but scientists have been hurt before by systematic errors within their data that can mimic such statistical certainty. Particle physicists therefore demand a discovery standard of 5 sigma for any claim of new physics – less than a one-in-a-million chance that the signal is a statistical fluke!
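For readers curious about the conversion behind these figures, the standard two-tailed Gaussian rule of thumb can be sketched in a few lines of Python. (This is a simplified illustration: published significances can fold in one- versus two-tailed choices and other analysis details, so they will not always match this raw conversion.)

```python
import math

def sigma_to_p(sigma: float) -> float:
    """Two-tailed probability of a Gaussian fluctuation at least
    `sigma` standard deviations from the mean."""
    return math.erfc(sigma / math.sqrt(2))

for s in (3.0, 3.9, 5.0):
    print(f"{s} sigma -> p = {sigma_to_p(s):.2e}")
```

At 5 sigma the two-tailed probability is about 5.7 × 10⁻⁷ – the "less than one in a million" discovery standard.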

    As scientists will say: “Extraordinary claims require extraordinary evidence.”

    Mindboggling implications

    Are we entering a new era of cosmological discovery? If so, what would it mean?

    The answer to my first question is probably yes. The next few years will be fun for cosmologists, with new data and results due from the European Space Agency’s Euclid mission. Launched last year, it is already scanning the sky with unprecedented accuracy.

    Likewise, DESI will get more and better data, while the European Southern Observatory starts its own massive redshift survey in 2025. Then you have the Rubin Observatory in Chile coming online soon. Combining these datasets should prove beyond doubt whether dark energy varies with cosmic time.

    If it does, it implies there is less dark energy now than in the past. This could be caused by many things but, interestingly, it could signify the end of a present, accelerated phase of the expansion of the universe.

    It also implies that dark energy is probably not a cosmological constant, which is thought to arise from the background energy associated with empty space. According to quantum mechanics, empty space isn’t really empty, with particles popping in and out of existence creating something we call “vacuum energy”. Ironically, predictions of this vacuum energy disagree with our cosmological observations by many orders of magnitude.

    So, if we did discover that dark energy varies over time, it might explain why observations are at odds with quantum mechanics, which is an extremely well-tested theory. This would suggest the assumption in the standard model of cosmology, that dark energy is constant, needs a rethink. Such a realisation may help solve other mysteries about the universe – or pose new ones.

    In short, the new cosmological observations coming this decade will stimulate a new era of physical thinking. Congratulations to my younger cosmologists: it is your era to have fun.






    Robert Nichol receives funding from STFC for work on 4MOST.

    ref. Dark energy: could the mysterious force seen as constant actually vary over cosmic time? – https://theconversation.com/dark-energy-could-the-mysterious-force-seen-as-constant-actually-vary-over-cosmic-time-238247

    MIL OSI – Global Reports

  • MIL-OSI Global: Charging, not range, is becoming a top concern for electric car drivers

    Source: The Conversation – USA – By Alan Jenn, Associate Professional Researcher in Transportation, University of California, Davis

    A Nissan Leaf charges at a station in Pasadena, Calif., on Sept. 23, 2024. Mario Tama/Getty Images

    The Biden administration is using tax credits, regulations and federal investments to shift drivers toward electric vehicles. But drivers will make the switch only if they are confident they can find reliable charging when and where they need it.

    Over the past four years, the number of public charging ports across the U.S. has doubled. As of August 2024, the nation had 192,000 publicly available charging ports and was adding about 1,000 public chargers weekly. Infrastructure rarely expands at such a fast rate.

    Agencies are allocating billions of dollars authorized through the 2021 Bipartisan Infrastructure Law for building charging infrastructure. This expansion is making long-distance EV travel more practical. It also makes EV ownership more feasible for people who can’t charge at home, such as some apartment dwellers.

    Charging technology is also improving. Speeds are now reaching up to 350 kilowatts – fast enough to charge a standard electric car in less than 10 minutes. The industry has also begun to shift to a standard called ISO 15118, which governs the interface between EVs and the power grid.
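As a back-of-the-envelope check on that 10-minute figure (the pack size and charging window here are hypothetical, and real sessions taper as the battery fills):

```python
def charge_minutes(battery_kwh: float, charger_kw: float,
                   start_soc: float = 0.1, end_soc: float = 0.8) -> float:
    """Idealized charge time at constant power: energy needed (kWh)
    divided by charger power (kW), converted to minutes.
    Real chargers throttle as the state of charge rises."""
    energy_needed_kwh = battery_kwh * (end_soc - start_soc)
    return energy_needed_kwh / charger_kw * 60

# A hypothetical 70 kWh pack charged from 10% to 80% at 350 kW:
print(round(charge_minutes(70, 350), 1), "minutes")
```

Even this idealized estimate lands under 10 minutes, which is why 350-kilowatt hardware is such a step change from earlier 50-kilowatt fast chargers.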

    This standard enables a plug-and-charge system: Just plug in the charger and you’re done, without contending with apps or multiple payment systems. Many existing chargers can be retrofitted to support it, rather than being replaced with entirely new units.

    Tesla’s decision to open its reliable Supercharger network to non-Tesla vehicles promises to further expand access to fast chargers, although this shift is proceeding slowly.

    Severed cable on a vandalized EV charger in the Tarzana neighborhood of Los Angeles on May 16, 2024.
    Patrick T. Fallon/AFP via Getty Images

    As a researcher studying adoption of EVs, I’m encouraged by these advancements. But there’s still a need to make the charging experience more reliable and accessible for everyone. Stories of charging woes abound online and are a popular focus for EV critics. Here are the key issues drivers are confronting.

    Broken, slow or inaccessible

    Although EV charging infrastructure has improved in the past several years, reliability is still a critical issue. For example, a 2022 study by researchers at the University of California, Berkeley, found that nearly 30% of public non-Tesla fast chargers in the Bay Area didn’t work. A national study in 2023 that used artificial intelligence models to analyze driver reviews of EV charging stations reached a similar result.

    These findings highlight the need for more robust maintenance and monitoring systems across charging networks. Federal guidelines require that chargers must have an average annual “uptime,” or functional time, greater than 97%, but this metric is not always as clear-cut as it sounds. While many charging-point operators report high uptime percentages, their figures often exclude factors such as slow charging speeds or incomplete charges that degrade users’ experience.

    Cars waiting to charge at a center in San Diego.
    Gil Tal, CC BY-ND

    Many drivers complain about throttling – chargers that dispense electricity at less than the maximum rate the car is capable of accepting, so the car charges more slowly than expected. Sometimes this is normal: Cars will charge more slowly as their battery gets closer to full in order to avoid damaging the battery. Other factors can include weather conditions and the number of other vehicles simultaneously using the charging station.

    Drivers’ issues with chargers involve more than just uptime. Technical barriers, such as payment processing and vehicle-charger communication, sometimes can prevent a charge from starting or completing.

    To ensure that all EVs can charge smoothly at any network, groups such as the National Charging Experience Consortium and CharIN are bringing automakers, charging providers and national laboratories together to address these issues.

    Other obstacles are more local, such as long lines at charging stations and chargers that are blocked by parked cars, snowbanks or other obstacles. Finding vehicles with internal combustion engines parked in EV charger spots is common enough that it has a name: getting ICEd. There’s a clear need for more comprehensive solutions to help the charging experience keep pace with demand for EVs.

    A Wall Street Journal tech columnist finds abundant chargers – with abundant challenges – in Los Angeles.

    A street-level view

    At the University of California, Davis, we are working with the California Energy Commission to understand the range of charging obstacles that EV drivers face. As part of a three-year study, we are sending undergraduate students out to test thousands of chargers across the entire state of California.

    So far, our results show that just over 70% of charge attempts have succeeded. Many issues have caused failed charges, including traffic congestion at charging stations, damaged or offline chargers, difficulty using navigation apps to find charging stations, and malfunctioning chargers.

    Quantity and quality both matter

    As federal investments continue to pour money into EV charging, our findings indicate that it’s important to use these resources not only to expand the network but also to improve the user experience at every step.

    Areas for improvement include stricter oversight of charger maintenance; more robust uptime requirements that reflect real-world performance; and better collaboration between automakers, charging-point operators and software providers to ensure that vehicles and chargers can work together seamlessly.

    The future of EV adoption depends not just on how many chargers are available, but on how reliable and easy they are to use. By addressing specific pain points that drivers face, policymakers and industry leaders can create a charging ecosystem that truly supports the needs of all EV drivers. Reliability is key to unlocking widespread confidence in the EV charging infrastructure and ensuring that it can keep pace with the growing number of electric vehicles on the road.

    Alan Jenn receives funding from the California Energy Commission and is a participant in the National Charging Experience Consortium (ChargeX)

    ref. Charging, not range, is becoming a top concern for electric car drivers – https://theconversation.com/charging-not-range-is-becoming-a-top-concern-for-electric-car-drivers-240496

    MIL OSI – Global Reports

  • MIL-OSI Global: Medicare vs. Medicare Advantage: sales pitches are often from biased sources, the choices can be overwhelming and impartial help is not equally available to all

    Source: The Conversation – USA – By Grace McCormack, Postdoctoral researcher of Health Policy and Economics, University of Southern California

    It can take a lot of effort to understand the many different Medicare choices. Halfpoint Images/Moment via Getty Images

    The 67 million Americans eligible for Medicare make an important decision every October: Should they make changes in their Medicare health insurance plans for the next calendar year?

    The decision is complicated. Medicare has an enormous variety of coverage options, with large and varying implications for people’s health and finances, both as beneficiaries and taxpayers. And the decision is consequential – some choices lock beneficiaries out of traditional Medicare.

    Beneficiaries choose an insurance plan when they turn 65 or become eligible based on qualifying chronic conditions or disabilities. After the initial sign-up, most beneficiaries can make changes only during the open enrollment period each fall.

    The 2024 open enrollment period, which runs from Oct. 14 to Dec. 7, marks an opportunity to reassess options. Given the complicated nature of Medicare and the scarcity of unbiased advisers, however, finding reliable information and understanding the options available can be challenging.

    We are health care policy experts who study Medicare, and even we find it complicated. One of us recently helped a relative enroll in Medicare for the first time. She’s healthy, has access to health insurance through her employer and doesn’t regularly take prescription drugs. Even in this straightforward scenario, the number of choices was overwhelming.

    The stakes of these choices are even higher for people managing multiple chronic conditions. There is help available for beneficiaries, but we have found that there is considerable room for improvement – especially in making help available for everyone who needs it.

    The choice is complex, especially when you are signing up for the first time and if you are eligible for both Medicare and Medicaid. Insurers often engage in aggressive and sometimes deceptive advertising and outreach through brokers and agents. Choose unbiased resources to guide you through the process, like http://www.shiphelp.org. Make sure to start before your 65th birthday for initial sign-up, look out for yearly plan changes, and start well before the Dec. 7 deadline for any plan changes.

    2 paths with many decisions

    Within Medicare, beneficiaries have a choice between two very different programs. They can enroll in either traditional Medicare, which is administered by the government, or one of the Medicare Advantage plans offered by private insurance companies.

    Within each program are dozens of further choices.

    Traditional Medicare is a nationally uniform cost-sharing plan for medical services that allows people to choose their providers for most types of medical care, usually without prior authorization. Deductibles for 2024 are US$1,632 for hospital costs and $240 for outpatient and medical costs. Patients also have to chip in daily starting on Day 61 of a hospital stay and Day 21 of a skilled nursing facility stay. After the yearly deductible, Medicare pays 80% of outpatient and medical costs, leaving the person with a 20% share, known as coinsurance. Traditional Medicare’s basic plan, known as Part A and Part B, also has no out-of-pocket maximum.
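As a simplified worked example of that cost-sharing (ignoring Medigap, hospital coinsurance days and any provider excess charges; the $1,000 bill is hypothetical):

```python
PART_B_DEDUCTIBLE = 240.0  # 2024 outpatient/medical deductible
COINSURANCE_RATE = 0.20    # patient's share after the deductible

def part_b_out_of_pocket(allowed_charges: float,
                         deductible_paid_so_far: float = 0.0) -> float:
    """Rough patient cost for Part B services under traditional
    Medicare: remaining deductible first, then 20% of the rest."""
    remaining_deductible = max(PART_B_DEDUCTIBLE - deductible_paid_so_far, 0.0)
    deductible_due = min(allowed_charges, remaining_deductible)
    coinsurance_due = (allowed_charges - deductible_due) * COINSURANCE_RATE
    return deductible_due + coinsurance_due

# A hypothetical $1,000 outpatient bill at the start of the year:
print(part_b_out_of_pocket(1000.0))  # 240 deductible + 20% of 760 = 392.0
```

With no out-of-pocket maximum, that 20% share keeps accruing on every subsequent bill, which is why many beneficiaries buy Medigap coverage.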

    Traditional Medicare starts with Medicare parts A and B.
    Bill Oxford/iStock via Getty Images

    People enrolled in traditional Medicare can also purchase supplemental prescription drug coverage, known as Part D, from a private insurance company. And they can purchase supplemental coverage, known as Medigap, to lower or eliminate their deductibles, coinsurance and copayments, cap costs for Parts A and B, and add an emergency foreign travel benefit.

    Part D plans cover prescription drug costs for about $0 to $100 a month. People with lower incomes may get extra financial help by signing up for the Medicare program Part D Extra Help or state-sponsored pharmaceutical assistance programs.

    There are 10 standardized Medigap plans, also known as Medicare supplement plans. Depending on the plan, and the person’s gender, location and smoking status, Medigap typically costs from about $30 to $400 a month when a beneficiary first enrolls in Medicare.

    The Medicare Advantage program allows private insurers to bundle everything together and offers many enrollment options. Compared with traditional Medicare, Medicare Advantage plans typically offer lower out-of-pocket costs. They often bundle supplemental coverage for hearing, vision and dental, which is not part of traditional Medicare.

    But Medicare Advantage plans also limit provider networks, meaning that people who are enrolled in them can see only certain providers without paying extra. In comparison to traditional Medicare, Medicare Advantage enrollees on average go to lower-quality hospitals, nursing facilities, and home health agencies but see higher-quality primary care doctors.

    Medicare Advantage plans also often require prior authorization – often for important services such as stays at skilled nursing facilities, home health services and dialysis.

    Choice overload

    Understanding the tradeoffs between premiums, health care access and out-of-pocket health care costs can be overwhelming.

    Turning 65 begins the process of taking one of two major paths, which each have a thicket of health care choices.
    Rika Kanaoka/USC Schaeffer Center for Health Policy & Economics

    Though options vary by county, the typical Medicare beneficiary can choose between as many as 10 Medigap plans and 21 standalone Part D plans, or an average of 43 Medicare Advantage plans. People who are eligible for both Medicare and Medicaid, or have certain chronic conditions, or are in a long-term care facility have additional types of Medicare Advantage plans known as Special Needs Plans to choose among.

    Medicare Advantage plans can vary in terms of networks, benefits and use of prior authorization.

    Different Medicare Advantage plans have varying and large impacts on enrollee health, including dramatic differences in mortality rates. Researchers found a 16% difference per year between the best and worst Medicare Advantage plans, meaning that for every 100 people in the worst plans who die within a year, they would expect only 84 people to die within that year if all had been enrolled in the best plans instead. They also found plans that cost more had lower mortality rates, but plans that had higher federal quality ratings – known as “star ratings” – did not necessarily have lower mortality rates.

    The quality of different Medicare Advantage plans, however, can be difficult for potential enrollees to assess. The federal plan finder website lists available plans and publishes a quality rating of one to five stars for each plan. But in practice, these star ratings don’t necessarily correspond to better enrollee experiences or meaningful differences in quality.

    Online provider networks can also contain errors or include providers who are no longer seeing new patients, making it hard for people to choose plans that give them access to the providers they prefer.

    While many Medicare Advantage plans boast about their supplemental benefits, such as vision and dental coverage, it’s often difficult to understand how generous this supplemental coverage is. For instance, while most Medicare Advantage plans offer supplemental dental benefits, cost-sharing and coverage can vary. Some plans don’t cover services such as extractions and endodontics, which includes root canals. Most plans that cover these more extensive dental services require some combination of coinsurance, copayments and annual limits.

    Even when information is fully available, mistakes are likely.

    Part D beneficiaries often fail to accurately evaluate premiums and expected out-of-pocket costs when making their enrollment decisions. Past work suggests that many beneficiaries have difficulty processing the proliferation of options. A person’s relationship with health care providers, financial situation and preferences are key considerations. The consequences of enrolling in one plan or another can be difficult to determine.

    The trap: Locked out

    At 65, when most beneficiaries first enroll in Medicare, federal regulations guarantee that anyone can get Medigap coverage. During this initial sign-up, beneficiaries can’t be charged a higher premium based on their health.

    Older Americans who enroll in a Medicare Advantage plan but then want to switch back to traditional Medicare after more than a year has passed lose that guarantee. This can effectively lock them out of enrolling in supplemental Medigap insurance, making the initial decision a one-way street.

    For the initial sign-up, Medigap plans are “guaranteed issue,” meaning the plan must cover preexisting health conditions without a waiting period and must allow anyone to enroll, regardless of health. They also must be “community rated,” meaning that the cost of a plan can’t rise because of age or illness, although it can go up due to other factors such as inflation.

    People who enroll in traditional Medicare and a supplemental Medigap plan at 65 can expect to continue paying community-rated premiums as long as they remain enrolled, regardless of what happens to their health.

    In most states, however, people who switch from Medicare Advantage to traditional Medicare don’t have as many protections. Most state regulations permit plans to deny coverage, impose waiting periods or charge higher Medigap premiums based on an applicant’s expected health costs. Only Connecticut, Maine, Massachusetts and New York guarantee that people can get Medigap plans after the initial sign-up period.
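    The rules described above can be sketched as a simple eligibility check. This is a simplification for illustration only: it encodes just the article's two conditions (the roughly one-year window of protection and the four guaranteed-issue states), and real Medigap eligibility involves further exceptions and trial rights.

```python
# Simplified sketch of the Medigap guaranteed-issue rules described above.
# Encodes only the article's description, not the full federal or state rules.
GUARANTEED_ISSUE_STATES = {"CT", "ME", "MA", "NY"}

def medigap_guaranteed_issue(state: str, months_since_first_enrollment: int) -> bool:
    """Return True if the rules described in the article guarantee a Medigap policy."""
    # Everyone is protected around initial enrollment; per the article,
    # the guarantee is lost "after more than a year has passed."
    if months_since_first_enrollment <= 12:
        return True
    # After that window, only a few states guarantee issue regardless of health.
    return state in GUARANTEED_ISSUE_STATES

print(medigap_guaranteed_issue("TX", 6))   # True: still within the initial window
print(medigap_guaranteed_issue("TX", 24))  # False: protections have lapsed
print(medigap_guaranteed_issue("NY", 24))  # True: New York guarantees issue
```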

    Deceptive advertising

    Information about Medicare coverage and assistance choosing a plan is available but varies in quality and completeness. Older Americans are bombarded with ads for Medicare Advantage plans that they may not be eligible for and that include misleading statements about benefits.

    A November 2022 report from the U.S. Senate Committee on Finance found deceptive and aggressive sales and marketing tactics, including mailed brochures that implied government endorsement, telemarketers who called up to 20 times a day, and salespeople who approached older adults in the grocery store to ask about their insurance coverage.

    The Department of Health and Human Services tightened rules for 2024, requiring third-party marketers to include federal resources about Medicare, including the website and toll-free phone number, and limiting the number of contacts from marketers.

    Although the government has the authority to review marketing materials, enforcement is partially dependent on whether complaints are filed. Complaints can be filed with the federal government’s Senior Medicare Patrol, a federally funded program that prevents and addresses unethical Medicare activities.

    Meanwhile, the number of people enrolled in Medicare Advantage plans has grown rapidly, doubling since 2010 and accounting for more than half of all Medicare beneficiaries by 2023.

    Nearly one-third of Medicare beneficiaries seek information from an insurance broker. Brokers sell health insurance plans from multiple companies. However, because they receive payment from plans in exchange for sales, and because they are unlikely to sell every option, a plan recommended by a broker may not meet a person’s needs.

    Help is out there − but falls short

    An alternative source of information is the federal government, which offers three resources to help people choose a plan: 1-800-Medicare, medicare.gov and the State Health Insurance Assistance Program, also known as SHIP.

    The SHIP program combats misleading Medicare advertising and deceptive brokers by connecting eligible Americans with counselors by phone or in person to help them choose plans. Many people say they prefer meeting in person with a counselor over phone or internet support. SHIP staff say they often help people understand what’s in Medicare Advantage ads and disenroll from plans they were directed to by brokers.

    Telephone SHIP services are available nationally, but one of us and our colleagues have found that in-person SHIP services are not available in some areas. We tabulated areas by ZIP code in 27 states and found that although more than half of the locations had a SHIP site within the county, areas without a SHIP site included a larger proportion of people with low incomes.

    Virtual services are an option that’s particularly useful in rural areas and for people with limited mobility or little access to transportation, but they require online access. Virtual and in-person services, where both a beneficiary and a counselor can look at the same computer screen, are especially useful for looking through complex coverage options.

    We also interviewed SHIP counselors and coordinators from across the U.S.

    As one SHIP coordinator noted, many people are not aware of all their coverage options. For instance, one beneficiary told a coordinator, “I’ve been on Medicaid and I’m aging out of Medicaid. And I don’t have a lot of money. And now I have to pay for my insurance?” As it turned out, the beneficiary was eligible for both Medicaid and Medicare because of their income, and so had to pay less than they thought.

    The interviews made clear that many people are not aware that Medicare Advantage ads and insurance brokers may be biased. One counselor said, “There’s a lot of backing (beneficiaries) off the ledge, if you will, thanks to those TV commercials.”

    Many SHIP staff counselors said they would benefit from additional training on coverage options, including for people who are eligible for both Medicare and Medicaid. The SHIP program relies heavily on volunteers, and there is often greater demand for services than the available volunteers can offer. Additional counselors would help meet needs for complex coverage decisions.

    The key to making a good Medicare coverage decision is to use the help available and weigh your costs, access to preferred health providers, and current health and medication needs – and to consider how those needs might change as time goes on.

    This article is part of an occasional series examining the U.S. Medicare system.

    Grace McCormack receives funding from the Commonwealth Fund and Arnold Ventures.

    Melissa Garrido receives funding from the Commonwealth Fund, the Laura and John Arnold Foundation, and the National Institutes of Health for Medicare-related research, including research discussed in this piece.

    ref. Medicare vs. Medicare Advantage: sales pitches are often from biased sources, the choices can be overwhelming and impartial help is not equally available to all – https://theconversation.com/medicare-vs-medicare-advantage-sales-pitches-are-often-from-biased-sources-the-choices-can-be-overwhelming-and-impartial-help-is-not-equally-available-to-all-236635

    MIL OSI – Global Reports

  • MIL-OSI Global: Why Trump accuses people of wrongdoing he himself committed − an explanation of projection

    Source: The Conversation – USA – By April Johnson, Associate Professor of Political Science, Kennesaw State University

    Donald Trump, at an Oct. 3, 2024, rally in Michigan, accuses others of acts he has committed himself. AP Photo/Carlos Osorio

    Donald Trump has a particular formula he uses to convey messages to his supporters and opponents alike: He highlights others’ wrongdoings even though he has committed similar acts himself.

    On Oct. 3, 2024, Trump accused the Biden administration of spending Federal Emergency Management Agency funds – money meant for disaster relief – on services for immigrants. Biden did no such thing, but Trump did during his time in the White House, including to pay for additional detention space.

    This is not the first time he has accused someone of something he had done or would do in the future. In 2016, Trump criticized opponent Hillary Clinton’s use of an unsecured personal email server while secretary of state as “extreme carelessness with classified material.” But once he was elected, Trump continued to use his unsecured personal cellphone while in office. And he has been criminally charged with illegally keeping classified government documents after he left office and storing them in his bedroom, bathroom and other places at his Mar-a-Lago estate.

    After complaining about how Hillary Clinton handled classified documents, Donald Trump stored national secrets in a bathroom.
    Justice Department via AP

    More recently, the Secret Service arrested a man with a rifle who was allegedly planning to shoot Trump during a round of golf. In the wake of this event, Trump accused Democrats of using “inflammatory language” that stokes the fires of political violence. Meanwhile, Trump himself has a long history of making inflammatory remarks that could potentially incite violence.

    As a scholar of both politics and psychology, I’m familiar with the psychological strategies candidates use to persuade the public to support them and to cast their rivals in a negative light. This strategy Trump has used repeatedly is called “projection.” It’s a tactic people use to lessen their own faults by calling out these faults in others.

    Projection abounds

    There are plenty of examples. During his Sept. 10, 2024, debate with Vice President Kamala Harris, Trump claimed that Democrats were responsible for the July 13 assassination attempt against him. “I probably took a bullet to the head because of the things that they say about me,” he declared.

    Earlier in the debate he had falsely accused immigrants in Springfield, Ohio, of eating other people’s pets – a statement that sparked bomb threats and prompted the city’s mayor to declare a state of emergency.

    Similarly, congressional investigators and federal prosecutors have found that Trump’s remarks drew thousands of people to Washington, D.C., on Jan. 6, 2021, and encouraged them to violently storm the Capitol in order to stop the counting of electoral votes.

    Trump isn’t the only politician who uses projection. His running mate, JD Vance, claimed “the rejection of the American family is perhaps the most pernicious and the most evil thing the left has done in this country.” Critics quickly pointed out that his own family has a history of dysfunction and drug addiction.

    Projection happens on both sides of the political aisle. In reference to Trump’s proposed 10% tariff on all imported goods, the Harris campaign launched social media efforts to condemn the so-called “Trump tequila tax.” While Harris frames this proposal as a sales tax that would devastate middle-class families, she deflects from the fact that inflation has made middle-class life more expensive since she and President Joe Biden took office.

    How it works

    Projection is one example of unconscious psychological processes called defense mechanisms. Some people find it hard to accept criticism or believe information that they wish were not true. So they seek – and then provide – another explanation for the difference between what’s happening in the world and what’s happening in their minds.

    In general, this is called “motivated reasoning,” an umbrella term for the array of mental gymnastics people use to reconcile their views with reality.

    Some examples include seeking out information that confirms their beliefs, dismissing factual claims or creating alternate explanations. For example, a smoker might downplay or simply avoid information related to the link between smoking and lung cancer, or perhaps tell themselves that they don’t smoke as much as they actually do.

    Motivated reasoning is not unique to politics. It can be a challenging concept to consider because people tend to think they are fully in control of their decision-making abilities and that they are capable of objectively processing political information. The evidence is clear, however, that there are unconscious thought processes at work, too.

    Influencing the audience

    Audiences are also susceptible to unconscious psychological dynamics. Research has found that over time, people’s minds subconsciously attach emotions to concepts, names or phrases. So someone might have a particular emotional reaction to the words “gun control,” “Ron DeSantis” or “tax relief.”

    And people’s minds also unconsciously create defenses for those seemingly automatic emotions. When a person’s emotions and defenses are questioned, a phenomenon called the “backfire effect” can occur, in which the process of controlling, correcting or counteracting mistaken beliefs ends up reinforcing the person’s beliefs rather than changing them.

    For instance, some people may find it hard to believe that the candidate they prefer – whom they believe to be the best person for the job – truly lost an election. So they seek another explanation and accept explanations that justify their beliefs. Perhaps they choose to believe, even in the absence of evidence, that the race was rigged or that many fraudulent votes were cast. And when evidence to the contrary is offered, they insist their views are correct.

    Vice President Kamala Harris has campaigned with Liz Cheney, right, a prominent Republican who formerly served in Congress.
    AP Photo/Mark Schiefelbein

    A way out

    Fortunately, research shows specific ways to reduce people’s reliance on these automatic psychological processes, including reiterating and providing details of objective facts and – importantly – attempting to correct untruths via a trusted source from the same political party.

    For instance, challenges to Democrats’ belief that the Trump-affiliated conservative agenda called Project 2025 is “dangerous” would be more effective coming from a Democrat than from a Republican.

    Similarly, a counter to Trump’s claim that the international community is headed toward World War III with Democrats in the White House would be stronger coming from one of Trump’s fellow Republicans. And certainly, statements that Trump “can never be trusted with power again” carry more weight when they come from the lips of former Republican Vice President Dick Cheney than from any member of the Democratic Party.

    Critiques from within a candidate’s own party are not out of the question. But they are certainly improbable given the hotly charged climate that is election season 2024.

    April Johnson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why Trump accuses people of wrongdoing he himself committed − an explanation of projection – https://theconversation.com/why-trump-accuses-people-of-wrongdoing-he-himself-committed-an-explanation-of-projection-237912


  • MIL-OSI Global: LGBTQ rights: Where do Harris and Trump stand?

    Source: The Conversation – USA – By Marie-Amelie George, Associate Professor of Law, Wake Forest University

    The Republican Party and Democratic Party offer voters starkly different visions of LGBTQ rights in America. Douglas Rissing via Getty Images

    Polls show that LGBTQ rights will likely factor into most Americans’ pick for president this November as they choose between former Republican President Donald Trump and Vice President Kamala Harris, a Democrat.

    A March 2024 survey by independent pollster PRRI found that 68% of voters will take LGBTQ rights into consideration at the polls. Fully 30% stated that they would vote only for a candidate who shares their views on the issue.

    It is no coincidence, then, that LGBTQ rights issues feature prominently in the party platforms.

    The Republican Party’s electoral promises include cutting existing federal funding for gender-affirming care and restricting transgender students’ participation in sports. Meanwhile, the Democratic Party platform proposes to outlaw discrimination against LGBTQ people, including passing the Equality Act, which would prohibit discrimination based on sexual orientation and gender identity in housing, health care and public accommodations.

    As a legal scholar who has written extensively on the history of LGBTQ rights, I have seen that the clearest indication of how a politician will act once in office is not what they promise on the campaign trail. Instead, it’s what they have done in the past.

    Let’s examine their records.

    Trump restricted some LGBTQ rights

    Trump and his running mate, U.S. Sen. JD Vance of Ohio, are both relatively new to politics, so their records on LGBTQ rights issues are slim.

    Trump enacted two policies restricting LGBTQ rights early in his one term in office. The first was his 2017 executive order Promoting Free Speech and Religious Liberty, which reinforced that federal law must respect conscience-based objections to comply with the First Amendment. This order indirectly imperiled LGBTQ rights because many LGBTQ rights battles are fought over whether conservative Christian businesses run afoul of anti-discrimination laws when they refuse to serve same-sex couples.

    A few months later, Trump banned transgender individuals from serving in the U.S. armed forces. He ultimately revoked the directive, implementing instead a new policy that allowed existing transgender soldiers to remain in the military but barred new transgender recruits from enlisting.

    Vance has opposed trans rights

    Vance, a one-term senator, has accrued a record of trying to roll back the rights of transgender Americans during his short time in public office.

    Between 2023 and 2024, Vance introduced or sponsored five bills opposing trans rights. One seeks to restrict gender-affirming care for minors by imposing criminal sanctions on doctors who perform such surgeries; another aims to do the same by exposing physicians to civil liability for either prescribing gender-affirming hormones or performing surgeries.

    JD Vance has made rolling back the rights of transgender Americans a centerpiece of his short congressional career.
    Christian Monterrosa/AFP via Getty Images

    Another Vance bill would expand health care workers’ ability to make conscience-based objections to transgender rights. One more would amend Title IX, which prohibits discrimination based on sex in education, to limit transgender student participation in athletics.

    Vance has also tried to pass legislation that would stop the Department of State from issuing passports with an unspecified “X” gender designation, a policy that launched in 2021. Gender-neutral passports allow transgender, intersex and nonbinary individuals to carry identity documents that reflect their gender identity and avoid what can be significant problems getting through airport security with misgendered IDs.

    Congress has not voted on any of these proposals.

    A ‘legislative priority’ for Harris

    Harris and her vice presidential pick, Minnesota Gov. Tim Walz, have both made LGBTQ rights a legislative priority throughout their long political careers.

    Harris initially took public office in 2003 as San Francisco’s district attorney. In that role, she established a hate crimes unit that prosecuted violence against LGBTQ youth in schools. She also trained prosecutors nationwide to counter the “gay panic” and “trans panic” defenses in court, in which lawyers attempt to justify violence as a fear-based reaction to the victim’s sexual orientation or gender identity.

    Harris became California’s attorney general in 2011 and declined to defend the state’s ban on same-sex marriage when opponents challenged the law’s constitutionality before the U.S. Supreme Court. She also joined amicus briefs supporting transgender bathroom access after North Carolina barred transgender people from using bathrooms that did not match the gender on their ID.

    Harris, however, did not unequivocally champion LGBTQ rights. In 2015, she opposed two prisoners’ request for urgent gender-confirmation surgery. She has since called for a “better understanding” of transgender health needs.

    As a U.S. senator from 2017 to 2021, Harris sponsored bills proposing to better address distinct LGBTQ issues in health care and the criminal justice system. She also sponsored five Senate bills to prohibit discrimination based on sexual orientation and gender identity in employment, housing and public accommodations. Other bills she sponsored focused on LGBTQ youth, aiming to prohibit discrimination in child welfare programs and barring federal funds from supporting so-called conversion therapy of LGBTQ teens.

    The Senate did not vote on any of these bills.

    As vice president, Harris has been part of what advocates describe as the most pro-LGBTQ administration in U.S. history.

    Since 2021, President Joe Biden has issued multiple executive orders to combat discrimination against the LGBTQ community, including by eliminating the Trump-era restrictions on transgender military service. Biden also signed into law the Respect for Marriage Act, which changed the federal definition of marriage from “a man and a woman” to “two individuals.” The statute ensures that the federal government would continue to recognize same-sex unions if the Supreme Court ever reversed its decision to legalize marriage equality.

    Walz: Ally in the statehouse

    Harris’ vice-presidential pick has a similarly extensive record backing LGBTQ rights.

    As a U.S. representative from 2007 to 2019, Walz supported efforts to grant federal benefits to same-sex couples before marriage equality became federal law. He also co-sponsored many of the House versions of the same bills as Harris.

    As Minnesota’s governor, Walz has issued several executive orders promoting LGBTQ inclusion and equity and banned conversion therapy for minors. He also declared Minnesota as a “trans refuge state” that will not enforce laws interfering with children’s access to gender-affirming care.

    Walz signs a law in 2023 that declares Minnesota to be a refuge for people traveling for gender-affirming medical care.
    Glen Stubbe/Star Tribune via Getty Images

    Starkly different records

    If elected, Trump has promised to cut federal funds for public schools that “push … gender ideology” and “keep men out of women’s sports.” Harris pledges to “defend the freedom to love who you love openly and with pride.”

    As citizens head to the polls in November, they can be confident that, on this topic at least, the candidates mean what they say.

    Marie-Amelie George does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. LGBTQ rights: Where do Harris and Trump stand? – https://theconversation.com/lgbtq-rights-where-do-harris-and-trump-stand-237298


  • MIL-OSI Global: Caitlin Clark, Christine Brennan and how racial stereotypes persist in the media’s WNBA coverage

    Source: The Conversation – USA – By Molly Yanity, Professor and Director of Sports media and Communication, University of Rhode Island

    Indiana Fever guard Caitlin Clark, right, scrambles for a loose ball against Connecticut Sun guard DiJonai Carrington during a game on Aug. 28, 2024. Brian Spurlock/Icon Sportswire via Getty Images

    The “Caitlin Clark effect” – the impact on women’s basketball of a ponytailed rookie phenom from America’s heartland – is real: The 2024 WNBA season shattered viewership, attendance and merchandise sales records.

    Clark, however, didn’t get a chance to compete for a league title.

    The Connecticut Sun eliminated Clark’s team, the Indiana Fever, in the first round of the playoffs with a two-game sweep, ending her record rookie-of-the-year campaign.

    And it may be just the latest chapter in a complicated saga steeped in race.

    During the first game of the series, the fingers of Sun guard DiJonai Carrington hit Clark in the eye as Carrington followed through on a block attempt of a Clark shot.

    During the next day’s media availability, USA Today columnist Christine Brennan recorded and posted an exchange between herself and Carrington.

    In the brief clip, the veteran sports writer asks Carrington, who is Black, if she purposely hit Clark in the eye during the previous night’s game. Though Carrington insisted she didn’t intentionally hit Clark, Brennan persisted, asking the guard if she and a teammate had laughed about the incident. The questions sparked social media outrage, statements from the players union and the league, media personalities weighing in and more.

    Hit the pause button here.

    As a longtime sports writer who has covered the WNBA – and as a journalism scholar who studies women’s sports and fandom – I’ll concede that Brennan’s line of questioning seems, on its face, like business as usual in sports journalism.

    After all, haven’t most baseball fans seen a scribe ask a pitcher if he intentionally beaned a batter?

    But Brennan’s questions were not asked in a vacuum. The emergence of a young, white superstar from the heartland has caused many new WNBA fans to pick sides that fall along racial lines. Brennan’s critics claim she was pushing a line of questioning that has dogged Black athletes for decades: that they are aggressive and undisciplined.

    Because of that, her defense of her questions – and her unwillingness to acknowledge the complexities – has left this professor disappointed in one of her journalistic heroes.

    Brennan and much of the mainstream sports media, particularly those who cover professional women’s basketball, still seem to have a racial blind spot.

    The emergence of a Black, queer league

    When the WNBA launched in 1997 in the wake of the success of the 1996 Olympic gold-medal-winning U.S. women’s basketball team, it did so under the watch of the NBA.

    The NBA set out to market its new product, in part, to a white, heterosexual fan base.

    The plan didn’t take hold.

    While the league experienced fits and starts in attendance and TV ratings over its lifetime, the demographic makeup of its players is undeniable: The WNBA is, by and large, a Black, queer league.

    In 2020, the Women’s National Basketball Players Association reported that 83% of its members were people of color, with 67% self-reporting as “Black/African-American.” While gender and sexual identity haven’t been officially reported, a “substantial proportion,” the WNBPA reported, identify as LGBTQ+.

    In 2020, the league’s diversity was celebrated as players competed in a “bubble” in Bradenton, Florida, due to the COVID-19 pandemic. They protested racial injustice, helped unseat a U.S. senator who also owned Atlanta’s WNBA franchise, and urged voters to oust former President Donald Trump from the White House.

    Racial tensions bubble to the surface

    In the middle of it all, the WNBA has more eyeballs on it than ever before. And, without mincing words, the fan base has “gotten whiter” since Clark’s debut this past summer, as The Wall Street Journal pointed out in July. Those white viewers of college women’s basketball have emphatically turned their attention to the pro game, in large part due to Clark’s popularity at the University of Iowa.

    Money is also pouring into the league through a lucrative media rights deal and new sponsorship partners.

    While the rising tide following Clark’s transition to the WNBA is certainly lifting all boats, it is also bringing detritus to the surface in the form of racist jeers from the stands and on social media.

    After the Sun dispatched the Fever, All-WNBA forward Alyssa Thomas, who seldom speaks beyond soundbites, said in a postgame news conference: “I think in my 11-year career I’ve never experienced the racial comments from the Indiana Fever fan base. … I’ve never been called the things that I’ve been called on social media, and there’s no place for it.”

    Echoes of Bird and Magic

    In “Manufacturing Consent,” a seminal work about the U.S. news business, Edward Herman and Noam Chomsky argued that media in capitalist environments do not exist to impartially report the news, but to reinforce dominant narratives of the time, even if they are false. Most journalists, they theorized, work to support the status quo.

    In sports, you sometimes see that come to light through what media scholars call “the stereotypical narrative” – a style of reporting and writing that relies on old tropes.

    Scholars who study sports media have found that reporters routinely fall back on racial stereotypes. For example, coverage portraying Black NFL quarterbacks as less intelligent but more innately gifted went on to hinder the progress of Black quarterbacks in the league.

    Magic Johnson defends a shot by Larry Bird during the 1985 NBA Finals.
    Bob Riha, Jr./Getty Images

    In Brennan’s coverage of the Carrington-Clark incident, there appear to be echoes of the way the media covered Los Angeles Lakers point guard Magic Johnson and Boston Celtics forward Larry Bird in the 1980s.

    The battles between two of the sport’s greatest players – one Black, the other white – were a windfall for the NBA, lifting the league into financial sustainability.

    But to many reporters who leaned on the dominant narrative of the time, the two stars also served as stand-ins for the racial tensions of the post-civil rights era. During the 1980s, Bird and Magic didn’t simply hoop; they were the “embodiments of their races and living symbols of how blacks and whites lived in America,” as scholars Patrick Ferrucci and Earnest Perry wrote.

    The media gatekeepers of the Magic-Bird era often relied on racial stereotypes that ultimately distorted both athletes.

    For example, early in their careers, Bird and Johnson received different journalistic treatment. Ferrucci and Perry explain how coverage of Bird “fit the dominant narrative of the time perfectly … exhibiting a hardworking and intelligent game that succeeded despite a lack of athletic prowess.” When the “flashy” Lakers and Johnson won, they wrote, it was because of “superior skill.”

    When they lost to Bird’s Celtics, they were “outworked.”

    Framing matters

    Let’s go back to Brennan.

    Few have done more for young women in the sports media industry than Brennan. In time, energy and money, she has mentored and supported young women trying to break into the field. She has used her platform to expand the coverage of women’s sports.

    Brennan defended herself in a lengthy interview on the podcast “Good Game with Sarah Spain”:

    “I think [critics are] missing the fact of what I’m trying to do, what I am doing, what I understand clearly as a journalist, asking questions and putting things out there so that athletes can then have an opportunity to answer issues that are being discussed or out there.”

    I don’t think Brennan asking Carrington about the foul was problematic. Persisting with the narrative was.

    Leaning into racial stereotypes is not simply about the language used anymore. Brennan’s video of her persistent line of questioning pitted Carrington against Clark. It could be argued that it used the stereotype of the overly physical, aggressive Black athlete, as well.

    At best, Brennan has a blind spot to the strain racism is putting on Black athletes today – particularly in the WNBA. At worst, she is digging in on that tired trope.

    A blind spot can be addressed and seen. An unacknowledged racist narrative, however, will persist.

    Molly Yanity does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Caitlin Clark, Christine Brennan and how racial stereotypes persist in the media’s WNBA coverage – https://theconversation.com/caitlin-clark-christine-brennan-and-how-racial-stereotypes-persist-in-the-medias-wnba-coverage-240272

    MIL OSI – Global Reports

  • MIL-OSI Global: AI was central to two of 2024’s Nobel prize categories. It’s a sign of things to come

    Source: The Conversation – UK – By Nello Cristianini, Professor of Artificial Intelligence, University of Bath

    The 2024 Nobel Prizes in physics and chemistry have given us a glimpse of the future of science. Artificial intelligence (AI) was central to the discoveries honoured by both awards. You have to wonder what Alfred Nobel, who founded the prizes, would think of it all.

    We are certain to see many more Nobel medals handed to researchers who made use of AI tools. As this happens, we may find the scientific methods honoured by the Nobel committee depart from straightforward categories like “physics”, “chemistry” and “physiology or medicine”.

    We may also see the scientific backgrounds of recipients retain a looser connection with these categories. This year’s physics prize was awarded to the American John Hopfield, at Princeton University, and British-born Geoffrey Hinton, from the University of Toronto. While Hopfield is a physicist, Hinton studied experimental psychology before gravitating to AI.

    The chemistry prize was shared between biochemist David Baker, from the University of Washington, and the computer scientists Demis Hassabis and John Jumper, who are both at Google DeepMind in the UK.

    There is a close connection between the AI-based advances honoured in the physics and chemistry categories. Hinton helped develop an approach used by DeepMind to make its breakthrough in predicting the shapes of proteins.

    The physics laureates, Hinton in particular, laid the foundations of the powerful field known as machine learning. This is a subset of AI that’s concerned with algorithms, sets of rules for performing specific computational tasks.

    Hopfield’s work is not in wide use today, but the backpropagation algorithm (co-invented by Hinton) has had a tremendous impact on many different sciences and technologies. Backpropagation operates on neural networks, a model of computing that mimics the human brain’s structure and function to process data, and it is what allows scientists to “train” enormous networks. While the Nobel committee did its best to connect this influential algorithm to physics, it’s fair to say that the link is not a direct one.

    Proteins can now be quickly designed to counter viruses.
    Radoxist Studio / Shutterstock

    Training a machine-learning system involves exposing it to vast amounts of data, often from the internet. Hinton’s advance ultimately enabled the training of systems such as GPT (the technology behind ChatGPT), and the AI algorithms AlphaGo and AlphaFold, developed by Google DeepMind. So, backpropagation’s impact has been enormous.
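    The core idea behind this kind of training can be sketched with a toy example. The following is purely illustrative – a single artificial neuron learning the logical OR function by repeatedly nudging its weights against the error, in the spirit of backpropagation. The learning rate, epoch count and OR task are arbitrary choices for demonstration, and bear no resemblance to the scale of systems such as GPT or AlphaFold:

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, b = 0.1, -0.1, 0.0  # initial weights and bias
lr = 1.0                    # learning rate (arbitrary)

for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # "Backward pass": derivative of the squared error with respect
        # to the neuron's pre-activation, used to adjust each weight in
        # proportion to its contribution to the mistake.
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

# After training, the neuron reproduces OR on all four inputs.
for (x1, x2), target in data:
    print(x1, x2, round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

    Real networks stack millions of such units in layers, and backpropagation pushes the error signal backwards through all of them at once – the same principle, at vastly greater scale.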

    DeepMind’s AlphaFold 2 solved a 50-year-old problem: predicting the complex structures of proteins from their molecular building blocks, amino acids.

    Every two years since 1994, scientists have held a contest to find the best ways to predict protein structures and shapes from the sequences of their amino acids. The competition is called the Critical Assessment of Structure Prediction (CASP).

    For the past few contests, CASP winners have used some version of DeepMind’s AlphaFold. There is, therefore, a direct line to be drawn from Hinton’s backpropagation to Google DeepMind’s AlphaFold 2 breakthrough.

    David Baker used a computer program called Rosetta to achieve the difficult feat of building new kinds of proteins. Both Baker’s and DeepMind’s approaches hold enormous potential for future applications.

    Attributing credit has always been a controversial aspect of the Nobel prizes. A maximum of three researchers can share a Nobel. But big advances in science are collaborative. Scientific papers may have 10, 20, 30 authors or more. More than one team might contribute to the discoveries honoured by the Nobel committee.

    This year we may see further discussion about the attribution of the backpropagation algorithm, which has been claimed by various researchers, as well as about the attribution of a discovery to a field like physics.

    We now have a new dimension to the attribution problem. It’s increasingly unclear whether we will always be able to distinguish between the contributions of human scientists and those of their artificial collaborators – the AI tools that are already helping push forward the boundaries of our knowledge.

    In the future, could we see machines take the place of scientists, with humans being consigned to a supporting role? If so, perhaps the AI tool will get the main Nobel prize with humans needing their own category.

    Nello Cristianini is affiliated with the University of Bath, and the author of two books that cover the topics of this article, The Shortcut (CRC Press, 2023) and Machina Sapiens (Mulino, 2024).

    ref. AI was central to two of 2024’s Nobel prize categories. It’s a sign of things to come – https://theconversation.com/ai-was-central-to-two-of-2024s-nobel-prize-categories-its-a-sign-of-things-to-come-240954

    MIL OSI – Global Reports

  • MIL-OSI Global: Dark energy: could the mysterious force we think of as constant actually vary over cosmic time?

    Source: The Conversation – UK – By Robert Nichol, Pro Vice-Chancellor and Executive Dean, University of Surrey

    Globular cluster NGC 2005. ESA/Hubble & Nasa, F. Niederhofer, L. Girardi, CC BY-SA

    As I finished my PhD in 1992, the universe was full of mystery – we didn’t even know exactly what it was made of. One could argue that cosmologists had made little progress on such basic questions since the discovery of the cosmic microwave background (CMB), the afterglow of the Big Bang, in the 1960s.

    I left the UK after my doctoral studies to begin a research career in the US, where I was lucky to be recruited to work on a new experiment called the Sloan Digital Sky Survey (SDSS). This new survey embraced advances in digital technologies with the ambition of measuring the “redshifts” (how light becomes more red if a source appears to move away from you) of a million galaxies.

    These redshifts were then used to measure distances, and allowed cosmologists to map the three-dimensional structure of the universe.
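    For nearby galaxies, the redshift-to-distance step is a back-of-the-envelope calculation using the Hubble–Lemaître law (v = H0 × d, with recession velocity v ≈ cz at low redshift). The sketch below is purely illustrative and assumes a Hubble constant of 70 km/s per megaparsec; it is not the SDSS pipeline, which must handle large redshifts and full cosmological corrections:

```python
# Illustrative low-redshift distance estimate via the Hubble-Lemaitre law.
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s per megaparsec

def distance_mpc(z):
    """Approximate distance in megaparsecs for a small redshift z."""
    return C_KM_S * z / H0

# A galaxy whose light is stretched by 1% (z = 0.01) comes out at roughly
# 43 megaparsecs, about 140 million light years away.
print(f"{distance_mpc(0.01):.0f} Mpc")
```

    Repeating this for a million galaxies, each tagged with its position on the sky, is what turns a redshift catalogue into a three-dimensional map.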

    One cosmic puzzle in the 1980s, based on the pioneering CfA Redshift Survey of Margaret Geller and John Huchra, was the significant lumpiness of galaxies, and therefore matter, in our cosmic neighbourhood. Galaxies were clustered together across a wide range of scales, with evidence for coherent “superclusters” of galaxies spanning more than 30 million light years.


    This article is part of our series Cosmology in crisis? which uncovers the greatest problems facing cosmologists today – and discusses the implications of solving them.


    It was important to know how such superclusters could have formed from the smooth CMB, as it would tell us the total amount of matter in the universe and, more intriguingly, what that matter was made of. That was assuming the only force in play was gravity.

    By the end of the first phase of the SDSS, we had achieved our goal of a million redshifts. This data was used to discover many superclusters across the universe, including the amazing “Sloan Great Wall”, which remains one of the largest known coherent structures in the universe, over a billion light years in length.

    Type 1A supernova remnant.
    Nasa/CXC/U.Texas

    I am lucky to have lived through this amazing era of cosmic discovery around the turn of the century. Surveys like SDSS, combined with new observations of the CMB and searches for distant exploding stars known as Type Ia Supernovae (SNeIa), coincided to deliver an emphatic answer to the question: “What is the universe made of?”

    The discovery of dark energy

    From 1999 to 2004, the cosmological community came together to agree that the universe was 5% normal (baryonic) matter, 25% dark matter (unknown, invisible matter), and 70% “dark energy” (an expansive force) – essentially a cosmological constant, which was first postulated by Einstein. The discovery that the universe was dominated by this constant energy shocked everyone, especially as Einstein had called the cosmological constant his “biggest blunder”.

    Today, cosmologists still agree this is the most likely make-up of our universe. But observational cosmologists like me have refined our measurements of these cosmic variables significantly – reducing the errors on these quantities.

    The latest numbers from the Dark Energy Survey (DES) indicate that 31.5% of the universe is matter (a combination of dark and normal), with the remainder being dark energy assuming a cosmological constant. The error on this measurement is just 3%.

    Knowing these numbers to higher precision will hopefully help cosmologists understand why the universe is like this. Why would we expect to have 70% of the universe today as “dark” (can’t be seen via electromagnetic radiation) and not associated with “matter” like everything else in the universe?

    The origin of this dark energy remains the biggest challenge to physics, even after 20 years of intense study.

    Intriguing measurements

    Like me, a few cosmologists have become distracted by other problems over the last two decades. However, 2024 could be the start of a new era of discovery. This year, cosmologists published new results based on two of our best cosmological probes.

    The first probe consists of exploding stars dubbed “SNeIa”. As these stars have a narrow range of masses, their explosions can be well calibrated, giving cosmologists a predictable brightness that can be seen far away. By comparing the known brightness of these SNeIa to their redshifts, we can determine the expansion history of the universe. These objects were, in fact, critical for discovering that the expansion of our universe is accelerating.

    The second probe works by looking at Baryon Acoustic Oscillations (BAO) – relics of predictable sound waves in the plasma (charged gas) of the early universe, before the CMB. These are now frozen into the large-scale structure of galaxies around us. Like SNeIa, their predictable size can be compared with their observed size today to measure the expansion history of the universe.

    Recently, DES reported its final SNeIa results from over a decade of work, detecting and characterising many thousands of supernova events. While these SNeIa results are consistent with the orthodox view that the universe is dominated by a cosmological constant, they do leave open the tantalising possibility of new physics – namely, that the dark energy could be varying with cosmic time.

    That said, scientists are trained to be sceptical, and there are many reasons to distrust a single experiment, single observation, or even a single set of cosmologists!

    Cosmologists now go to extraordinary lengths to “blind” their results from themselves during analysis of the data, only revealing the answer at the last moment. This blinding is done to avoid unconscious human biases affecting the work, which could possibly encourage people to get the answer they believe they should see.

    This is why repeatability of results is at the heart of all science. In cosmology, we cherish the need for multiple experiments checking and challenging each other.

    The second result to turn heads was the first BAO measurements from the Dark Energy Spectroscopic Instrument (DESI), successor to the SDSS. The first DESI map of the cosmos is deeper and denser than the original SDSS. Its first BAO results are intriguing – the data alone is still consistent with a cosmological constant, but with hints of a possible time-varying dark energy when combined with other data sources.

    DESI in the dome of the Nicholas U. Mayall 4-meter Telescope at the Kitt Peak National Observatory.
    wikipedia, CC BY-SA

    In particular, when DESI analyses the combination of its BAO results with the final DES SNeIa data, the significance of a time-varying dark energy increases to 3.9 sigma (a measure of how unusual a set of data would be if a hypothesis were true) – roughly a one-in-10,000 chance of being a statistical fluke.

    Most of us would take such odds, but scientists have been hurt before by systematic errors within their data that can mimic such statistical certainty. Particle physicists therefore demand a discovery standard of 5 sigma for any claims of new physics – or less than a one in a million chance of being wrong!
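    The sigma-to-probability conversion used in these comparisons is the standard tail calculation for a normal distribution, sketched below. This is a generic statistical illustration, not DESI’s own analysis code; the two-sided tail probability for n sigma is erfc(n/√2):

```python
import math

def p_value(n_sigma):
    """Two-sided probability of a normal fluctuation at least n_sigma large."""
    return math.erfc(n_sigma / math.sqrt(2))

# 3.9 sigma corresponds to a tail probability of order 1e-4
# (roughly one in 10,000); 5 sigma is below one in a million.
print(f"3.9 sigma -> p = {p_value(3.9):.1e}")
print(f"5.0 sigma -> p = {p_value(5.0):.1e}")
```

    This is why the jump from 3.9 to 5 sigma matters so much: the discovery threshold is not a little stricter, but hundreds of times stricter.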

    As scientists will say: “Extraordinary claims require extraordinary evidence.”

    Mindboggling implications

    Are we entering a new era of cosmological discovery? If so, what would it mean?

    The answer to my first question is probably yes. The next few years will be fun for cosmologists, with new data and results due from the European Space Agency’s Euclid mission. Launched last year, it is already scanning the sky with unprecedented accuracy.

    Likewise, DESI will get more and better data, while the European Southern Observatory starts its own massive redshift survey in 2025. Then you have the Rubin Observatory in Chile coming online soon. Combining these datasets should prove beyond doubt if dark energy varies with cosmic time.

    If it does, it implies there is less dark energy now than in the past. This could be caused by many things but, interestingly, it could signify the end of a present, accelerated phase of the expansion of the universe.

    It also implies that dark energy is probably not a cosmological constant, which is thought to arise from the background energy associated with empty space. According to quantum mechanics, empty space isn’t really empty: particles pop in and out of existence, creating something we call “vacuum energy”. Ironically, predictions of this vacuum energy disagree with our cosmological observations by many orders of magnitude.

    So, if we did discover that dark energy varies over time, it might explain why observations are at odds with quantum mechanics, which is an extremely well-tested theory. This would suggest the assumption in the standard model of cosmology, that dark energy is constant, needs a rethink. Such a realisation may help solve other mysteries about the universe – or pose new ones.

    In short, the new cosmological observations coming this decade will stimulate a new era of physical thinking. Congratulations to my younger cosmologists: it is your era to have fun.




    Read more:
    The earliest galaxies formed amazingly fast after the Big Bang. Do they break the universe or change its age?

    Read more:
    Astronomers can’t agree on how fast the universe is expanding. New approaches are aiming to break the impasse

    Read more:
    The universe is smoother than the standard model of cosmology suggests – so is the theory broken?

    Read more:
    Cosmology is at a tipping point – we may be on the verge of discovering new physics

    Robert Nichol receives funding from STFC for work on 4MOST.

    ref. Dark energy: could the mysterious force we think of as constant actually vary over cosmic time? – https://theconversation.com/dark-energy-could-the-mysterious-force-we-think-of-as-constant-actually-vary-over-cosmic-time-238247

    MIL OSI – Global Reports

  • MIL-OSI Global: The vote in Pennsylvania could decide the US election – it’s a battle for the suburbs

    Source: The Conversation – UK – By Thomas Gift, Associate Professor and Director of the Centre on US Politics, UCL

    Pennsylvania has many slogans and nicknames. “The Keystone State.” “State of Independence.” “Home of beer, chocolate, liberty and Taylor Swift.” And now: “centre of the political universe”.

    According to recent analysis by political statistician Nate Silver, how Pennsylvania swings on November 5 is likely to determine the next leader of the free world. If Kamala Harris wins the state, her odds of taking the White House reach 91%. If Trump wins, his odds skyrocket to 96%.

    That’s how much Pennsylvania’s 19 electoral votes matter (270 are needed to win the Electoral College), and how much the state is a bellwether nationally for how each candidate is performing with “must-win” voters.

    Nearly every statewide poll conducted in Pennsylvania (PA) in the last month shows a statistical tie in the presidential contest. FiveThirtyEight forecasts in its simulations that Harris would win the state 54 times out of 100 elections and Trump 46 times, meaning the state is a virtual toss-up.

    In 2016, Trump pulled off a narrow upset in PA, defeating Democrat Hillary Clinton 48.2 to 47.5%. The victory cracked the crucial “Blue Wall,” alongside Michigan and Wisconsin, which paved Trump’s path to the White House. In 2020, President Joe Biden, thanks partly to touting his family’s roots in the working-class city of Scranton, beat Trump in Pennsylvania 50 to 48.8%. In the last 10 elections, Pennsylvania has selected the eventual occupant of the Oval Office eight times.


    The world is watching as the US election campaign unfolds. Sign up to join us at a special Conversation event on October 17. Expert panellists will discuss with the audience the upcoming election and its possible fallout.


    Beyond the race for the White House, arguably there’s nowhere else with a more high-stakes race. Most notably, incumbent Democratic Senator Bob Casey has been exchanging barbs with Republican challenger Dave McCormick in an election that could tip the balance of the US Congress.

    Bellwether state

    Democratic political strategist James Carville once quipped that Pennsylvania is Philadelphia and Pittsburgh, with Alabama in between. Today, one could say it’s the Land of Walmart, Tractor Supply Co. and Fox News v the Land of Starbucks, Lululemon stores and MSNBC.

    Zooming out, an electoral map of the state looks a lot like that of the country: vast swaths of Republican red in the rural, central parts of the state, and dashes of Democratic deep blue in the east and the west denoting its population centres.

    Pennsylvania reflects the political realignment of both the Democratic and Republican parties over the past decade and more. Predominantly white, blue-collar Americans have gravitated to the Republican party. Meanwhile, affluent urbanites have remade the Democratic party, formerly a base for the working class, into the party of the college educated and those who are less likely to be religious. But the Democrats still pick up 49% of the non-college educated, and their share of the suburban vote has been rising.

    Neither presidential candidate, however, is writing off key constituencies in PA. The Harris team has opened up 50 headquarters across Pennsylvania in an effort to make inroads in conservative, rural communities. Meanwhile, Trump has made a major play for Black voters and had looked like he was on track to win the highest support from Black voters of any Republican presidential candidate in history.

    Particularly up for grabs are moderate suburbanites, such as those on Philadelphia’s “Main Line” (an area of well-off suburbs) and in upscale outskirts of the state capital of Harrisburg, who tend to be more liberal on social issues and conservative on economic issues.

    Democrats have a slight edge in overall registration numbers in PA, at 44% compared to Republicans at 40% (12% of Pennsylvanians identify as independents). However, the registration advantage for Democrats is the thinnest it’s been in decades.

    Big spending and big issues

    As 2024’s biggest electoral prize, no state has been bombarded with more cash and attention than PA. Harris and Trump have criss-crossed the state for months at locations such as the Pennsylvania Farm Show Complex (a huge agricultural showground) and at union rallies.

    Harris and her allies have spent US$21.2 million (£16.9 million) on political ads in Pennsylvania (that’s three times what they’ve spent in Georgia, twice what they’ve spent in Michigan and 18 times what they’ve spent in North Carolina). To match, Trump and his allies have doled out $20.9 million in PA (twice what they’ve spent in Georgia, three times what they’ve spent in Michigan and eight times what they’ve spent in North Carolina).

    Dollars have funnelled into negative ads galore on the many issues that Americans more broadly face, including inflation and the cost of living crisis, crime, abortion and immigration. The war in Ukraine has featured as an especially central issue for Pennsylvania’s large Polish community in an attempt by the Democrats to harness historic fears about Russia.

    No topic, however, has sparked more controversy than fracking, the process of extracting oil and gas from underground rock. PA has become a national leader in fracking, triggering outrage among environmentalists, even as advocates tout the industry as an enormous wealth and job creator for the state.

    Harris, who declared as a Democratic presidential primary candidate in 2019 that: “There’s no question I’m in favor of banning fracking,” now says “let me be absolutely clear, as I’ve been when I said it back in 2020, I will not ban fracking”. Trump has unequivocally championed fracking as part of his “drill, baby, drill” message on lowering prices and creating domestic energy independence.

    What’s in store

    If Pennsylvania’s presidential race is anywhere near as tight as the polls suggest, a winner might not be announced in Pennsylvania, or the country, on election night. With the counting of absentee and overseas ballots (and the possibility of a recount), the process could drag on for days, if not weeks.

    That’s one reason why both sides are already “lawyered-up” in anticipation of litigious combat. In 2020, the US Supreme Court declined to intervene in a case in Pennsylvania that tested rules surrounding the timing of when mail-in votes could still be counted. However, other aspects of electoral protocols or the integrity of ballots could again be challenged.

    Already in 2024, Pennsylvania has been politically consequential. The first assassination attempt against Trump occurred in the tiny town of Butler, PA. Harris’s decision to snub popular state governor Josh Shapiro as her running mate also raised concerns, and could lead to considerable second-guessing if she loses PA and the presidency. Pennsylvania also hosted the one (and likely only) debate between Harris and Trump.

    Whether Harris or Trump ends up as president will depend on whether their political stars align. Either way, those stars revolve around Pennsylvania, the centre of the political universe.

    Thomas Gift does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The vote in Pennsylvania could decide the US election – it’s a battle for the suburbs – https://theconversation.com/the-vote-in-pennsylvania-could-decide-the-us-election-its-a-battle-for-the-suburbs-240587

    MIL OSI – Global Reports

  • MIL-OSI Global: Slow-moving sloths will struggle to adapt quickly to climate change – new study

    Source: The Conversation – UK – By Heather Ewart, Postdoctoral Researcher, Evolutionary Biology, University of Manchester

    Conservation biologist Rebecca Cliffe fits an accelerometer backpack to a wild three-fingered sloth to measure its movement. The Sloth Conservation Foundation, CC BY-NC-ND

    Sloths are more vulnerable to the rising temperatures associated with climate change than other mammals, due to their unique physiology.

    In a new study, my colleagues and I found that sloths’ ability to adapt to warming temperatures varies between the cooler, high-altitude and warmer, low-altitude forests of Costa Rica.

    Unlike most mammals, sloths do not actively regulate their body temperature. Like reptiles, they rely heavily on ambient temperature to do so. This affects all aspects of their survival, including digestion, metabolism and movement. Combined with their extremely low-calorie, relatively inflexible leaf-based diet, these traits mean sloths have much less energy at their disposal than most other mammals.

    As rising ambient temperatures push sloth body temperatures up, their metabolic rates increase. Those whose metabolic rates climb most sharply face a greater risk of lower survival when temperatures rise, compared with other sloths.

    The author, Heather Ewart, returns a wild three-fingered sloth back to its point of capture following the application of a GPS tracking collar and accelerometer.
    Heather Ewart, CC BY-NC-ND

    Together with colleagues, including the founder of UK-based Sloth Conservation Foundation Rebecca Cliffe, I found that their degree of vulnerability depends on the altitude of the forests where each sloth originates from.

    We calculated the metabolic rates of high- and low-altitude sloths across a range of temperatures using a method called respirometry. This involves (comfortably) putting a sloth in a large, closed box to measure how much oxygen it consumes at each temperature within an allotted time period.
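    As a rough illustration of the principle behind respirometry (the figures and conversion below are generic textbook values, not the study’s own data or method), oxygen consumption converts to energy use via the oxycalorific equivalent of roughly 20.1 kJ per litre of O2 consumed:

```python
# Illustrative conversion from oxygen consumption to metabolic rate.
KJ_PER_LITRE_O2 = 20.1  # approximate energy released per litre of O2 (generic value)

def metabolic_rate_watts(o2_litres, hours):
    """Mean metabolic rate in watts from O2 consumed over a period."""
    energy_kj = o2_litres * KJ_PER_LITRE_O2
    return energy_kj * 1000 / (hours * 3600)  # kJ -> J, hours -> seconds

# A hypothetical sloth consuming 0.9 litres of O2 in one hour averages
# about 5 watts over that hour.
print(f"{metabolic_rate_watts(0.9, 1):.1f} W")
```

    Repeating the measurement at different box temperatures gives a curve of metabolic rate against temperature, which is what distinguishes the highland and lowland responses described here.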

    Lowland sloths were able to slow their metabolic rate when temperatures became too hot. This is an important survival mechanism that may benefit these populations as climate change continues.

    Highland sloths were unable to slow their metabolic rate, which increased with temperature and became critical above 32°C. Highland sloths are at another disadvantage – cooler, high-altitude forests tend to be smaller due to the slower growth rate of trees at higher elevations coupled with habitat loss. Highland sloths are therefore much less able to migrate and are more restricted than lowland sloths.

    Sloths can’t adapt their metabolism quickly so are at risk from rising temperatures.
    Rebecca Cliffe, CC BY-NC-ND

    Sloths with higher metabolic rates use more energy, so they need to eat more food to produce more energy. However, due to their extremely slow rates of food intake and digestion, sloths take much longer to process food into energy than other mammals. Essentially, sloths cannot simply eat more food to match their energy requirements or achieve “energy balance” – the state where calories consumed equals calories burnt through physical activity.

    Combined with inflexible migration options, the restricted metabolism of highland sloths makes them especially vulnerable to climate change. However, while lowland sloths appear to have more flexible metabolic responses to warming temperatures, they won’t be able to escape the effects of climate change if temperature increases are too extreme, putting their survival at risk as well.

    There is a considerable lack of data on the current status and abundance of sloths. No comprehensive, long-term population monitoring has been conducted at a scale that reflects the true challenges sloths face.

    Conserving cooler microclimates

    My team of ecologists, who have been studying sloth behaviour and abundance across Costa Rica for 15 years, are concerned about how sloths are being affected by climate change. Areas once highly populated are now devoid of sloths, driven primarily by habitat loss and fragmentation resulting from extensive destruction of rainforests.

    Costa Rica has transformed into a predominantly urban society over the past 40 years, with its urban footprint increasing by 112%. In the Talamanca province, where our team currently tracks wild sloths, urban sprawl has increased substantially with an estimated 3,000 sloths lost annually. Electrocution is one of the leading causes of admissions to wild animal sanctuaries in Costa Rica, partly because sloths use power lines to cross between fragmented forests in certain places.

    A two-fingered sloth uses power lines over a busy road to move between trees.
    Heather Ewart, CC BY-NC-ND

    Both native sloth species of Costa Rica are now listed as conservation concerns. Globally, an estimated 40% of all sloth species are threatened with extinction. Climate change poses a serious threat – and sloth conservation efforts need to take this into account. We predict that rising temperatures will have devastating consequences for sloths’ ability to maintain their energy balance and survive.

    Sloth conservation is crucial, as they play a vital role in keeping the rainforest ecosystem healthy. Sloths are herbivores (plant eaters) that help regulate plant growth and recycle nutrients. They are an integral part of the food web, hosting a diverse ecosystem of unique organisms in their fur and serving as prey for other animals, such as ocelots and jaguars.

    Protecting sloths is an incredibly complex challenge. Right now, natural habitats must be preserved and restored to support cooler microclimates. Particularly in vulnerable high-altitude regions, remaining forest fragments should be reconnected by building wildlife corridors – strips of natural habitat that connect fragmented areas and allow animals to move more easily.

    Sloth conservation is challenging.
    Katarzyna Przygodzka/Shutterstock

    Sloth conservation can only be achieved by addressing the root issue: climate change. A global, coordinated effort is required, with strict adherence to international climate accords such as the Paris agreement to limit global warming to below 1.5°C and prevent irreversible damage to rainforests.

    If climate change continues unchecked, sloths won’t be able to migrate like other species. Once their environment becomes too hot, their survival is unlikely. Sloth conservation is directly linked to the actions humanity now takes to preserve our planet.



    Don’t have time to read about climate change as much as you’d like?

    Get a weekly roundup in your inbox instead. Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. Join the 35,000+ readers who’ve subscribed so far.


    Heather Ewart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Slow-moving sloths will struggle to adapt quickly to climate change – new study – https://theconversation.com/slow-moving-sloths-will-struggle-to-adapt-quickly-to-climate-change-new-study-240052

    MIL OSI – Global Reports

  • MIL-OSI Global: Evacuating in disasters like Hurricane Milton isn’t simple – there are reasons people stay in harm’s way, and it’s not just stubbornness

    Source: The Conversation – USA – By Carson MacPherson-Krutsky, Research Associate, Natural Hazards Center, University of Colorado Boulder

    Evacuation is more difficult for people with health and mobility issues. Ted Richardson/For The Washington Post via Getty Images

    As Hurricane Milton roared ashore near Sarasota, Florida, tens of thousands of people were in evacuation shelters. Hundreds of thousands more had fled coastal regions ahead of the storm, crowding highways headed north and south as their counties issued evacuation orders.

    But not everyone left, despite dire warnings about a hurricane that had been one of the strongest on record two days earlier.

    As Milton’s rain and storm surge flooded neighborhoods late on Oct. 9, 2024, 911 calls poured in. In Tampa’s Hillsborough County, more than 500 people had to be rescued, including a dozen people trapped in a flooding home after a tree crashed through the roof at the height of the storm.

    In Plant City, 20 miles inland from Tampa, at least 35 people had been rescued by dawn, City Manager Bill McDaniel said. While the storm wasn’t as extreme as feared, McDaniel said his city had flooded in places and to levels he had never seen. Traffic signals were out. Power lines and trees were down. The sewage plant had been inundated, affecting the public water supply.

    Evacuating might seem like the obvious move when a major hurricane is bearing down on your region, but that choice is not always as easy as it may seem.

    Evacuating from a hurricane requires money, planning, the ability to leave and, importantly, a belief that evacuating is better than staying put.

    I recently examined years of research on what motivates people to leave or seek shelter during hurricanes as part of a project with the Federal Emergency Management Agency and the Natural Hazards Center. I found three main reasons that people didn’t leave.

    Evacuating can be expensive

    Evacuating requires transportation, money, a place to stay, the ability to take off work days ahead of a storm and other resources that many people do not have.

    With 1 in 9 Americans facing poverty today, many have limited evacuation options. During Hurricane Katrina in 2005, for example, many residents did not own vehicles and couldn’t reach evacuation buses. That left them stranded in the face of a deadly hurricane. Nearly 1,400 people died in the storm, many of them in flooded homes.

    When millions of people are under evacuation orders, logistical issues also arise.

    Two days ahead of landfall, Milton was a Category 5 hurricane. About 5 million people were under evacuation orders, and highways were crowded.

    Gas shortages and traffic jams can leave people stranded on highways and unable to find shelter before the storm hits. This happened during Hurricane Floyd in 1999 as 2 million Floridians tried to evacuate.

    People who experienced past evacuations or saw news video of congested highways ahead of Hurricane Milton might not leave for fear of getting stuck.

    Health, pets and being physically able to leave

    The logistics of evacuating are even more challenging for people who are disabled or in nursing homes. Additionally, people who are incarcerated may have no choice in the matter – and the justice system may have few options for moving them.

    Evacuating nursing homes, people with disabilities or prison populations is complex. Many shelters are not set up to accommodate their needs. In one example during Hurricane Floyd, a disabled person arrived at a shelter, but the hallways were too narrow for their wheelchair, so they were restricted to a cot for the duration of their stay. Moving people whose health is fragile, and doing so under stressful conditions, can also worsen health problems, leaving nursing home staff to make difficult decisions.

    At least 700 people stayed in chairs or on air mattresses at River Ridge Middle/High School in New Port Richey, Fla., during Hurricane Milton.
    AP Photo/Mike Carlson

    But failing to evacuate can also be deadly. During Hurricane Irma in 2017, seven nursing home residents died in the rising heat after their facility lost power near Fort Lauderdale, Florida. In some cases, public water systems are shut down or become contaminated. And flooding can create several health hazards, including the risk of infectious diseases.

    In a study of 291 long-term care facilities in Florida, 81% sheltered residents in place during the 2004 hurricane season because they had limited transportation options and faced issues finding places for residents to go.

    Some shelters allow small pets, but many don’t. This high school-turned-shelter in New Port Richey, Fla., had 283 registered pets.
    AP Photo/Mike Carlson

    People with pets face another difficult choice – some choose to stay at home for fear of leaving their pet behind. Studies have found that pet owners are significantly less likely to evacuate than others because of difficulties transporting pets and finding shelters that will take them. In destructive storms, it can be days to weeks before people can return home.

    Risk perception can also get in the way

    People’s perceptions of risk can also prevent them from leaving.

    A series of studies shows that women and minorities take hurricane risks more seriously than other groups and are more likely to evacuate or go to shelters. One study found that women are almost twice as likely as men to evacuate when given a mandatory evacuation order.

    If people have experienced a hurricane before that didn’t do significant damage, they may perceive the risks of a coming storm to be lower and not leave.

    Video from across Florida after Hurricane Milton shows flooding around homes, trees down and other damage. At least five people died in the storm, and more than 3 million homes lost power.

    In my review of research, I found that many people who didn’t evacuate had reservations about going to shelters and preferred to stay home or with family or friends. Shelter conditions were sometimes poor, overcrowded or lacked privacy.

    People had fears about safety and whether shelter environments could meet their needs. For example, religious minorities were not sure whether shelters would be clean, safe, have private places for religious practice, and food options consistent with faith practices. Diabetics and people with young children also had concerns about finding appropriate food in shelters.

    How to improve evacuations for the future

    There are ways leaders can reduce the barriers to evacuation and shelter use. For example:

    • Building more shelters able to withstand hurricane-force winds can create safe havens for people who lack transportation or are unable to leave their jobs in time to evacuate.

    • Arranging more shelters and transportation able to accommodate people with disabilities and those with special needs, such as nursing home residents, can help protect vulnerable populations.

    • Opening shelters to accommodate pets with their owners can also increase the likelihood that pet owners will evacuate.

    • Public education can be improved so people know their options. Clearer risk communication on how these storms are different than past ones and what people are likely to experience can also help people make informed decisions.

    • Being prepared saves lives. Many areas would benefit from better advance planning that takes into account the needs of large, diverse populations and can ensure populations have ways to evacuate to safety.

    Carson MacPherson-Krutsky works for the Natural Hazards Center (NHC) at the University of Colorado Boulder. She receives grant and contract funding for her work at NHC through the National Science Foundation, the U.S. Army Corps of Engineers, the Federal Emergency Management Agency, and other funders.

    ref. Evacuating in disasters like Hurricane Milton isn’t simple – there are reasons people stay in harm’s way, and it’s not just stubbornness – https://theconversation.com/evacuating-in-disasters-like-hurricane-milton-isnt-simple-there-are-reasons-people-stay-in-harms-way-and-its-not-just-stubbornness-240869

    MIL OSI – Global Reports

  • MIL-OSI Global: US inflation rate fell to 2.4% in September − here’s what that means for interest rates and markets

    Source: The Conversation – USA – By Jason Reed, Associate Teaching Professor of Finance, University of Notre Dame

    All eyes on the CPI. Sila Damrongsaringkan/Getty Images Plus

    It wasn’t that long ago that the Federal Reserve, the central bank for the United States, was worrying that annual inflation would surpass 9% in the middle of 2022. The U.S. economy hadn’t seen prices rise that fast since the 1980s, and most everyone feared that a series of interest rate hikes would plunge the economy into a recession.

    What a difference two years can make.

    Inflation cooled to 2.4% in September 2024, according to consumer price index data released by the Labor Department on Oct. 10. That’s down from 2.5% the previous month and in line with market expectations of 2.3% to 2.4%. The inflation rate peaked at 8.9% in June 2022 – a 41-year high.
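    The year-over-year rate quoted here is computed directly from consumer price index levels. A minimal sketch of that calculation, using hypothetical index values rather than official BLS figures:

    ```python
    # Year-over-year inflation from a price index.
    # The index levels below are illustrative, not official BLS figures.
    cpi_current = 315.2    # hypothetical CPI level for the latest month
    cpi_year_ago = 307.8   # hypothetical CPI level 12 months earlier

    # Percentage change over the trailing 12 months
    inflation_rate = (cpi_current / cpi_year_ago - 1) * 100
    print(round(inflation_rate, 1))  # prints 2.4 for these illustrative levels
    ```

    The same formula applied to each month explains how the rate can tick down from 2.5% to 2.4%: the current index keeps rising, but more slowly than it did a year earlier.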

    The news brings the Fed – and its chair, Jerome Powell – much closer to reaching its 2% inflation target. It also marks the fourth straight month that year-over-year price changes have been below 3% and the third consecutive month of declining inflation rates.

    Speaking as an economist and finance professor, I think this could be a big deal for the Federal Reserve, which next meets – and could again cut interest rates – in November.

    Fodder for another rate cut?

    The Fed has what’s called a dual mandate: It pursues both low inflation and stable employment, two goals that can sometimes be at odds. Cutting interest rates can help employment but worsen inflation, while hiking them can do the opposite.

    Since inflation started to take off during the COVID-19 pandemic, Fed officials have emphasized that their job isn’t done until price increases are back down to the 2% target.

    But in light of recent labor market news, Powell and his colleagues have shifted their messaging, signaling that they now see the upside risks of inflation as smaller than the risks associated with a weakening labor market.

    And in September, the Fed slashed the federal funds rate by 0.5 percentage point, or 50 basis points – the first cut since it began hiking rates in March 2022. The move came as unemployment had ticked up to 4.3% in July, job openings plummeted and broader labor markets weakened.

    Increasingly optimistic markets

    Equity markets rallied on the news of the September rate cut. Investors believe reductions in the federal funds rate – a benchmark rate that influences mortgage rates, auto loans, credit card rates and home equity lines of credit – will spur increases in investment and consumption, guiding the economy to a so-called soft landing instead of a recession.

    After that meeting, most members of the Federal Reserve Board indicated they would also favor cutting rates by 25 basis points at each of their upcoming November and December meetings.

    Between today’s inflation news and the unexpectedly sunny jobs report on Oct. 4, investors and markets have a lot of news to digest as they consider what path interest rates will take in the months ahead. Many continue to believe that we may well see two 25-basis-point cuts by the end of 2024 – and so do I.

    Jason Reed does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. US inflation rate fell to 2.4% in September − here’s what that means for interest rates and markets – https://theconversation.com/us-inflation-rate-fell-to-2-4-in-september-heres-what-that-means-for-interest-rates-and-markets-240872

    MIL OSI – Global Reports

  • MIL-OSI Global: Humanity’s future depends on our ability to live in harmony with nature

    Source: The Conversation – Canada – By Liette Vasseur, Professor, Biological Sciences, Brock University

    The world is facing multiple — potentially catastrophic — crises, including inequality, poverty, food insecurity, climate change and biodiversity loss. These issues are interconnected and require systemic solutions, as changes in one system affect others.

    However, human systems have largely failed to acknowledge their connection to ecological systems. Most modern societies have dominating and exploitative relationships with nature, which are underpinned by imperialist and dualistic thinking that divides living beings into racial, gender, class or species hierarchies.

    Our current mindset, with its focus on competition, growth and profit, has been a key contributor to social and ecological crises. Even more alarming is that this mindset has depleted nature to the point that it may soon fail to sustain human and non-human lives entirely.

    Sustainable and equitable well-being

    Policies for future survival and prosperity must address the interconnected crises affecting the world today. These challenges are pushing social and economic systems beyond their sustainable limits.

    While current sustainability efforts, such as those outlined in Earth for All: A Survival Guide for Humanity — a collaboration between scientists and economists from around the world — and the United Nations’ Pact for the Future offer pathways for action, they often fall short. These initiatives, though well-intentioned, remain rooted in a business-as-usual approach.




    Read more:
    Have we reached the end of nature? Our relationship with the environment is in crisis


    This isn’t enough. What’s needed is a transformative shift in how we interact with the natural world. A reciprocal relationship between humans and nature, where humans give back to the environment as much as we take, is essential. Sustainable and equitable well-being must be placed at the centre of human societies.

    Central to this transformation is the need to ensure good lives for all while staying within the Earth’s planetary boundaries. These boundaries are the limits within which humanity can safely operate without causing irreversible environmental harm. This will require a new economic mindset that enables people to live with nature, instead of destroying it.

    Change is daunting, but possible

    Though the scale of change needed may seem daunting, it’s achievable and already in motion in some places. In many communities around the world, like Puget Sound on the northwestern coast of Washington state, people are already living in ways that allow humans and ecosystems to flourish.

    In other regions, like Ecuador and Sumas First Nation, new possibilities are emerging for building human societies that operate within the planetary boundaries. Humans are exceptionally adaptable and have the advantage of foresight and the ability to transform entire systems through ethical collaboration.

    Individual action is one necessary element to accelerate this shift. Change often starts small, with individuals and small groups adjusting their lives. But while personal choices do matter, individuals must also push for systemic changes in their communities, organizations, and broader society.

    To make nature-connected living more widely accessible, collaborative, equitable and intentional efforts are needed. This involves intercultural communication, collaboration and open dialogue to ensure diverse perspectives are considered in decision-making processes.

    Thoughtfully considering the direct and indirect impacts of our action, including the immediate and long-term consequences of any decisions, will create more equitable and sustainable systems.

    People looking to create meaningful change should seek to support a range of groups and organizations dedicated to environmental and social justice. This includes Indigenous leaders and treaty protocols, local authorities, environmental advocacy groups, community organizations or labour unions. A good example of this is the work being done by the UNESCO-recognized biosphere reserves.

    Alternative ways of knowing

    The problems facing the world today are vast and multifaceted, and need to be addressed in multiple ways. Both formal knowledge, like scientific research, and informal knowledge, engaged through the Two-Eyed Seeing principle, have roles to play in fostering more equitable nature-human relationships.

    Although western scientific knowledge is often centred in evidence-based discussion, many valuable solutions stem from alternative ways of knowing, such as Indigenous ecological knowledge. By welcoming and supporting diverse knowledge holders in creating solutions, we can expand the range of approaches, successes and failures from which humanity can learn.

    Creativity — the essence of adaptability — flourishes when different knowledge systems are woven together. However, this must be done ethically and involve consensual and collaborative exchanges to ensure no knowledge system is exploited or undervalued. We must be careful to avoid repeating the mistakes of imperialism and domination that have created our current planetary crises.

    In addition to rethinking how we approach knowledge, rebuilding strong, interconnected relationships between humans and nature also means rethinking our technological systems.

    Technological innovation has been used to exploit the Earth for short-term gains, but it also holds great potential for positive change. It can either maintain or disrupt the status quo, depending on how we use it.

    To build healthier relationships between people and nature, human societies need to adopt a systems thinking approach. This approach looks at the bigger picture, considering the ecological, cultural, political and social aspects of technology in an integrated manner. It ensures that innovation is guided by principles of sustainability and equity.

    What the future holds

    The future will bring massive changes to Earth’s natural environments, accompanied by shocks to political, economic and social systems. The survival of human and non-human beings depends on our ability to plan for these challenges.

    Climate change, biodiversity loss and resource depletion are not isolated problems — they are part of an interconnected web of crises that demand urgent and comprehensive action.

    Incremental approaches are not enough to address the scale of these looming threats. Purposefully co-ordinated actions are needed to shift the current trajectory away from exploitation to one of mutual benefit for humans and the natural world.

    What is needed is radical transformation aimed at creating just and flourishing relationships between nature and humanity for the benefit of all current and future life on Earth.

    Christie Manning, Associate Professor of Environmental Studies at Macalester College; Jacqueline Corbett, Professor of Information Systems, Université Laval; and Simone Bignall, Senior Researcher at the University of Technology Sydney, co-authored this article.

    Liette Vasseur receives funding from New Frontiers Research Program Exploration program in Canada.

    Anders Hayden and Mike Jones do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Humanity’s future depends on our ability to live in harmony with nature – https://theconversation.com/humanitys-future-depends-on-our-ability-to-live-in-harmony-with-nature-233042

    MIL OSI – Global Reports

  • MIL-OSI Global: ‘Cajun Navy’ volunteers who participate in search-and-rescue operations after hurricanes are forming long-lasting organizations

    Source: The Conversation – USA – By Kyle Breen, Assistant Professor of Sociology, Texas A&M International University

    Volunteers with Savage Freedoms Relief Operation coordinate aid in Swannanoa on Oct. 7, 2024, after Hurricane Helene severely damaged the North Carolina town. Allison Joyce/AFP via Getty Images

    The volunteers who take part in search-and-rescue operations and then support disaster survivors belong to organizations that have become more formal and established over the past decade. That’s what we found after spending more than four years volunteering alongside eight of these groups to better understand their role and the motivations of the people who participate in these efforts.

    We did this research as part of a larger team of sociologists, an urban planning scholar and emergency management specialists. All of us worked alongside civilian volunteer search-and-rescue groups from Louisiana and Texas between 2017 and 2022 during and after many hurricanes, including Harvey and Laura, the winter storm known as Uri and other major disasters.

    While we volunteered with these organizations, we observed them in action and interviewed their leaders and volunteers to learn why they were making the time and taking personal risks to save others. Many cited their personal values, expressed their need to belong to a group, and said it had helped them find a sense of purpose. Others shared that they were motivated by their personal circumstances and experiences or feelings of guilt, or that this kind of volunteering gave them a deep sense of satisfaction.

    “I lost everything I owned in Katrina. They deemed my family’s property uninhabitable,” said a boater we’ll call Dylan to protect his anonymity. “I can’t sit here after knowing what it is to lose everything.”

    Some volunteers said that one reason why they have repeatedly done this work is to counter stereotypes about people who engage in these efforts. When he’s heard people say, “Oh you’re just out there, doing it for the spotlight,” said Roger, he told us he wants to respond by saying, “Yeah, dude. If you flood, call me, I’ll come get you.”

    While the organizations we researched were based in Louisiana and Texas, the volunteers who participate in these efforts come from across the U.S. and, in some cases, other countries. One volunteer we met was from the United Kingdom.

    After Hurricane Helene destroyed roads in western North Carolina, rescue squads delivered aid by donkey and helicopter.

    Why it matters

    Since Hurricane Katrina struck the Gulf Coast in 2005, volunteers have been participating in search-and-rescue efforts after big disasters – especially in that region. But these volunteers come from all over.

    Many of these groups are known as “Cajun Navy” organizations. Whether or not these organizations use the Cajun Navy branding in their names, they share a common mission of helping others in emergencies.

    These volunteers aren’t just operating boats and helicopters. Others serve as dispatchers, handle logistics, and run social media operations.

    Over time, some of the organizations have begun to team up with local emergency responders, signing memorandums of understanding with them. They partner with government agencies while assisting in disaster response and relief efforts, but they primarily operate with autonomy and are able to travel where they perceive the need is greatest.

    This kind of group tends to dissolve after a disaster is over, instead of evolving into an established nonprofit.

    But many of the eight groups we studied have become nonprofits or are in the process of doing so.

    How we do our work

    We were able to do this research by becoming volunteers ourselves. We took part in dispatch operations on the ground and remotely, and we supported logistics planning. We also observed and, in some cases, participated in search-and-rescue training and operations in the water and on land.

    The Research Brief is a short take about interesting academic work.

    Kyle Breen received funding from the National Science Foundation for this research. He currently holds funding from the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research for other research projects.

    J. Carlee Purdum received funding from The National Science Foundation for this research and for other ongoing projects.

    ref. ‘Cajun Navy’ volunteers who participate in search-and-rescue operations after hurricanes are forming long-lasting organizations – https://theconversation.com/cajun-navy-volunteers-who-participate-in-search-and-rescue-operations-after-hurricanes-are-forming-long-lasting-organizations-240769

    MIL OSI – Global Reports

  • MIL-OSI Global: How the ‘social cost of carbon’ measurement can hide economic inequalities and mask climate suffering

    Source: The Conversation – Canada – By Majid Hashemi, Adjunct assistant professor, Economics Department, Queen’s University, Ontario

    The social cost of carbon (SCC) is an essential tool for climate decision-making around the world. SCC is essentially a large cost-benefit calculation that helps policymakers compare the benefits of reducing carbon dioxide (CO2) emissions to the society-wide costs of continued use.

    The “right” SCC has long been an open debate, with several studies attempting to estimate it using a range of methods. In fact, there are more than 323 studies that provide varying SCC estimates in one form or another.

    Most studies focus on the global level, working with aggregate SCC values from countries around the world. This global value, however, hides an important nuance. When one looks at individual SCC values at the country level, a clear picture emerges: poorer countries have proportionally lower SCCs than richer ones.




    Read more:
    Don’t applaud the COP28 climate summit’s loss and damage fund deal just yet – here’s what’s missing


    To put this in context, the United States Environmental Protection Agency (EPA) recommends a global social cost of carbon of US$208 per ton of CO2 for 2024 (an average of recent studies).

    The Government of Canada uses the same EPA value after adjusting for the exchange rate. When this global estimate (i.e., the aggregate damages to the entire planet) is broken down into country-specific estimates (i.e., the damages to a particular country), it reveals SCCs of less than US$1 for poor countries.

    Does this imply that poorer countries bear lower costs due to climate change impacts? Not at all; in fact, the reality is quite the opposite. Studies reveal that the damages associated with climate change are proportionally higher for lower-income countries. These damages are often hidden in SCC values in ways that reveal much about the inequalities of our modern world.

    Why is the social cost of carbon lower?

    The answer is the modelling approach.

    To estimate the social cost of carbon, a complicated model integrates multidisciplinary scientific evidence into a single framework to analyze climate change damages. These models incorporate “damage functions” that account for various pathways through which climate change impacts societies.

    Pathways include some of the things that we can measure, such as reduced agricultural productivity, increased energy expenditures for space heating and cooling, flood-related property damages and premature death due to extreme temperatures and weather events.

    Despite the comprehensive nature of these climate damage models, a critical disparity remains. The monetary value of damages is significantly smaller in poorer countries than in richer ones. Again, this does not mean the impacts are less severe; instead, it reflects the lower overall economic value of losses in these regions because of their lower overall income levels.

    One of the three studies referenced by the U.S. EPA’s guidance on SCC finds climate-change-related agriculture damages and premature deaths account for 45 per cent and 49 per cent of the total global damages, respectively. In poorer countries these percentages are likely much lower given both a comparatively undervalued agricultural sector and lower ability to pay for life-saving equipment.

    Simply put, extreme global economic inequality hides the very real losses and damages experienced by many in poorer countries. This is because the comparative wealth gap between them and richer countries results in a lower relative SCC value.

    What does this mean?

    To a national policymaker, an almost zero SCC means that climate change-related projects will likely compete neck-and-neck with basic-needs projects (e.g., addressing malnutrition). From the global perspective, this leaves poorer countries with little incentive to allocate resources to the fight against climate change. Poor countries may even see their investments in such efforts as nothing more than donations to richer countries.

    Indeed, from such a simple SCC-based perspective any CO2 emissions reduction step a poorer country takes could result in a higher SCC value in richer countries — a value which they are likely to receive very little of. What can be done to address this imbalance?




    Read more:
    How COP28 failed the world’s small islands


    One proposed solution has been to use the differences in SCC values between poorer and richer countries to inform international climate negotiations on the implied historical responsibility and liability, commonly known as the loss and damage funds.

    Additionally, international development assistance to climate adaptation funds should be more equitably aligned with SCC imbalances to ensure that richer countries — which will benefit more from emission reduction efforts — help bear the burden in supporting poorer countries’ adaptation and mitigation efforts.

    While methods for estimating SCC values have become more sophisticated in recent years, addressing the global-versus-country-specific imbalance requires a combination of financial transfers and practical co-operation between richer and poorer nations. This will help ensure that the costs and benefits of global CO2 emissions reductions are shared more equally, accounting for both ethical and economic considerations.

    Majid Hashemi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How the ‘social cost of carbon’ measurement can hide economic inequalities and mask climate suffering – https://theconversation.com/how-the-social-cost-of-carbon-measurement-can-hide-economic-inequalities-and-mask-climate-suffering-233041

    MIL OSI – Global Reports

  • MIL-OSI Global: Hurricane Milton: Flooded industrial sites and toxic chemical releases are a silent, growing threat

    Source: The Conversation – USA – By James R. Elliott, Professor of Sociology, Rice University

    An industrial storage tank overturned by Hurricane Helene in Asheville, N.C., shows the power of fast-moving floodwater. Sean Rayford/Getty Images

    Hundreds of industrial facilities with toxic pollutants are in Hurricane Milton’s path as it heads toward Florida, less than two weeks after Hurricane Helene flooded communities across the Southeast.

    Milton, expected to make landfall as a major hurricane late on Oct. 9, is bearing down on boat and spa factories along Florida’s west-central coast, along with the rubber, plastics and fiberglass manufacturers that supply them. Many of these facilities use tens of thousands of registered contaminants each year, including toluene, styrene and other chemicals known to have adverse effects on the central nervous system with prolonged exposure.

    Farther inland, hundreds more manufacturers that use and house hazardous chemicals onsite lie along the Interstate 4 and Interstate 75 corridors and their feeder roads. And many are in the path of the storm’s intense winds and heavy rainfall.

    Black dots indicate facilities in EPA’s 2022 Toxic Release Inventory within Hurricane Milton’s projected impact zone.
    Rice University Center for Coastal Futures and Adaptive Resilience, CC BY-ND

    Helene’s heavy rainfall in late September 2024 flooded industrial sites across the Southeast. A retired nuclear power plant just south of Cedar Key, Florida, was flooded by Helene’s storm surge.

    In disasters like these, the industrial damage can unfold over days, and residents may not hear about releases of toxic chemicals into water or the air until days or weeks later, if they find out at all.

    Yet pollution releases are common.

    After Hurricane Ian broadsided Florida’s western coast in 2022, runoff that included hazardous materials from damaged storage tanks and local fertilizer mining facilities, in addition to millions of gallons of wastewater, was visible from space, spilling across the coastal wetlands into the Gulf of Mexico. A year earlier, Hurricane Ida triggered more than 2,000 reported chemical spills.

    During Hurricane Harvey in 2017, floodwater surrounded chemical facilities near Houston. Some caught fire as cooling systems failed, releasing huge volumes of pollutants into the air. Emergency responders and residents, who didn’t know what risks they might face, blamed the chemicals for causing respiratory illnesses.

    Many types of toxic material can spread, settle and change the long-term health and environmental safety of surrounding communities – often with little notice to residents. Our team of environmental sociologists and anthropologists has mapped hazardous industrial sites across the country and paired them with hurricanes’ projected impact maps to help communities hold nearby facilities accountable.

    Major polluters on Gulf Coast at high risk

    The risks from industrial facilities are most obvious along the U.S. Gulf Coast, where many major petrochemical complexes are clustered in harm’s way. These refineries, factories and storage facilities are often built along rivers or bays for easy shipping access.

    But those rivers can also bring storm surge flooding, which can raise water levels by several feet during hurricanes. The storm surge from Helene was over 10 feet above ground level in Florida’s Big Bend and over 6 feet in Tampa Bay. With Milton, forecasters are warning of a 10- to 15-foot storm surge at Tampa Bay.

    A boom surrounds flooded railcars to try to contain leaks at a chemical plant in Braithwaite, La., after Hurricane Isaac in 2012.
    AP Photo/David J. Phillip

    A recent study found evidence of two to three times more pollution releases during hurricanes in the Gulf of Mexico than during normal weather from 2005 to 2020.

    The effects of these pollution releases fall disproportionately on low-income communities and people of color, further exacerbating environmental health risks.

    Why residents may not hear about toxic releases

    The statistics are disconcerting, yet they get little attention. That is because hazardous releases remain largely invisible due to limited disclosure requirements and scant public information. Even emergency responders often don’t know exactly which hazardous chemicals they are facing in emergency situations.

    The U.S. Environmental Protection Agency requires major polluters to file only very general information about chemicals and on-site risks in their risk management plans. Some large-scale fuel storage facilities, such as those holding liquefied natural gas, are not even required to do that.

    These risk management plans outline “worst-case” scenarios and are supposed to be publicly accessible. But, in reality, we and others have found them difficult to access, heavily redacted and housed in federal reading rooms with limited access. The reason local officials and national scientific review panels often give for the secrecy is to protect the facilities from terrorist attack.

    Oil storage tanks and industrial facilities line the Houston Ship Channel, which is vulnerable to storm surge from Gulf of Mexico hurricanes.
    AP Photo/David J. Phillip

    Adding to this opacity is the fact that many states – including those along the Gulf – suspend restrictions on pollution releases during emergency declarations. Meanwhile, real-time incident notifications from the National Response Center – the federal government’s repository for all chemical discharges into the environment – typically lag by a week or more.

    We believe this limited public information on rising chemical threats from our changing climate should be front-page news every hurricane season. Communities should be aware of the risks of hosting vulnerable industrial infrastructure, particularly as rising global temperatures increase the risk of extreme downpours and powerful hurricanes.

    Mapping the risks nationwide to raise awareness

    To help communities understand their risks, our team at Rice University’s new Center for Coastal Futures and Adaptive Resilience investigates how industrial communities in flood-prone areas nationwide can better adapt to such threats, socially as well as technologically.

    Our interactive map shows where elevated future flood risks threaten to inundate major polluters that we identify using the EPA’s Toxic Release Inventory.

    The U.S. has several hot spots with clusters of flood-prone polluters. Houston’s Ship Channel, Chicago’s waterfront steel industries and the harbors at Los Angeles and New York/New Jersey are among the biggest.

    Three of the biggest hot spots, where large numbers of industrial facilities with toxic materials face elevated future flood risks, are in the Northeast, the northwestern Gulf Coast and the southern end of the Great Lakes.
    Rice University Center for Coastal Futures and Adaptive Resilience, CC BY-ND

    But, as Helene revealed, there can also be great concern in less obvious spots. Inland, particularly in the mountains, runoff can quickly turn normally tame rivers into fast-rising torrents. The French Broad River at Asheville, North Carolina, rose about 12 feet in 12 hours during Helene and set a new flood stage record.

    When hurricanes and tropical storms are headed for the U.S., our interactive maps show where major polluters are located in the storm’s projected cone of impact. The maps identify hazardous flood-prone facilities down to the address, anywhere in the country.

    Knowledge is the first step

    Knowing where these sites are located is only the first step. Often, it’s up to communities themselves, many of them already overexposed and historically underserved, to raise concerns and demand strategies for mitigating the health, economic and environmental risks that industrial sites at risk of flooding and other damage can pose.

    These discussions can’t wait until a disaster is on the way. By knowing where these risks may be, communities can take steps now to build a safer future.

    This article, originally published Sept. 30, has been updated with Hurricane Milton.

    James R. Elliott receives funding from the National Science Foundation and the National Renewable Energy Lab.

    Dominic Boyer receives funding from the National Science Foundation, NOAA and Texas Sea Grant.

    Phylicia Lee Brown has nothing to disclose.

    ref. Hurricane Milton: Flooded industrial sites and toxic chemical releases are a silent, growing threat – https://theconversation.com/hurricane-milton-flooded-industrial-sites-and-toxic-chemical-releases-are-a-silent-growing-threat-239977

    MIL OSI – Global Reports

  • MIL-OSI Global: What Israel and its neighbours want now as all-out war looms in the Middle East – podcast

    Source: The Conversation – UK – By Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

    The Middle East is perilously close to all-out war. In the year since the October 7 Hamas-led attacks on Israel, millions of people have been displaced from their homes in Gaza, Israel, the West Bank and now Lebanon, and tens of thousands killed.

    After Israel killed Hassan Nasrallah, leader of Iranian-backed militia Hezbollah, Iran launched a barrage of ballistic missiles against Israel on October 1. As the world waits to see how Israel will retaliate, Israel’s military continues to attack Hezbollah in southern Lebanon and in Beirut.

    In this episode of The Conversation Weekly podcast, we speak to two experts from the Middle East, Mireille Rebeiz and Amnon Aran, to get a sense of the strategic calculations being made by both Israel and its neighbours at this frightening moment for the region.

    Mireille Rebeiz is the chair of Middle East studies at Dickinson College in Pennsylvania in the US and an expert on Hezbollah. She says that since launching its manifesto in 1985, Hezbollah has always positioned itself “in opposition to the existence of the state of Israel”.

    It affirmed the dedication to the Palestinian cause. It affirmed its commitment to the Iranian revolution and the Shi’ite ideology.

    Rebeiz says Iran’s military goals are completely aligned with Hezbollah’s and traces them back to the US’s destabilisation of Iraq.

    When Iraq fell into a full chaos and war (it) allowed for Iran to meddle into Iraq and gave a big voice to the Shiite conservative voices.

    Then followed the 2011 Syrian civil war, in which Hezbollah stepped in to defend the regime of Bashar al-Assad.

    It’s a domino effect – it’s expansion from Iran to Iraq to Syria to Lebanon. And this is clearly visible in Iran’s military goals, which is ultimately the expansion of the Iranian ideology in the region. Honestly, at this point, I would say there is an attempt to hide behind the Palestinian cause to achieve that goal.

    Israel’s choices

    Amnon Aran is a professor of international relations at City St George’s, University of London, in the UK, and an expert in Israeli foreign policy. Aran says that for Israel, the past 12 months have been described as an “existential moment”, which has informed the war in the Gaza Strip and now Lebanon.

    When the question came about how to respond to this existential threat, it was very much from the prism of what I called elsewhere, a form of entrenchment, which really means that Israel only makes peace in exchange for peace. Any diplomatic arrangement has to be dependent upon and subordinate to a military advantageous balance of power towards Israel and that the Palestinians in the West Bank, and now in the Gaza Strip, would remain under Israeli occupation for the foreseeable future.

    Aran says there is fierce debate in Israel about what to do now. One side follows the line of thinking of the former Israeli prime minister, Naftali Bennett, who took to X in early October to say that: “Israel now has its greatest opportunity in 50 years to change the face of the Middle East.” This camp is arguing that with Hezbollah weakened, this is the moment to attack Iran’s nuclear facilities.

    On the other side, Aran says, are those in the military establishment arguing against attacking Iran’s nuclear facilities, favouring instead a focus on weakening Hezbollah as much as possible. This camp’s reasoning is that:

    After a year of being in a prolonged and very difficult conflict, the next question is you are actually starting a war presumably on five or six fronts, including a very vast country, 90 million people, Iran, with a very rich history, and you are actually entering into a very new phase, which could become very prolonged.

    To hear the full interviews with Mireille Rebeiz and Amnon Aran, listen to The Conversation Weekly podcast.


    This episode of The Conversation Weekly was produced by Mend Mariwany. Sound design was by Michelle Macklem, and our theme music is by Neeta Sarl. Gemma Ware is the executive producer.

    You can find us on Instagram at theconversationdotcom or via email. You can also subscribe to The Conversation’s free daily email here.

    Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.

    Amnon Aran does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment. Mireille Rebeiz is affiliated with the American Red Cross.

    ref. What Israel and its neighbours want now as all-out war looms in the Middle East – podcast – https://theconversation.com/what-israel-and-its-neighbours-want-now-as-all-out-war-looms-in-the-middle-east-podcast-240952

    MIL OSI – Global Reports

  • MIL-OSI Global: Why isometric exercises are so good for you

    Source: The Conversation – UK – By Dan Gordon, Professor of Exercise Physiology, Anglia Ruskin University

    Isometric exercises involve contracting your muscles. Odua Images/ Shutterstock

    Exercise is great for improving heart health. But the thought of hitting the gym or going for a jog might put some people off. And if you already have a heart condition, such dynamic exercises may not be safe to do.

    The good news is, you don’t necessarily need to do a vigorous workout to see heart benefits. You can even improve your heart health by holding still and trying really hard not to move.

    Isometric training, as this is called, is becoming increasingly popular as a way of reducing blood pressure and hypertension, and improving strength and muscle stability.

    Normally, to build strength and force, our muscles need to change length throughout a movement. Squats and bicep curls are good examples of exercises that cause the muscle to change length throughout the movement.

    But isometric training involves simply contracting your muscles, which generates force without needing to move your joints. The harder a muscle is contracted, the more forceful it becomes (and the more forceful a muscle is, the more powerfully we can perform a movement).

    If you add weight to an isometric exercise, it causes the muscle to contract even harder. A wall sit and a plank are examples of isometric contractions.

    Isometric exercises are associated with a high degree of “neural recruitment”, because of the need to maintain the contraction. This means these exercises are good at engaging specialised neurons in our brain and spinal cord, which play an important role in all the movements we do – both voluntary and involuntary. The greater this level of neural activation, the more muscle fibres are recruited – and the more force generated. As a result, this can lead to strength gains.

    Isometric exercises have long been of interest to strength and power athletes as a means of preparing their muscles to generate high forces by activating them. But research also shows isometric exercises are beneficial for other areas of our health – including reducing hypertension and promoting better blood flow.

    There are a couple of reasons why isometric exercises are so good for the heart.

    When a muscle is contracted, it expands in size. This compresses the blood vessels supplying the muscle, reducing blood flow and raising the blood pressure in our arteries – a mechanism known as the “pressor reflex”.

    Then, once the contraction is relaxed, a sudden surge of blood flows into the blood vessels and muscle. This influx of blood brings more oxygen and (crucially) nitric oxide into the blood vessels – causing them to widen. This in turn reduces blood pressure. Over time, this action will reduce stiffness of the arteries, which may lower blood pressure.

    Over time, isometric exercises may help lower blood pressure.
    Andrey_Popov/ Shutterstock

    When blood flow is reduced during an isometric movement, it also reduces the amount of available oxygen that cells need to function. This triggers the release of metabolites, such as hydrogen ions and lactate, which stimulate the sympathetic nervous system – which controls our “fight or flight” response. In the short term, this leads to an increase in blood pressure.

    But when an isometric exercise is done repeatedly over many weeks, there’s a reduction in sympathetic nervous system activity. This means blood pressure is lowered and there’s less strain on the cardiovascular system – which makes these exercises good for the heart.

    Isometric exercises may be even more beneficial for heart health than other types of cardiovascular exercise. A study which compared the benefits of isometric exercise versus high-intensity interval training found isometrics led to significantly greater reductions in resting blood pressure over the study period of between two and 12 weeks.

    How to use isometric exercise

    If you want to use isometric training to reduce blood pressure, it’s recommended that you hold each isometric contraction for two minutes at around 30-50% of your maximum effort. This is enough to trigger physiological improvements.

    You can start by doing this four times a day, three-to-five times per week – focusing on the same exercise. As you progress, you can start to vary the exercises you do, add weights to the exercise, or add in more than one isometric exercise.

    Some good isometric exercises to begin with include a static squat, a wall sit or a plank. Even during these small bouts of exercise, your heart rate, breathing and arterial pressure will all increase – the same responses that occur during more conventional whole-body exercises, such as cycling and running.

    The beneficial improvements in blood pressure start to manifest around 4-10 weeks after starting isometric training – though this depends on a person’s health and fitness levels when starting out.

    Isometric training appears to be a simple, low-intensity mode of exercise that offers big benefits for cardiovascular health – all while requiring little time commitment compared with other workouts.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Why isometric exercises are so good for you – https://theconversation.com/why-isometric-exercises-are-so-good-for-you-239543

    MIL OSI – Global Reports

  • MIL-OSI Global: Telegram: why the app is allowed when other social media is censored in Russia

    Source: The Conversation – UK – By Olga Logunova, Research Associate, King’s Russia Institute, King’s College London

    Telegram’s founder Pavel Durov has confirmed that the messaging app, which is widely used in Russia, has made several changes related to user privacy.

    Durov, who was arrested in France in August in connection with a range of crimes as well as refusal to communicate information or documents, has made some alterations that address user safety and user privacy.

    Telegram says the changes are expected to also reduce criminal activity on the app. But users are concerned that the changes make the app more compliant with legal requests from authorities.

    While Durov’s political and legal tussle continues in the EU, at home in Russia Telegram remains one of the most influential media platforms. It is one of the only places where both opposition and official voices coexist.

    It is particularly popular with Russians between the ages of 12 and 24, with around 85% of them using Telegram. Around 25 of its 30 most popular channels are news and politics related. Telegram is also popular for calls and messaging.

    The platform is a vital space for the independent journalism and activism that survives in Russia. Independent media outlets and commentators covering Russian affairs and using Telegram include Meduza (1.3 million subscribers), TV Rain (500,000 subscribers) and Mediazona. All are using Telegram to reach the public but are operating from outside Russia’s borders.

    Pro-government channels also attract big audiences on Telegram, often with even larger followings than the independent outlets mentioned above. The most popular Telegram channels are Ria Novosti with 3.3 million subscribers, Readovka with 2.6 million subscribers, and Solovyov Live (1.3 million subscribers), along with several others promoting pro-government lines and supporting Russia’s war in Ukraine.

    Additionally, alternative voices such as Mikhail Khodorkovsky, a former oligarch and prominent Kremlin critic, and Ekaterina Shulman, a respected political scientist and commentator, are steadily gaining audiences. Both have been labelled as foreign agents or extremists in Russia.

    Where do Russians get news?

    In the past decade, Russia’s media landscape has undergone significant censorship due to increasing state control. Radio stations have closed down and many journalists have left the country to be able to report.

    Russian media usage

    Traditional media sources, such as television, continue to have a massive audience. Television has a monthly reach of 98%, while radio has a monthly reach of 79%. (Reach is the total number of different people or households exposed, at least once, to a medium during a given period).

    Both remain significant in today’s Russia. While television remains a primary news source for many Russians, the internet is used by 84% of people daily.

    Since 2012, the state has progressively tightened control over political information. People and organisations self-censor, and there is legislation penalising social media reposts and other forms of dissent. These laws claim to address users who “discredit the armed forces” or “spread fake news”, but are actually aimed at cracking down on dissent.

    Most viewed Telegram channels in Russia during July 2024

    As of 2024, over 2,000 administrative cases and more than 273 criminal cases have been initiated under these laws. Individuals and organisations critical of the official Kremlin narrative have been fined, had their assets confiscated and been imprisoned.




    Read more:
    Ukraine recap: Putin’s nuclear sabre-rattling becomes more ominous


    Another government method used to control online discussion includes slowing down or blocking social media platforms. The state blocked major western platforms Facebook, Instagram and Twitter in March 2022, leading millions of Russian users to migrate to Telegram.

    Content creators followed en masse, transforming Telegram into a vital hub for news and political debate. Alternatives to Telegram in Russia include state-controlled domestic networks like VKontakte (VK) and Odnoklassniki, which have strong ties to figures close to the Kremlin.

    Why is Telegram allowed?

    The use of Telegram for propaganda, influencing public opinion, and promoting the positions of the state and Putin could be one of the reasons why Telegram has not faced the same restrictions as other platforms.

    Another reason for its popularity is the platform’s ease of use as a messaging app, including for state organisations. This makes it less of a direct threat to state control over public opinion, while still serving as a crucial tool for those seeking alternative sources of information.

    Its appeal to the Russian government is strengthened by the fact that Telegram is not owned by global (western) companies such as Meta, which owns WhatsApp (also popular in Russia). Additionally, issues surrounding legally questionable content, such as the near-official tolerance of digital piracy, have long been controversial in Russia.

    Telegram’s moderation policies have often been associated with a less regulated approach to content, which has contributed to its popularity in Russia. These new changes may make ordinary Russians worry more about whether what they say on the app is safe from the state’s prying eyes.

    The platform’s prominence in Russian public life is undeniable, but so too are the challenges it faces. How Telegram and its leadership navigate the coming years will have profound implications, not just for the platform, but for broader public debate in Russia.

    Durov’s arrest underscores the growing pressure on Telegram from some quarters and reflects a critical juncture for platform leaders navigating state intervention. But for Russian people looking for a space where they can exchange news and views, it remains one of the most free platforms they can still access.

    Olga Logunova does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Telegram: why the app is allowed when other social media is censored in Russia – https://theconversation.com/telegram-why-the-app-is-allowed-when-other-social-media-is-censored-in-russia-238261

    MIL OSI – Global Reports

  • MIL-OSI Global: How 19th-century French novelist Balzac mastered the multiverse long before Marvel

    Source: The Conversation – UK – By Harsh Trivedi, Associate Teacher, School of Languages and Cultures, University of Sheffield

    The multiverse has become an essential part of pop culture. The Marvel Cinematic Universe (MCU) brought this shared universe style of storytelling to global prominence with Iron Man (2008), where a post-credit scene hinted at a larger interconnected universe.

    Over time, this expanded into a cinematic multiverse, particularly with the 2016 film Doctor Strange. Films like Spider-Man: No Way Home (2021) and Doctor Strange in the Multiverse of Madness (2022) introduced audiences to parallel universes where different versions of the same character coexist. The multiverse has also been embraced by other films, like Everything Everywhere All At Once (2022), which won multiple Academy Awards, and Stree 2, which became the highest-grossing Bollywood film of all time in September 2024.

    This style of storytelling has deep literary roots. I believe the first person to master the fictional multiverse was the 19th-century French novelist, Honoré de Balzac, in his monumental work La Comédie Humaine (The Human Comedy, 1829-1847).




    Read more:
    Multiverse films take characters to increasingly dark places – as Robert Downey Jr’s Doctor Doom casting shows


    In the 1920s, German physicist Werner Heisenberg challenged Newtonian physics with his Uncertainty Principle, which holds that pairs of properties such as a particle’s position and momentum cannot both be known precisely at the same time. Later, in the 1950s, American physicist Hugh Everett proposed the Many Worlds Interpretation, suggesting that all possible outcomes of a quantum event occur, each in a separate parallel universe.

    While this theory was developed in physics, the term “multiverse” was introduced into literature by British science fiction writer Michael Moorcock. In The Eternal Champion (1970), he envisioned characters existing in parallel worlds with multiple avatars.

    Honoré de Balzac, by Louis Boulanger (1836).
    Wikimedia, CC BY-SA

    However, Balzac’s La Comédie Humaine, written over a century earlier, already contained the seeds of multiverse storytelling. Comprising nearly 100 novels and short stories, it features thousands of characters who reappear across different works, creating a shared universe that allows for complex narrative interconnections.

    Balzac’s innovation was not merely in these recurring characters, but in the thematic and conceptual unity he established across his fictional universe.

    This cohesion is built through his “typology” of characters. Balzac’s “types” are characters who embody universal traits while retaining their individual personalities – making them instantly recognisable across different stories.

    In his preface to Une Ténébreuse Affaire (An Historical Mystery, 1841), Balzac defends his use of types: “A type … is a character who summarises in himself certain characteristic traits of all those who more or less resemble him; he is the model of the genre.”

    Hungarian philosopher Georg Lukács expanded on this idea, stating that Balzac’s types represent a synthesis of the individual and the universal. These characters are universal enough to represent broader societal forces, while remaining distinct individuals within their own narratives.

    The moment Andrew Garfield’s Spider-Man saves the love interest of Tom Holland’s Spider-Man, MJ.

    This balance between the universal and individual is a cornerstone of multiverse storytelling. For instance, the climax of Spider-Man: No Way Home highlights the interplay between the universal and individual aspects of characters, as seen when three versions of Spider-Man (Tobey Maguire, Andrew Garfield, Tom Holland) from parallel universes unite. Garfield’s Spider-Man finds redemption by saving MJ (Holland’s Spider-Man’s love interest), a moment that mirrors his own tragic loss of Gwen – emphasising both their shared trauma and divergent fates.

    In much the same way, Balzac’s recurring characters evolve across La Comédie Humaine, reflecting different facets of their personalities and situations. Although not planned as a shared universe from the beginning – Balzac retrofitted earlier works to fit this framework – the coherence of his fictional world is remarkable.

    Mobilising the multiverse

    The French philosopher Alain wrote that Balzac’s literary universe can sometimes feel like a “crossroads where characters from La Comédie Humaine meet, greet each other, and pass”. This creates a sense of disjointedness, due to its lack of strict chronological order, allowing readers to enter Balzac’s universe from any of the nearly 100 novels or short stories.

    Balzac addressed these concerns in his prefaces. He engaged in a meta-discourse similar to the post-credit scenes in modern Marvel films, where future plot-lines and character arcs are hinted at.

    Balzac’s use of prefaces as a space to preempt criticism and engage with his readers anticipates the dialogue between creators and fans in the MCU. Just as Marvel balances creative vision with fan demands, Balzac used his prefaces to address concerns from his readers about the trajectories of beloved characters.

    One of many such instances occurs in the preface to Pierrette (1840), where Balzac reveals that Maxime de Trailles, a notorious bachelor who ruins many women’s lives in La Comédie Humaine, is finally getting married. Despite criticisms from readers who wanted De Trailles to meet a tragic and painful end, Balzac defends his decision, humorously remarking: “What do you want me to do? That devil Maxime is in good health.”

    Both Balzac and Marvel deal with the challenge of catering to a wide and diverse audience. The multiverse model, however, offers a solution to the limitations of a shared universe. While Balzac struggled with the impossibility of creating a completely coherent world – La Comédie Humaine was unfinished at his death – the multiverse allows modern creators to explore multiple realities and satisfy diverse audience expectations without making irreversible narrative choices.

    In 2019, Marvel faced a backlash to the film Captain Marvel from conservative fans, for casting a female actor in a lead role – and then, in 2022, another backlash for casting a Muslim Pakistani actress as Ms. Marvel. Rather than directly addressing the criticism, which could have alienated both conservative and liberal audiences, Marvel used the multiverse to cater to a wide range of expectations.

    Spider-Man: Across the Spider-Verse (2023) is a prime example. This animated film features over 600 versions of Spider-Man, from the “traditional” white Spider-Man to black, Indian and even animal versions of the character (notably Peter Porker, the Spectacular Spider-Ham). In doing so, Marvel catered to diverse global markets without committing to a single interpretation.

    Balzac’s La Comédie Humaine laid the groundwork for modern multiverse storytelling. This approach allowed him to explore different dimensions of his characters across various stories. His visionary storytelling anticipated the fluidity and complexity found in today’s shared cinematic universes, demonstrating his enduring influence on narrative structures.





    Harsh Trivedi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How 19th-century French novelist Balzac mastered the multiverse long before Marvel – https://theconversation.com/how-19th-century-french-novelist-balzac-mastered-the-multiverse-long-before-marvel-239764

    MIL OSI – Global Reports

  • MIL-OSI Global: How to recognise burnout – and what to do if you’re affected

    Source: The Conversation – UK – By Michael Koch, Reader in Human Resource Management & Organisational Behaviour, Brunel University London

    PeopleImages.com – Yuri A/Shutterstock

    Emily, a finance manager, has been working 60-hour weeks for several months to meet deadlines. She starts feeling constantly exhausted, both physically and mentally. Work that she once found engaging now seems overwhelming, and she’s easily irritated with her colleagues. Despite putting in more hours, her productivity declines. Eventually, she starts calling in sick frequently and considers quitting her job, feeling like she just can’t keep going any more.

    Emily is a victim of burnout. For 2024, World Mental Health Day is focused on workplace health, with the aim of helping people like Emily recognise when work is affecting their wellbeing, so that they can take steps to address it.

    Burnout happens when the demands of a job are high for a long time, and are not offset by sufficient mental and physical resources. In this situation, people are no longer able to recover from their demanding job. Their energy is gradually drained, resulting in a state of mental exhaustion, a cynical and negative attitude towards their work, as well as a declining performance.

    In other words, people affected by burnout are neither able nor willing to fully function in their job. Burnout can occur in any job, but is most likely in workplaces where demands are high and resources low. It is a widespread phenomenon.

    A report by the charity Mental Health UK asserts that the country is on the verge of becoming a burnt-out nation, with 91% of the working adults surveyed reporting high or extreme levels of pressure and stress at some point in the past year.

    According to the same report, 20% of workers in the UK even took time off work due to poor mental health caused by stress last year.

    You don’t have to work in a desk job to be at risk of burnout.
    ultramansk/Shutterstock

    Research has consistently shown that the primary causes of burnout are excessive and prolonged job demands. This includes, for example, high workloads, job insecurity, role ambiguity, conflict, stress or stressful events, and work pressure.

    Burnout has severe consequences, most of all for people affected by it. Burnout impacts people differently, but even mild cases – which could linger for several years – can lead to a multitude of negative health outcomes. This includes work-related anxiety and depression, increased risk of cardiovascular diseases, Type 2 diabetes, insomnia, headaches and perhaps most alarmingly, increased mortality.

    People with mild cases of burnout are also at risk of developing more severe burnout that will keep them off work sick for long periods.

    Burnout is also worrying for organisations as it has a negative impact on creativity, leads to higher employee turnover, increased absenteeism and poor job performance.

    The symptoms of burnout differ from one person to another, and sometimes people might not even fully realise they’re burnt out until they are no longer just tired but too exhausted to function.

    People who experience burnout are drained of energy and may be overwhelmed even by small tasks. They distance themselves from their work, struggle with self-doubt and develop cynical, negative attitudes towards their job or the people they work for.

    When looking for symptoms of burnout, it might help to ask yourself questions like: Do you mostly talk about your work in a negative way? Do you tend to think less about your work and do your job almost mechanically? Do you sometimes feel sickened by your work tasks? Are there days when you feel tired before you arrive at work? Do you often feel emotionally drained during your work? Do you usually feel worn out and weary after your work?

    Burnout recovery and prevention need to minimise the job demands that cause exhaustion and disengagement. For example, reducing workload and work pressure, and establishing clear boundaries between life and work, can help to reduce stressful job demands.

    Job resources can also help to mitigate the impact of job demands. This includes things like job control, having a variety of tasks, social support, performance feedback, opportunities for professional development and the quality of a worker’s relationship with their supervisor.

    When people have an abundance of these resources, the link between the demands of the job and burnout is greatly reduced because they help workers to cope better.

    Recovery is possible

    Opportunities for recovery from work-related stress are an especially important job resource in this context. Recovery means that employees have non-work time where they can relax and detach themselves from work. This may include leisure activities that allow people to simply experience pleasure without competitive pressures.

    Research has also shown that job crafting is an effective burnout intervention. Job crafting means that employees make small adjustments to both their job demands and resources. Employees can decrease their job demands by taking steps to minimise the emotionally, mentally or physically demanding job aspects or by reducing their workload.

    For example, this might involve looking for a calmer place to work. They can also increase job resources by engaging in professional development, gaining more autonomy at work and by asking others for support, feedback and advice. Over time, engaging in job crafting will lead to lower burnout.

    Organisations also need to play their part to reduce burnout. A range of intervention strategies such as stress management training, mindfulness-based approaches or policies that allow employees to disconnect from work outside of normal working hours are useful tools for combating burnout in an organisation.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. How to recognise burnout – and what to do if you’re affected – https://theconversation.com/how-to-recognise-burnout-and-what-to-do-if-youre-affected-240747

    MIL OSI – Global Reports

  • MIL-OSI Global: How a subfield of physics led to breakthroughs in AI – and from there to this year’s Nobel Prize

    Source: The Conversation – USA – By Veera Sundararaghavan, Professor of Aerospace Engineering, University of Michigan

    Neural networks have their roots in statistical mechanics. BlackJack3D/iStock via Getty Images Plus

    John J. Hopfield and Geoffrey E. Hinton received the Nobel Prize in physics on Oct. 8, 2024, for their research on machine learning algorithms and neural networks that help computers learn. Their work has been fundamental in developing neural network theories that underpin generative artificial intelligence.

    A neural network is a computational model consisting of layers of interconnected neurons. Like the neurons in your brain, these neurons process and send along a piece of information. Each neural layer receives a piece of data, processes it and passes the result to the next layer. By the end of the sequence, the network has processed and refined the data into something more useful.
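    The layered hand-off described above can be sketched in a few lines of Python. This is an illustrative toy with made-up weights, not code from the laureates' work: each layer multiplies its input by a weight matrix and applies a nonlinearity before passing the result on.

```python
import math

def layer(inputs, weights):
    """One layer: weighted sum of inputs per neuron, squashed by tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

x = [0.5, -1.0]                                # raw input data
hidden = layer(x, [[0.8, 0.2], [-0.5, 0.9]])   # first layer refines it
output = layer(hidden, [[1.0, -1.0]])          # second layer produces the result
```

    By the end, the input has passed through every layer, each one transforming it a little, exactly as the paragraph describes.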

    While it might seem surprising that Hopfield and Hinton received the physics prize for their contributions to neural networks, used in computer science, their work is deeply rooted in the principles of physics, particularly a subfield called statistical mechanics.

    As a computational materials scientist, I was excited to see this area of research recognized with the prize. Hopfield and Hinton’s work has allowed my colleagues and me to study a process called generative learning for materials sciences, a method that is behind many popular technologies like ChatGPT.

    What is statistical mechanics?

    Statistical mechanics is a branch of physics that uses statistical methods to explain the behavior of systems made up of a large number of particles.

    Instead of focusing on individual particles, researchers using statistical mechanics look at the collective behavior of many particles. Seeing how they all act together helps researchers understand the system’s large-scale macroscopic properties like temperature, pressure and magnetization.

    For example, physicist Ernst Ising developed a statistical mechanics model for magnetism in the 1920s. Ising imagined magnetism as the collective behavior of atomic spins interacting with their neighbors.

    In Ising’s model, there are higher and lower energy states for the system, and the material is more likely to exist in the lowest energy state.

    One key idea in statistical mechanics is the Boltzmann distribution, which quantifies how likely a given state is. This distribution describes the probability of a system being in a particular state – like solid, liquid or gas – based on its energy and temperature.
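    As a rough illustration, the Boltzmann distribution can be computed directly. The energies below are made up, and the Boltzmann constant is set to 1 for simplicity:

```python
import math

def boltzmann_probs(energies, temperature, k_b=1.0):
    """Probability of each state: p_i = exp(-E_i / (k_B T)) / Z."""
    weights = [math.exp(-e / (k_b * temperature)) for e in energies]
    z = sum(weights)          # partition function normalises the weights
    return [w / z for w in weights]

# Two states: a low-energy (ordered) one and a high-energy (disordered) one.
cold = boltzmann_probs([0.0, 1.0], temperature=0.1)
hot = boltzmann_probs([0.0, 1.0], temperature=10.0)
```

    At low temperature the low-energy state dominates almost completely; at high temperature the two states become nearly equally likely – the same logic behind ice melting into disordered liquid water as it warms.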

    Using the Boltzmann distribution, the Ising model predicts the phase transition of a magnet – the temperature at which the material changes from being magnetic to nonmagnetic.

    Phase changes happen at predictable temperatures. Ice melts to water at a specific temperature because the Boltzmann distribution predicts that when it gets warm, the water molecules are more likely to take on a disordered – or liquid – state.

    Statistical mechanics tells researchers about the properties of a larger system, and how individual objects in that system act collectively.

    In materials, atoms arrange themselves into specific crystal structures that use the lowest amount of energy. When it’s cold, water molecules freeze into ice crystals with low energy states.

    Similarly, in biology, proteins fold into low energy shapes, which allow them to function as specific antibodies – like a lock and key – targeting a virus.

    Neural networks and statistical mechanics

    Fundamentally, all neural networks work on a similar principle – to minimize energy. Neural networks use this principle to solve computing problems.

    For example, imagine an image made up of pixels where you can only see part of the picture. Some pixels are visible, while the rest are hidden. To determine what the image is, you consider all possible ways the hidden pixels could fit together with the visible pieces. From there, you would choose from among what statistical mechanics would say are the most likely states out of all the possible options.

    In statistical mechanics, researchers try to find the most stable physical structure of a material. Neural networks use the same principle to solve complex computing problems.
    Veera Sundararaghavan

    Hopfield and Hinton developed a theory for neural networks based on the ideas of statistical mechanics. Just as Ising before them modeled the collective interaction of atomic spins, Hopfield and Hinton imagined collective interactions of pixels to solve the photo problem with a neural network. They represented these pixels as neurons.

    Just as in statistical physics, the energy of an image refers to how likely a particular configuration of pixels is. A Hopfield network would solve this problem by finding the lowest energy arrangements of hidden pixels.
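    A minimal Hopfield network can be written in a few lines. This is a sketch for illustration, not the prizewinning code; the stored pattern and the corrupted "pixels" are made up. It stores one binary pattern with the Hebbian rule, then recovers it from a corrupted copy by repeatedly flipping neurons so the network's energy goes downhill:

```python
pattern = [1, -1, 1, -1, 1, -1, 1, -1]   # the "image" to remember
n = len(pattern)

# Hebbian weights: w_ij = x_i * x_j, with no self-connections.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

def energy(state):
    """Hopfield energy: low when the state matches the stored pattern."""
    return -0.5 * sum(W[i][j] * state[i] * state[j]
                      for i in range(n) for j in range(n))

state = list(pattern)
state[0] *= -1          # corrupt two "pixels"
state[3] *= -1

for _ in range(5):      # a few asynchronous update sweeps
    for i in range(n):
        field = sum(W[i][j] * state[j] for j in range(n))
        state[i] = 1 if field >= 0 else -1
```

    After the sweeps, the state has settled back into the stored pattern – the lowest-energy arrangement, just as the article describes.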

    However, unlike in statistical mechanics – where the energy is determined by known atomic interactions – neural networks learn these energies from data.

    Hinton popularized the development of a technique called backpropagation. This technique helps the model figure out the interaction energies between these neurons, and this algorithm underpins much of modern AI learning.
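    At its smallest scale, backpropagation is just the chain rule applied to an error. The single-neuron example below is a generic illustration with made-up numbers, not Hinton's formulation: the gradient of the squared error flows backward through the tanh nonlinearity to nudge the weight downhill.

```python
import math

x, target = 1.5, 1.0    # one input and the output we want
w = 0.2                 # initial weight
lr = 0.5                # learning rate

for _ in range(200):
    y = math.tanh(w * x)                          # forward pass
    grad = 2 * (y - target) * (1 - y ** 2) * x    # backward pass (chain rule)
    w -= lr * grad                                # gradient descent step
```

    After training, the neuron's output sits close to the target; stacking many such updates across many neurons is what lets a network learn its interaction energies from data.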

    The Boltzmann machine

    Building upon Hopfield’s work, Hinton imagined another neural network, called the Boltzmann machine. It consists of visible neurons, which we can observe, and hidden neurons, which help the network learn complex patterns.

    In a Boltzmann machine, you can determine the probability that the picture looks a certain way. To figure out this probability, you can sum up all the possible states the hidden pixels could be in. This gives you the total probability of the visible pixels being in a specific arrangement.
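    That sum over hidden states can be made concrete for a tiny machine. The couplings below are invented for illustration; the point is only the structure of the computation: the probability of a visible configuration marginalises the Boltzmann weight over every hidden configuration.

```python
import itertools
import math

# Made-up couplings between 2 visible and 2 hidden units.
W = [[0.5, -0.3],
     [0.2, 0.4]]

def energy(v, h):
    """Interaction energy between visible units v and hidden units h."""
    return -sum(W[i][j] * v[i] * h[j] for i in range(2) for j in range(2))

def marginal(v):
    """P(v) = sum over hidden states h of exp(-E(v, h)), normalised by Z."""
    states = list(itertools.product([-1, 1], repeat=2))
    z = sum(math.exp(-energy(vv, hh)) for vv in states for hh in states)
    return sum(math.exp(-energy(v, hh)) for hh in states) / z

total = sum(marginal(v) for v in itertools.product([-1, 1], repeat=2))
```

    Summing the marginals over all visible configurations gives 1, confirming this is a proper probability distribution. Real Boltzmann machines have far too many hidden units to enumerate like this, which is why training relies on sampling instead.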

    My group has worked on implementing Boltzmann machines in quantum computers for generative learning.

    In generative learning, the network learns to generate new data samples that resemble the data the researchers fed the network to train it. For example, it might generate new images of handwritten numbers after being trained on similar images. The network can generate these by sampling from the learned probability distribution.

    Generative learning underpins modern AI – it’s what allows the generation of AI art, videos and text.

    Hopfield and Hinton have significantly influenced AI research by leveraging tools from statistical physics. Their work draws parallels between how nature determines the physical states of a material and how neural networks predict the likelihood of solutions to complex computer science problems.

    Veera Sundararaghavan receives external funding for research unrelated to the content of this article.

    ref. How a subfield of physics led to breakthroughs in AI – and from there to this year’s Nobel Prize – https://theconversation.com/how-a-subfield-of-physics-led-to-breakthroughs-in-ai-and-from-there-to-this-years-nobel-prize-240871

    MIL OSI – Global Reports

  • MIL-OSI Global: Blitz of political attack ads in Pennsylvania and other swing states may be doing candidates and voters more harm than good

    Source: The Conversation – USA – By Heather LaMarre, Associate Professor of Media and Communication, Temple University

    Nearly $11 billion is projected to be spent on political advertising in the 2024 fall election season. PM Images/DigitalVision Collection via Getty Images

    For Pennsylvania residents like me, there is no escape from the record-breaking number of political attack ads disrupting our favorite shows and filling our social media feeds.

    A projected US$10.7 billion is being spent nationwide – but particularly in battleground states – on political ads this election season.

    For those who are feeling election fatigue and just want to stream in peace: Buckle in, because it’s about to get worse.

    As of late August 2024, over $1.7 billion in political ads had been reserved nationwide to run between Labor Day and Election Day. Over $400 million of that is just for presidential election ads in seven key battleground states.

    With Pennsylvania widely considered the most decisive state in the 2024 presidential election, it may be no surprise that the Keystone State has the most presidential ad reservations, totaling $137 million.

    The Philadelphia market alone is the top market in the country, with $125 million in ad reservations. Democrats are spending about 25% more than Republicans on presidential ads in Philly.

    As a political communication expert and professor of media and social influence who lives in Philadelphia, I am often asked: “Why are there so many political ads, why are they so negative, and more importantly, how do we make it stop?”

    I’ll answer the first two below. For the last, the truth is we don’t.

    A billboard in Philadelphia purchased by the Trump campaign.
    Selcuk Acar/Anadolu via Getty Images

    Voters feel exhausted, angry, stressed

    If campaigns are spending all this money on political attack ads, they must work, right? Surely they sway at least undecided voters?

    In a word: no. Research suggests deluges of negative political advertising do little to change voters’ minds.

    They can even backfire on candidates.

    When voters perceive ads as unfair or manipulative, they are less likely to vote for the candidate or party producing them. And when subjected to repeated unwanted exposure to political ads, they can experience “psychological reactance” and behave in ways opposite to what the ads intended.

    Some studies also suggest that negative ads create election stress, which can reduce voter turnout among the less politically interested.

    In a 2023 Pew Research Center survey, 65% of U.S. adults reported that they always or often feel “exhausted” when they think about U.S. politics. More than half reported that they always or often feel “angry” with U.S. politics.

    More concerning, research suggests our elections are harming voters’ mental health. This is marked by lost sleep, increased anxiety and chronic stress.

    ‘Daisy’ and the birth of ad wars

    Historically, political advertising was considered an effective tool for educating voters, building momentum and engaging the politically uninterested.

    Although the research is mixed, past studies have shown that advertising increased election turnout and influenced voter behavior.

    The infamous 1964 “Daisy” ad run by President Lyndon Johnson’s campaign shocked audiences with the potential horrors of nuclear war. While the ad never mentioned Johnson’s opponent, Arizona Sen. Barry Goldwater, it is largely credited as a turning point in presidential political advertising, ushering in an era of political attack ads.

    LBJ’s “Daisy” ad played on Americans’ Cold War fears.

    However, political ad wars have been a feature of U.S. presidential elections since the 1800s, with attack ads on TV starting in the early 1950s.

    But why the constant barrage now?

    Citizens United unleashes flood of dark money

    Political ad spending has monumentally increased over the past several election cycles, and hit the billions after the landmark 2010 Citizens United case.

    In that ruling, the Supreme Court decided that limiting spending from corporations or outside groups violated those groups’ First Amendment right to free speech. Prior to Citizens United, corporations and other groups like nonprofits and labor unions were subject to prohibitions on campaign donations. Individual campaign contribution limits, which currently stand at $3,300 per candidate per election, kept spending relatively level across the electorate.

    Following the ruling, however, the influx of corporate and outside money completely changed the campaign finance landscape.

    In 2010, political ad spending reached $3.3 billion – an 11% increase from the 2008 election that took place pre-Citizens United. A decade later, total spending on political ads soared to $9 billion in the 2020 election.

    Significant portions of this spending come from political action committees that are not bound by traditional campaign contribution limits as long as they do not donate the money directly to a candidate or coordinate with a candidate’s campaign.

    These groups, known as super PACs, can raise and spend unlimited amounts of money from undisclosed donors. While super PACs have to disclose identities of people who donate over $200 in a year, donors can use shell companies to hide their identities.

    This web of secret money, known as dark money, exceeded $1 billion in 2020.

    During the 2024 election cycle, over $2.4 billion has been raised by super PACs. This is where much of the funding for the political ad barrage that voters experience in the weeks leading up to the election comes from.

    But why are the ads so negative?

    Attack ads lose appeal

    These days, most political ads are negative, according to a 2020 Pew Research Center study.

    For example, in the weeks following President Joe Biden leaving the race, 95% of pro-Trump ads focused on attacking Vice President Kamala Harris rather than promoting policy, according to the Wesleyan Media Project, which tracks political advertising.

    Americans are a deeply divided electorate. Political violence is on the rise, misinformation floods the system, and trust in media is at an all-time low.

    Research shows that fear-based negative messaging leads to stress and anxiety, elicits more bias and entrenches attitudes.

    Knowing this, it is reasonable to ask why campaigns continue down the path of negative advertising. The answer likely rests in old beliefs.

    Prior studies have shown that people pay closer attention to negative information than to positive information. And infamous ad effects like Johnson’s easy win after the airing of the Daisy ad contribute to the commonly held belief that negative ads still win elections.

    But the media environment has changed drastically, and voters are growing resentful.

    Voters resent microtargeting

    Unlike traditional voter segmentation where an entire group of voters would receive similar messages, campaigns now use data analytics to microtarget messages for specific voters.

    Microtargeting enlists the help of social monitoring companies to identify voters’ psychometric data – their hopes, fears, likes, dislikes and so on – so that campaigns can finely tune messages to target them on social media.

    Not only are these microtargeted messages manipulative, but they can be an unwelcome disruption and invasion of privacy, especially among the politically uninterested.

    A 2020 Pew survey found that over half of voters believe tech companies should not allow political ads on social media. Three-quarters oppose campaigns using their personal data to target them with political ads.

    Some evidence suggests that political microtargeting even reduces citizens’ trust in democracy.

    After record-breaking amounts of advertising this election cycle, the latest polls remain very tight, and most are within the margin of error. The reality is that Americans are already divided and steadfast in their voting decisions, and it is difficult to change entrenched political attitudes.

    Put simply, the political ad barrage coupled with microtargeting strategies is not an effective campaign strategy that sways voters’ minds. Meanwhile, there is growing evidence that this level of negativity is harming the electorate and undermining trust in democracy.

    Heather LaMarre does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Blitz of political attack ads in Pennsylvania and other swing states may be doing candidates and voters more harm than good – https://theconversation.com/blitz-of-political-attack-ads-in-pennsylvania-and-other-swing-states-may-be-doing-candidates-and-voters-more-harm-than-good-239034

    MIL OSI – Global Reports

  • MIL-OSI Global: Misspoke: The long and winding road to becoming a political weasel word

    Source: The Conversation – USA – By Valerie M. Fridland, Professor of Linguistics, University of Nevada, Reno

    Democratic candidate Tim Walz, during the vice presidential debate in which he said he ‘misspoke’ about being in Hong Kong during Tiananmen Square protests. Chip Somodevilla/Getty Images

    During the Sept. 24, 2024, debate, Democratic vice presidential hopeful Tim Walz said he “misspoke” when asked to clarify his story of being in Hong Kong during the Tiananmen Square crackdown in June 1989.

    To many, Walz’s use of the word misspoke came across as an attempt to weasel out of what was at best an embellishment and at worst an outright lie.

    The word misspoke has certainly long been used to politically backpedal after verbal inaccuracies or blunders, as Ronald Reagan learned in 1981 after he said that Syrian surface-to-air missiles placed in Lebanon were “offensive weapons,” when they were in fact defensive weapons. Both Presidents Bill Clinton and the much “misunderestimated” George W. Bush likewise were deemed to have misspoken after making mistakes, big and small.

    For instance, a spokesperson for Clinton claimed he had misspoken when the then-president said that North Korea would not be allowed to develop a nuclear bomb – after there was reason to believe it had already done so. During George W. Bush’s term in office, verbal errors were so common they earned a nickname of their own: “Bushisms.”

    But misspoke’s extension to factual fabrication is one step further down the semantic road. In using it in this way, Walz joined other “misspoken” politicians, such as Hillary Clinton, who used it after falsely recollecting having landed in Bosnia under sniper fire.

    As a sociolinguist who writes about how language changes over time, I believe misspoke’s euphemistic recasting of lying as an inadvertent mistake calls for deeper linguistic scrutiny.

    Tim Walz, being pressed on a statement he made and whether it was true, during the vice presidential debate.

    From mumble to mea culpa

    To understand how and why words morph like this, linguists like to trace them to their very beginnings.

    According to the Oxford English Dictionary, “misspeaking” is quite old in the history of English, appearing as “missprecon” in a Northumbrian text dating before the 11th century. Its original sense was one of “to grumble” or “to mumble,” a meaning now obsolete.

    But after the 11th century, its meaning shifted from inarticulateness to that of speaking amiss or disparagingly, often mentioned in reference to saying something improper or upsetting. Chaucer makes use of this sense in the “Miller’s Tale”: “And therfore if that I mysspeke or seye, Wyte it the ale of Southwerk, I you preye,” where the Miller handily blames a bit too much ale for whatever impropriety might fall from his mouth.

    Around the time Chaucer was composing “The Canterbury Tales” in the late 14th century, the word “misspeak” branched off down yet another semantic path, taking on the meaning of “to speak incorrectly or misleadingly.” It is this sense that gave birth to the modern political mea culpa used when backtracking on a misleading prior statement, such as by Sen. John McCain after he claimed President Barack Obama was directly responsible for terrorist attacks on Americans.

    Expanding meaning

    These shifts in the meaning of a word over time fall under what linguists refer to as “semantic broadening.” Semantic broadening, which means expansion of a word’s meaning, is incredibly common, generally occurring when a word becomes used more frequently and across more situations. As a result, its core sense can expand to take on supplemental or tangential meanings.

    Semantic shift like this is constantly at work, pushing and pulling senses in related but new directions to stay relevant to the needs of speakers.

    The word “soon,” for instance, at first carried a meaning of “immediately,” but human nature being what it is, its meaning began to creep in the direction of “as immediately as possible” as people took their merry time.

    Some new meanings, such as the nonliteral use of “literally” and Walz’s use of “misspeak,” are sites of contest, with multiple meanings at play.

    The semantic broadening of misspeaking to cover not just misleading but knowingly false information didn’t start with Walz, nor did it begin with Clinton. In fact, this politically expedient expansion seems to go back at least to the Nixon administration.

    There’s been a lot of misspeaking by politicians over the years, as these stories show.
    The Guardian US; The Hill; Wall Street Journal; Politico; Washington Post.

    ‘I misspoke myself’

    In 1973, Nixon and his advisers were called to task in a Time article accusing them of a tendency to “make flat statements one day, and the next day reverse field with the simple phrase, ‘I misspoke myself.’” Given the Watergate scandal, it’s safe to say that misspoke as used by his administration had already shifted into deceptive speech territory.

    Perhaps misspeaking’s semantic slippery slope started even further back, when the prefix “mis,” with its sense of “badly,” combined with “speaking.”

    Consider other potentially weaselly words that are also formed by “mis” prefixation: misunderstood, misinterpret, mishear, mistake. These are all examples of words, like misspeak, that can and have been used by politicians to avoid taking responsibility for the false or “misleading” things they say.

    Even if led astray by its prefix, from a linguistic perspective, the broadening of misspeak to cover not just incorrect but fabricated statements turns out to be not such a surprising development given the tendency of words to take on new senses over time, particularly in the world of political doublespeak.

    The bigger surprise might be how this new meaning translates with voters, but that’s one surprise that will have to wait for the ballot box.

    Valerie M. Fridland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Misspoke: The long and winding road to becoming a political weasel word – https://theconversation.com/misspoke-the-long-and-winding-road-to-becoming-a-political-weasel-word-240533


  • MIL-OSI Global: So you don’t like Trump or Harris – here’s why it’s still best to vote for one of them

    Source: The Conversation – USA – By Daniel F. Stone, Associate Professor of Economics, Bowdoin College

    In a close election, every vote really does matter. Nadzeya Haroshka/iStock / Getty Images Plus

    Many Americans are not thrilled with either of the two major-party candidates for president. As of Oct. 4, 2024, polls showed that 46.5% had an unfavorable opinion of Kamala Harris and 52.6% felt unfavorably toward Donald Trump.

    Some of these unhappy voters are considering voting for a third-party candidate, or not voting at all. They may be thinking of those actions as a form of protest against the two-party system dominant in the United States, or against these two particular candidates.

    For example, in a September poll 3.5% of Michigan voters said they planned to vote for a candidate other than Harris and Trump.

    At first glance, these choices might seem perfectly reasonable: If you don’t like a candidate, don’t vote for that person. But my work as a scholar of cognitive biases – systematic errors people make in their thinking – makes me fear that this option does not best serve the interests of those voters.

    Instead, protest voting is in fact likely to harm the democratic process, potentially leading to the election of the candidate the majority of voters overall, and protest voters specifically, most dislike. There are several reasons protest voters might make this mistake.

    How much does one vote matter?

    It’s clear that any one vote is very unlikely to swing the presidential election. And some might say that if one vote doesn’t really matter, then voters may as well vote however they want, or not bother to vote at all. Here’s why that’s flawed thinking:

    Suppose there are 10,000 voters in a state who feel unhappy with both candidates. But they almost surely dislike one candidate more than they dislike the other. Perhaps they disagree with some of Harris’ views but fear Trump. Or maybe it’s the other way around. They don’t have to agree on why they’re unhappy about the candidates either – some who are unhappy with Harris but prefer her over Trump may think Harris is too far left, while others may think she’s not enough of a leftist.

    Now suppose the rest of the state’s voters – those who are happy to vote for one of the two major candidates – are very narrowly split. Perhaps the gap is 5,000 votes. So, if the 10,000 unhappy voters do vote for one of the two major-party candidates, they can swing the election.

    Again, these unhappy voters really do have a preference – they like one of the major candidates better than the other. So while each individual unhappy voter wants to keep their hands clean and not vote, they would each like the other 9,999 unhappy voters to step up and swing the outcome in favor of their preferred candidate.

    Parents teach the Golden Rule to kids – do unto others as you would have them do unto you – and most people do actually believe in it and try to act accordingly. In this case, following the Golden Rule means that if you’re an unhappy voter and would like other unhappy voters to hold their noses and vote for the major candidate they least dislike, you should be willing to do the same thing yourself.

    But not all unhappy voters think this way. Some are led astray by their intuition and choose to protest-vote even when their own values would indicate they shouldn’t.

    A boycott might close a store, but it’s not going to prevent an election from delivering a winner.
    Nikolay Tsuguliev/iStock / Getty Images Plus

    A boycott error

    One reason a person might still think a protest vote makes sense is because of the assumption that boycotting something they don’t like is an effective means of contributing to positive change.

    A boycott against a person or organization you have a problem with often makes good sense. For instance, if there’s a restaurant in town with a reputation for being discriminatory, or just for being slow to get the food out, don’t go to it. Maybe it will close and make room for another business with better performance. Or maybe it will make some changes in hopes of growing its customer base.

    But when you cast a vote, whether on Election Day or beforehand, boycotting the viable candidates isn’t going to help. One of them is going to win whether you like it or not. Boycotting in this context is an example of a misapplied heuristic – a rule of thumb that’s often, but not always, helpful. Boycotting here doesn’t help you achieve your goal of eliminating or improving something you don’t like.

    Omission vs. commission

    Another reason people might choose a protest vote is because of a phenomenon in which people prefer to make mistakes of inaction – omission – over making mistakes that involve taking action – commission. People feel less guilty when they haven’t acted directly in support of a bad outcome. But both action and inaction can be errors, and both can deliver undesired results that constitute bad outcomes.

    The omission bias can help explain why some people are hesitant to get vaccinated against serious diseases: If they chose to get vaccinated and the vaccination led to a health problem, that would be a mistake of commission. Not getting vaccinated also might lead to a health problem, but that would be a mistake of omission. People tend to prefer the latter.

    Similarly, voting for a candidate you’re unsatisfied with could feel like a mistake of commission. Not voting, or voting for a third party, risks a mistake of omission – an error often assumed to be less significant. But choosing the possibility of an error of omission over one of commission doesn’t ensure you aren’t making a mistake – it just changes your mistake to one that’s intuitively more appealing.

    They are both politicians, but they are very different candidates.
    AP Photo

    False equivalence

    A final reason people might opt out of voting or choose to back a third-party candidate is that they object to the assumption that they dislike one candidate more than the other. Instead, these people claim the two main options are equally bad.

    But regardless of what your actual values and policy preferences are, that’s almost certainly untrue. The two candidates hold very different views on a wide range of issues, and have different records of what they have done – and not done – when in office.

    People who claim the two different candidates are basically the same are misusing another mental shortcut: the human tendency to think in categories. Grouping distinct items in the same category can simplify thinking, but it can ignore substantial differences.

    Some people think about 1-in-10 chances and 1-in-a-million chances as both being in the category of “possibilities.” But they’re very different: If you’re flipping a coin repeatedly, one is about equal to your chance of getting heads three times in a row, and the other is how likely you are to get heads 20 times in a row.
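The coin-flip comparison is easy to verify. A quick sketch (fair-coin arithmetic, not from the original article) showing just how far apart those two “possibilities” are:

```python
# Probability of n heads in a row with a fair coin: 0.5 ** n
p_three = 0.5 ** 3    # 0.125 - about a 1-in-10 chance (exactly 1 in 8)
p_twenty = 0.5 ** 20  # about 0.00000095 - roughly 1 in a million (1 in 1,048,576)

print(f"3 heads in a row:  {p_three} (~1 in {round(1 / p_three)})")
print(f"20 heads in a row: {p_twenty:.8f} (~1 in {round(1 / p_twenty)})")
```

The two probabilities differ by a factor of more than 100,000, which is the author’s point about lumping them into one mental category.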

    Seeking your most desired outcome

    During the 2000 presidential campaign, I recall a friend saying he wasn’t voting for Democratic candidate Al Gore because he thought Gore and Republican nominee George W. Bush were equally bad. But after winning – partly because of third-party voters who cast ballots for Green Party candidate Ralph Nader – Bush withdrew the U.S. from the Kyoto Protocol to limit global carbon emissions, invaded Iraq, and passed tax cuts favoring the wealthy.

    All of those were actions Gore would almost certainly not have taken. The two candidates were very far from being the same, and even though my friend didn’t see it beforehand, he should have been able to.

    The U.S. will have a new president on Jan. 20, 2025: Trump or Harris. A third-party winner is not a real option.

    In some states voters can rank candidates in order of preference, more clearly expressing their choices without wasting their vote on a candidate who can’t win. People who believe it would be nice to have more choices with realistic chances of winning could work to adopt that system – known as ranked-choice voting – in their communities, or seek to adopt other methods that could eventually yield more viable options in the future. But it won’t happen in time for this election.

    Whether you like it or not, you face a binary choice: Vote for one or vote for the other. And please vote.

    Daniel F. Stone does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. So you don’t like Trump or Harris – here’s why it’s still best to vote for one of them – https://theconversation.com/so-you-dont-like-trump-or-harris-heres-why-its-still-best-to-vote-for-one-of-them-240632


  • MIL-OSI Global: Though home to about 50 white extremist groups, Ohio’s social and political landscape is undergoing rapid racial change

    Source: The Conversation – USA – By Paul J. Becker, Associate Professor of Sociology, University of Dayton

    Members of the white militia group Proud Boys march on the Ohio state capitol in Columbus on Jan. 6, 2024. Paul Becker, CC BY

    The first time many Americans heard about Springfield, Ohio, came during the September 2024 presidential debate when Donald Trump falsely claimed that Haitian immigrants in the city were eating other residents’ cats and dogs.

    Though shocking, these harmful rumors had been spreading on social media since the beginning of the summer and had gained more notoriety when JD Vance, a U.S. senator from Ohio and Trump’s running mate, made similar statements on X, the social media platform formerly called Twitter.

    But what has gone mostly overlooked is the effect these racist lies have had on energizing Ohio’s nearly 50 white extremist groups.

    Members of the white supremacist group Blood Tribe marched through Springfield on Aug. 10, 2024, with swastikas on their signs.

    Since then, members of the Ku Klux Klan and the right-wing extremist group Proud Boys have each marched in separate demonstrations through Springfield.

    As scholars of extremism who live in Ohio and work at the University of Dayton, we have seen these events unfold at a time when city officials have received multiple bomb threats targeting local government offices and schools since Trump’s false and racist claims against Haitian immigrants.

    The changing landscape

    In our research, we have found that the rapidly changing social conditions in Ohio have played a significant role in the growth of extremism.

    Between 1990 and 2019, for instance, manufacturing jobs shrank from 21.7% of all employment in the state to 12.5%, a loss of nearly 360,000 jobs. As a result, income disparities between the professional and working classes have widened – as has the heightened sense among some alienated white men that white conservatives are the real victims of bias in a society growing more racially and culturally diverse.

    A neo-Nazi group speaks under heavy police protection at a 2005 rally sponsored by the National Socialist Movement at City Hall in Toledo, Ohio.
    Bill Pugliano/Getty Images

    For many of these alienated men, particularly those in rural areas that lack significant numbers of Black and Hispanic residents, extremist ideologies offer easy answers to complex questions that involve their sense of disenfranchisement.

    In 2020, for example, the population of Springfield was about 60,000. But over the past three years, city officials estimate that the population has grown by about 25%, partly fueled by the arrival of as many as 15,000 Haitian immigrants during that time. Many of them are legally living in the U.S. under a special federal program.

    Similar demographic shifts are occurring throughout the state. Between 2010 and 2022, the percentage of the white population dropped from 81.2% to 77.3%, a loss of about 250,000, putting the state’s white population at about 9.1 million. During the same time, the Hispanic population, for instance, grew from about 357,000 in 2010 to nearly 525,000.

    Some of these white extremists believe these population changes will lead to an inevitable race war between white people and nonwhite people. We have found that the attraction of belonging to a group that promises strength, protection and a source of identity can be particularly compelling.

    The Ohio connection

    In recent years, white extremism in Ohio has received attention as a result of the extremist rhetoric and often violent crimes of white men who call the state home. Consider just a few examples:

    Born and raised in Ohio, Andrew Anglin founded the Daily Stormer, a popular neo-Nazi website, in 2016.

    James Alex Fields Jr. of Maumee, Ohio, poses for a mug shot after he drove his car into a crowd of counterprotesters in Charlottesville, Va., on Aug. 12, 2017.
    Albemarle-Charlottesville Regional Jail via Getty Images

    James Alex Fields Jr., a white nationalist from the Toledo area, was sentenced to life in prison in 2019 for the murder of Heather Heyer in Charlottesville, Virginia. Fields was convicted of driving his car into a crowd of counterprotesters during the white nationalist Unite the Right Rally in August 2017.

    Prior to the attack, Fields frequently posted the hashtag #Hitlerwasright on his social media accounts and called for violence against nonwhites and Jews.

    In the summer of 2022, Ohio law enforcement officers shot and killed Ricky Shiffer after the armed Navy veteran fired a nail gun at the FBI field office in Cincinnati. On his social media accounts, Shiffer had called for violence against federal law enforcement officials after the FBI searched Donald Trump’s residence at Mar-a-Lago as part of the federal probe into Trump’s handling of classified documents.

    Tres Genco, a self-described incel – short for “involuntary celibate” – who hated women and believed he was owed sex from them, was from the Cincinnati area and pleaded guilty in 2022 to plotting a mass shooting of women at Ohio State University. Law enforcement officials in Ohio stopped the planned attack before it happened.

    On April 21, 2023, Christopher Brenner Cook, 20, of Columbus, Ohio, and others were sentenced to nearly eight years in prison for their plan to attack power grids across the U.S. Cook and his accomplices believed that they were starting a race war and used neo-Nazi propaganda and white supremacist ideology to recruit young people to join their group.

    Online recruitment tactics

    Leaders of white supremacist and militia groups often use both traditional outreach and digital platforms to recruit people to their groups. Traditional outreach includes personal conversations, attendance at events, and the sharing of books, pamphlets, flyers and posters.

    At the same time, social media has become a critical tool for extremist groups to spread their message, recruit members and organize events.

    These online platforms create echo chambers that reinforce extremist beliefs in debunked conspiracy theories, such as the assumption that the federal government is part of a plot to eliminate the white race.

    In addition to the increased traffic on social media, we have seen a rise of extremist groups in Ohio known as active clubs, where members engage in physical fitness, combat training and emotional support that encourages the development of a warrior mentality in preparation for what followers believe is an inevitable race war.

    Countering extremism in Ohio

    Though the emergence of white extremist groups goes far beyond the borders of Ohio, we have found that community-based, educational initiatives are effective in understanding and ultimately eradicating the root causes of racial and ethnic hatred on the local level.

    In our view, community engagement that emphasizes dialogue and understanding across different racial groups is crucial for demonstrating the dangers of intolerance – and the benefits of diversity.

    Paul J. Becker is part of a team at The University of Dayton that received funding from the Department of Homeland Security for the Preventing Radicalization to Extremist Violence through Education, Network-Building and Training in Southwest Ohio (PREVENTS-OH) project. Funded by the Department of Homeland Security under the Targeted Violence and Terrorism Prevention (TVTP) Grant Program, PREVENTS-OH recognizes that domestic violent extremism and hate movements pose a serious threat to the realization of human rights.

    Art Jipson is part of a team at The University of Dayton that received funding from the Department of Homeland Security for the Preventing Radicalization to Extremist Violence through Education, Network-Building and Training in Southwest Ohio (PREVENTS-OH) project. Funded by the Department of Homeland Security under the Targeted Violence and Terrorism Prevention (TVTP) Grant Program, PREVENTS-OH recognizes that domestic violent extremism and hate movements pose a serious threat to the realization of human rights.

    ref. Though home to about 50 white extremist groups, Ohio’s social and political landscape is undergoing rapid racial change – https://theconversation.com/though-home-to-about-50-white-extremist-groups-ohios-social-and-political-landscape-is-undergoing-rapid-racial-change-239997


  • MIL-OSI Global: Buyer beware: Off-brand Ozempic, Zepbound and other weight loss products carry undisclosed risks for consumers

    Source: The Conversation – USA – By C. Michael White, Distinguished Professor of Pharmacy Practice, University of Connecticut

    In just a few years, brand-name injectable drugs such as Ozempic, Wegovy, Mounjaro and Zepbound have rocketed to fame as billion-dollar annual sellers for weight loss as well as to control blood sugar levels and reduce the risk of heart disease.

    But the price of these injections is steep: They cost about US$800-$1,000 per month, and if used for weight loss alone, they are not covered by most insurance policies. These drugs mimic the naturally occurring hormone GLP-1 to help regulate blood sugar and reduce cravings. They can be taken only with a prescription.

    The Food and Drug Administration announced an official shortage of the active ingredients in these drugs in 2022, but on Oct. 2, 2024, the agency announced that the shortage has been resolved for the medicine tirzepatide, the active ingredient in Mounjaro and Zepbound.

    Despite the soaring demand and limited supply of these drugs, there are no generic versions available. This is because the patents for semaglutide – the active ingredient in Ozempic and Wegovy, which is still in shortage – and tirzepatide don’t expire until 2033 and 2036, respectively.

    As a result, nonbrand alternatives that can be purchased with or without a prescription are flooding the market. Yet these products come with real risks to consumers.

    I am a pharmacist who studies weaknesses in federal oversight of prescription and over-the-counter drugs and dietary supplements in the U.S. My research group recently has investigated loopholes that are allowing alternative weight loss products to enter the market.

    High demand is driving GLP-1 wannabes

    The dietary supplement market has sought to cash in on the GLP-1 demand with pills, teas, extracts and all manner of other products that claim to produce similar effects as the brand names at a much lower price.

    Products containing the herb berberine offer only a few pounds of weight loss, while many dietary supplement weight loss products contain stimulants such as sibutramine and laxatives such as phenolphthalein, which increase the risk of heart attacks, strokes and cancer.

    Poison control centers have seen a steep rise in calls related to off-brand weight loss medications.

    The role of compounding pharmacies

    Unlike the dietary supplements that are masquerading as GLP-1 weight loss products, compounding pharmacies can create custom versions of products that contain the same active ingredients as the real thing for patients who cannot use either brand or generic products for some reason.

    These pharmacies can also produce alternative versions of brand-name drugs when official drug shortages exist.

    Since the demand for GLP-1 medications has far outpaced the supply, compounding pharmacies are legally producing a variety of different semaglutide and tirzepatide products.

    These products may come in versions that differ from the brand-name companies, such as vials of powder that must be dissolved in liquid, or as tablets or nasal sprays.

    Just like the brand-name drugs, you must have a valid prescription to receive them. The prices range from $250-$400 a month – still a steep price for many consumers.

    Compounding pharmacies must adhere to the FDA’s sterility and quality production methods, but these rules are not as rigorous for compounding pharmacies as those for commercial manufacturers of generic drugs.

    In addition, the products compounding pharmacies create do not have to be tested in humans for safety or effectiveness like brand-name products do.

    Proper dosing can also be challenging with compounded forms of the drugs.

    Companies that work the system

    For people who cannot afford a compounding pharmacy product, or cannot get a valid prescription for semaglutide or tirzepatide, opportunistic companies are stepping in to fill the void. These include “peptide companies,” manufacturers that create non-FDA approved knockoff versions of the drugs.

    From November 2023 to March 2024, my team carried out a study to assess which of these peptide companies are selling semaglutide or tirzepatide products. We scoured the internet looking for these peptide companies and collected information about what they were selling and their sales practices.

    We found that peptide sellers use a loophole to sell these drugs. On their websites, the companies state that their drugs are for “research purposes only” or “not for human consumption,” but they do nothing to verify that the buyers are researchers or that the product is going to a research facility.

    By reading the comments sections of the company websites and the targeted ads on social media, it becomes clear that both buyers and sellers understand the charade. Unlike compounding pharmacies, these peptide sellers do not provide the supplies you need to dissolve and inject the drug, provide no instructions, and will usually not answer questions.

    Peptide sellers, since they allegedly are not selling to consumers, do not require a valid prescription and will sell consumers whatever quantity of drug they wish to purchase. Even if a person has an eating disorder such as anorexia nervosa, the companies will happily sell them a semaglutide or tirzepatide product without a prescription. The average prices of these peptide products range from $181-$203 per month.

    Skirting regulations

    Peptide sellers do not have to adhere to the rules or regulations that drug manufacturers or compounding pharmacies do. Many companies state that their products are 99% pure, but an independent investigation of three companies’ products from August 2023 to March 2024 found that the purity of the products were far less than promised.

    One product contained endotoxin – a toxic substance produced by bacteria – suggesting that it was contaminated with microbes. In addition, the products’ promised dosages were off by 29% to 39%. Poor purity can cause patients to experience fever, chills, nausea, skin irritation, infections and low blood pressure.

    In this study, some companies never even shipped the drug, telling the buyers they needed to pay an additional fee to have the product clear customs.

    If a consumer is harmed by a poor-quality product, it would be difficult to sue the seller, since the products specifically say they are “not for human consumption.” Ultimately, consumers are being led to spend money on products that may never arrive, could cause an infection, might not have the correct dose, and contain no instructions on how to safely use or store the product.

    Will prices for brand-name products come down?

    To combat these alternative sellers, pharmaceutical company Eli Lilly began offering an alternative version of its brand-name Zepbound product for weight loss in September 2024.

    Instead of its traditional injection pen products that cost more than $1,000 for a month’s supply, this product comes in vials that patients draw up and inject themselves. For patients who take 5 milligrams of Zepbound each week, the vial products would cost them $549 a month if patients buy it through the company’s online pharmacy and can show that they do not have insurance coverage for the drug.

    After a grilling on Capitol Hill in September 2024, pharmaceutical company Novo Nordisk came under intense pressure to offer patients without prescription coverage a lower-priced product for its brand-name Wegovy as well.

    In the next few years, additional brand-name GLP-1 agonist drugs will likely make it to market. As of October 2024, a handful of these products are in late-phase clinical trials, with active ingredients such as retatrutide, survodutide and ecnoglutide, and more than 18 other drug candidates are in earlier stages of development.

    When new pharmaceutical companies enter this market, they will have to offer patients lower prices than Eli Lilly and Novo Nordisk in order to gain market share. This is the most likely medium-term solution to drive down the costs of GLP-1 drugs and eliminate the drug shortages in the marketplace.

    C. Michael White does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Buyer beware: Off-brand Ozempic, Zepbound and other weight loss products carry undisclosed risks for consumers – https://theconversation.com/buyer-beware-off-brand-ozempic-zepbound-and-other-weight-loss-products-carry-undisclosed-risks-for-consumers-239480

    MIL OSI – Global Reports

  • MIL-OSI Global: Columbus who? Decolonizing the calendar in Latin America

    Source: The Conversation – USA – By Elena Jackson Albarrán, Associate Professor of History and Global and Intercultural Studies, Miami University

    Demonstrators make graffiti reading ‘Columbus Out, Long Live the People’ on a fence protecting a statue of Christopher Columbus in Mexico City on Oct. 12, 2020. Pedro Pardo/AFP via Getty Images

    This is the season of patriotism in Latin America as many countries commemorate their independence from colonial powers. From July to September, public plazas in countries from Mexico to Honduras and Chile fill with crowds dressed and painted in national colors, parades feature participants costumed as independence heroes, fireworks fill the skies, and schoolchildren reenact historical battles.

    Beneath these nationalist displays ripples an uneasy tide: the colonial legacies that still tie the Americas to their Iberian conquerors. And as the calendar turns to October, another holiday highlights similar tensions – Columbus Day.

    The U.S. has marked the holiday since 1937, commemorating the explorer’s 1492 arrival in the New World; since 1971 it has been observed on the second Monday of October. It remains a federal holiday, even as many states and cities rename it “Indigenous Peoples’ Day,” rejecting Christopher Columbus as a symbol of imperialism.

    Indigenous groups protest in front of a statue of Christopher Columbus on Oct. 12, 1997, during marches in Mexico against ‘Dia de la Raza’ celebrations.
    David Hernandez/AFP via Getty Images

    Most Latin Americans, meanwhile, know Oct. 12 as “Día de la Raza,” or Day of the Race, which also celebrates Columbus’ arrival in the New World and the tide of Iberian conquistadors that followed. But commemorating the event is all the more charged in these countries, home to the Spanish Empire’s most lucrative territorial assets and sweeping spiritual conquests. Days before taking office in September 2024, Mexican President Claudia Sheinbaum reiterated her predecessor’s demand that the king of Spain apologize for the genocide and exploitation of the conquest 500 years ago.

    As a historian of Latin America, I’ve paid attention to the ways calendars signal a nation’s “official” values and how countries wrestle with these holidays’ meanings.

    Día de la Raza

    The first encounter between Aztec emperor Montezuma and conquistador Hernando Cortés took place on Nov. 8, 1519 – the latter backed by an entourage of 300 Spaniards, thousands of Indigenous allies and slaves, and hundreds of Africans, free or otherwise.

    This moment of contact began Mexico’s 500-year transformation into a “mestizo” nation: a hybrid identity with largely European and Indigenous roots. During the colonial period, racial differences were codified into law, and those with “pure” Spanish bloodlines enjoyed legal privileges over the racially mixed categories that fell below them. The 19th century ushered in independence from Spain and liberal ideas that promoted racial equality – in principle – but in reality, European influence prevailed.

    It was Spain that first proposed the Día de la Raza, held on Oct. 12, 1892, to commemorate the 400th anniversary of Columbus’ arrival in the Americas – implying a celebration of Spain’s contributions to the mestizo racial mixture.

    The celebration was part of a bid to fortify nationalism in Spain, as the waning colonial power continued its retreat from the hemisphere it controlled for the better part of four centuries. Spain also hoped to export the invented holiday to the Americas, strengthening trans-Atlantic cultural affinities tested by the United States’ growing sway. Across the Americas, Día de la Raza came to be synonymous with celebrating European influence.

    Decorations for ‘Día de la Raza,’ in the Monserrat neighborhood of Buenos Aires in 1929.
    Archivo General de la Nación/Wikimedia Commons

    In Mexico, the 1892 commemoration empowered members of the political elite who promoted European investments and culture as the model for modernizing the country. They used the occasion to extol the civilizing influence of the “madre patria,” or motherland, justifying the conquest and colonialism as a period of benevolent rule.

    Mestizo nationalism

    Only a few years later, however, the U.S. victory in the Spanish-American War swept the last vestiges of Spanish empire from the hemisphere. Spain’s exit made way for dual – and dueling – phenomena: rising patriotic spirit in Latin American countries, and increasing economic pressure and cultural influence from the U.S.

    The 1910 Mexican Revolution ignited mestizo nationalism, which soon extended to other countries. In 1930s Nicaragua, Augusto Sandino started a revolution to oust the occupying U.S. Marines while calling for the unification of the “Indo-Hispanic Race.” Meanwhile, Peruvian intellectual José Mariátegui envisioned a modern nation built upon the ideals of a collective, reciprocal society, modeled by the Incan ayllu system. And in Mexico, beauty pageants celebrating native features gained popularity among the social classes accustomed to perusing department stores for Parisian imports.

    Yet a tendency to emphasize Spanish cultural roots rather than Indigenous ones persisted. In the late 1930s, for example, October issues of Mexican children’s magazine Palomilla celebrated Columbus’ arrival as a heroic entry that provided the region with a common language and religion.

    Pan American Day

    Meanwhile, the U.S. viewed Pan-Hispanic sentiments as a threat: Spanish economic goals, cloaked in racial and cultural solidarity.

    To help shore up hemispheric allegiances, U.S. President Herbert Hoover proclaimed a new holiday, first observed on April 14, 1930: Pan American Day, or Día de las Américas. The holiday sought to offset the narratives of both Columbus Day and Día de la Raza, and it prefigured the Good Neighbor Policy pivot toward Latin America under Franklin D. Roosevelt – a softer form of imperialism that promoted solidarity and brotherhood, at least on the surface.

    The Pan American Union, an inter-American organization headquartered in Washington, saw the new date as an opportunity to forge common traditions across the hemisphere. It vigorously promoted Pan American Day celebrations, primarily among schoolchildren, exhorting teachers to implement games, puzzles, pageants and songs created in Pan American Union offices.

    Students at Parkway Public School in New York present a pageant for Pan American Day in 1943.
    Bettmann/CORBIS/Bettmann Archive via Getty Images

    The holiday met enthusiastic reception in the United States. Midwesterners donned sombreros for parades, and Spanish language clubs in California hosted pageants celebrating the flags of American nations.

    But Latin American commemoration was tepid at best. The Organization of American States, the successor to the Pan American Union, still recognizes Pan American Day, but the holiday never gained traction in Latin America and faded in the U.S. during World War II.

    Recent shift

    Latin America’s ambivalence toward holidays commemorating the colonizers has taken a turn since 1992. The 500th anniversary of Columbus’ arrival corresponded with yet another form of colonialism, in many Latin Americans’ eyes, as a new wave of multinational corporations colluded with heads of state to tap the continent’s oil, lithium, water and avocados.

    Activists used the commemoration to call attention to lingering economic, social, racial and cultural inequities. In particular, the anniversary inspired Indigenous rights movements – some of which commemorated an “anti-quincentenary” to celebrate “500 years of resistance.”

    The Día de la Raza has since been renamed to reflect anti-colonial sentiments, similar to Columbus Day in the United States. Ecuador calls Oct. 12 the Day of Interculturalism and Ethnic Identity; Argentina celebrates it as Day of Respect for Cultural Diversity; Nicaragua now refers to it as the Day of Indigenous, Black and Popular Resistance; in Colombia it is the Day of Ethnic and Cultural Diversity; and the Dominican Republic celebrates it as Intercultural Day.

    A statue in honor of ‘women who fight’ has replaced an effigy of Christopher Columbus on Paseo de la Reforma Avenue in Mexico City.
    Pedro Pardo/AFP via Getty Images

    In some places, renaming the holiday has drawn attention to Indigenous rights and culture. Bolivians, for example, draped a statue of a European monarch in a traditional “aguayo” garment, transforming her into an Indigenous woman. However, critics suggest that removing the holiday’s reference to the colonizers erases an important reminder of the conquest and its painful legacy.

    As in the U.S., monuments to colonizers are coming down – including the monument to Columbus that occupied a conspicuous spot on La Reforma, one of Mexico City’s most-traversed thoroughfares.

    In its place is a new installation: a purple silhouette of a girl with her fist raised, in honor of Latin America’s women activists. She heralds a new era of statues lining La Reforma, and heroes for the future – not mired in the colonial legacies of the past.

    Elena Jackson Albarrán does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Columbus who? Decolonizing the calendar in Latin America – https://theconversation.com/columbus-who-decolonizing-the-calendar-in-latin-america-233307
