Category: The Conversation

  • MIL-OSI Analysis: For Jane Austen and her heroines, walking was more than a pastime – it was a form of resistance

    Source: The Conversation – UK – By Nada Saadaoui, PhD Candidate in English Literature, University of Cumbria

    In Pride and Prejudice (1813), when heroine Elizabeth Bennet arrives at Netherfield Park with “her petticoat six inches deep in mud”, she walks not only through the fields of Hertfordshire, but into one of literature’s most memorable images of women’s independence.

    Her decision to walk alone, “above her ankles in dirt”, is met with horror. “What could she mean by it?” sneers Miss Bingley. “It seems to me to show an abominable sort of conceited independence.” And yet, in that walk – unaccompanied, unfashionable, unbothered – Elizabeth reveals more about her spirit and autonomy than any parlour conversation could.

    For Austen’s heroines, independence – however “abominable” – often begins on foot. Elizabeth may be the most iconic of Austen’s pedestrians, but she is far from alone. Across Austen’s novels, women are constantly in motion: walking through country lanes, walled gardens, shrubberies, city streets and seaside resorts.

    These are not idle excursions. They are socially legible acts, shaped by class, decorum, and gender – yet often quietly resistant to them.


    This article is part of a series commemorating the 250th anniversary of Jane Austen’s birth. Despite having published only six books, she is one of the best-known authors in history. These articles explore the legacy and life of this incredible writer.


    Fanny Price, the often underestimated heroine of Mansfield Park (1814), is typically seen as timid and passive. Yet beneath her reserved exterior lies a quiet but determined spirit.

    “She takes her own independent walk whenever she can”, remarks Mrs Norris disapprovingly. “She certainly has a little spirit of secrecy, and independence, and nonsense about her.” Austen’s choice of “nonsense” here is revealing: Fanny’s desire for solitude and self-direction is not revolutionary, but it is gently subversive. In a world offering women little room for self-assertion, her steps become acts of resistance.

    When Jane Fairfax, constrained by class and circumstance in Emma (1815), declines a carriage ride, she asserts: “I would rather walk … quick walking will refresh me.” It’s a seemingly modest decision, but one layered with significance. To walk is to control your own movement, to maintain autonomy and resist the genteel suffocation of being constantly observed or helped.

    In Persuasion (1817), Anne Elliot’s story shows walking as a path to renewal. Reserved and long burdened by regret, Anne finds restoration in the coastal air of Lyme Regis. As she walks along the Cobb, Austen notes that “she was looking remarkably well … having the bloom and freshness of youth restored by the fine wind … and by the animation of eye which it had also produced”.

    Her emotional reawakening is framed as a physical one. Walking becomes not only therapeutic but transformative – a way back to herself.

    Not all of Austen’s walks are reflective or restorative. Some are decidedly social. Lydia and Kitty Bennet’s frequent walks to Meryton in Pride and Prejudice, for example, are driven as much by shopping as by the hope of romantic encounters.

Austen notes the “most convenient distance” of the village, where “their eyes were immediately wandering up in the street in quest of the officers”. The girls are more interested in uniforms than in bonnets.

    Yet even this behaviour hints at something subtler. For young, unmarried women, shopping and social errands were among the few socially sanctioned reasons to move independently through public space. These excursions offered moments of visibility, mobility, and the possibility of courtship – however frivolously pursued.

    Kitty and Lydia walk to Meryton in order to encounter the officers.

    Catherine Morland of Northanger Abbey (1817), a devoted reader of gothic fiction, fuses her walks with imagination. As she strolls along the Avon River with the Tilneys, she muses: “It always puts me in mind of the country that Emily and her father travelled through in The Mysteries of Udolpho.” Walking becomes an act of imaginative projection, where the boundaries between fiction and reality blur in the mind of a heroine learning to navigate both the world and herself.

    Jane Austen the walker

    Austen’s fiction draws much of its vitality from her own experiences. She was, by her own admission, a “desperate walker”, rarely deterred by weather, terrain or propriety.

    A watercolour of Jane Austen by her sister Cassandra, showing her looking out to sea. It was painted while they were on holiday in Lyme Regis in 1804.
    Wiki Commons

    Her letters, written from Bath, Steventon, Chawton and elsewhere, capture the physicality and pleasure of walking in vivid, often playful detail. These glimpses into her daily life reveal not only her attachment to movement but also the quiet autonomy it afforded her.

    In 1805, Austen writes from Bath: “Yesterday was a busy day with me, or at least with my feet & my stockings; I was walking almost all day long.” Several years later, in 1813, she reports with unmistakable relief: “I walked to Alton, & dirt excepted, found it delightful … before I set out we were visited by several callers, all of whom my mother was glad to see, & I very glad to escape.”

    Perhaps most revealing is an earlier letter from December 1798, in which Austen describes a rare solitary excursion: “I enjoyed the hard black frosts of last week very much, & one day while they lasted walked to Deane by myself. I do not know that I ever did such a thing in my life before.” The comment registers the novelty and boldness of a woman walking alone.

In an age when walking is once again praised for its physical and mental benefits, Austen’s fiction reminds us that these virtues are not new. Her characters have been walking for centuries – through mud, across class boundaries and against expectation.

    They walk in pursuit of clarity, connection, escape and self-hood. Their steps – measured or impulsive, solitary or social – mark turning points in their lives. And in a world designed to keep them stationary, their walking remains a radical act.

    This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from this website, The Conversation UK may earn a commission.

    Nada Saadaoui does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. For Jane Austen and her heroines, walking was more than a pastime – it was a form of resistance – https://theconversation.com/for-jane-austen-and-her-heroines-walking-was-more-than-a-pastime-it-was-a-form-of-resistance-258101

    MIL OSI Analysis

  • MIL-OSI Analysis: Believe it or not, there was a time when the US government built beautiful homes for working-class Americans to deal with a housing shortage

    Source: The Conversation – USA – By Eran Ben-Joseph, Professor of Landscape Architecture and Urban Planning, Massachusetts Institute of Technology (MIT)

    The U.S. Housing Corporation built nearly 300 homes in Bremerton, Wash., during World War I. National Archives

    In 1918, as World War I intensified overseas, the U.S. government embarked on a radical experiment: It quietly became the nation’s largest housing developer, designing and constructing more than 80 new communities across 26 states in just two years.

    These weren’t hastily erected barracks or rows of identical homes. They were thoughtfully designed neighborhoods, complete with parks, schools, shops and sewer systems.

All told, the initiative provided housing for almost 100,000 people.

    Few Americans are aware that such an ambitious and comprehensive public housing effort ever took place. Many of the homes are still standing today.

    But as an urban planning scholar, I believe that this brief historic moment – spearheaded by a shuttered agency called the United States Housing Corporation – offers a revealing lesson on what government-led planning can achieve during a time of national need.

    Government mobilization

    When the U.S. declared war against Germany in April 1917, federal authorities immediately realized that ship, vehicle and arms manufacturing would be at the heart of the war effort. To meet demand, there needed to be sufficient worker housing near shipyards, munitions plants and steel factories.

    So on May 16, 1918, Congress authorized President Woodrow Wilson to provide housing and infrastructure for industrial workers vital to national defense. By July, it had appropriated US$100 million – approximately $2.3 billion today – for the effort, with Secretary of Labor William B. Wilson tasked with overseeing it via the U.S. Housing Corporation.

    Over the course of two years, the agency designed and planned over 80 housing projects. Some developments were small, consisting of a few dozen dwellings. Others approached the size of entire new towns.

    For example, Cradock, near Norfolk, Virginia, was planned on a 310-acre site, with more than 800 detached homes developed on just 100 of those acres. In Dayton, Ohio, the agency created a 107-acre community that included 175 detached homes and a mix of over 600 semidetached homes and row houses, along with schools, shops, a community center and a park.

    Designing ideal communities

    Notably, the Housing Corporation was not simply committed to offering shelter.

    Its architects, planners and engineers aimed to create communities that were not only functional but also livable and beautiful. They drew heavily from Britain’s late-19th century Garden City movement, a planning philosophy that emphasized low-density housing, the integration of open spaces and a balance between built and natural environments.

    Milton Hill, a neighborhood designed and developed by the United States Housing Corporation in Alton, Ill.
    National Archives

    Importantly, instead of simply creating complexes of apartment units, akin to the public housing projects that most Americans associate with government-funded housing, the agency focused on the construction of single-family and small multifamily residential buildings that workers and their families could eventually own.

    This approach reflected a belief by the policymakers that property ownership could strengthen community responsibility and social stability. During the war, the federal government rented these homes to workers at regulated rates designed to be fair, while covering maintenance costs. After the war, the government began selling the homes – often to the tenants living in them – through affordable installment plans that provided a practical path to ownership.

    A single-family home in Davenport, Iowa, built by the U.S. Housing Corporation.
    National Archives

    Though the scope of the Housing Corporation’s work was national, each planned community took into account regional growth and local architectural styles. Engineers often built streets that adapted to the natural landscape. They spaced houses apart to maximize light, air and privacy, with landscaped yards. No resident lived far from greenery.

    In Quincy, Massachusetts, for example, the agency built a 22-acre neighborhood with 236 homes designed mostly in a Colonial Revival style to serve the nearby Fore River Shipyard. The development was laid out to maximize views, green space and access to the waterfront, while maintaining density through compact street and lot design.

    At Mare Island, California, developers located the housing site on a steep hillside near a naval base. Rather than flatten the land, designers worked with the slope, creating winding roads and terraced lots that preserved views and minimized erosion. The result was a 52-acre community with over 200 homes, many of which were designed in the Craftsman style. There was also a school, stores, parks and community centers.

    Infrastructure and innovation

    Alongside housing construction, the Housing Corporation invested in critical infrastructure. Engineers installed over 649,000 feet of modern sewer and water systems, ensuring that these new communities set a high standard for sanitation and public health.

    Attention to detail extended inside the homes. Architects experimented with efficient interior layouts and space-saving furnishings, including foldaway beds and built-in kitchenettes. Some of these innovations came from private companies that saw the program as a platform to demonstrate new housing technologies.

    One company, for example, designed fully furnished studio apartments with furniture that could be rotated or hidden, transforming a space from living room to bedroom to dining room throughout the day.

    To manage the large scale of this effort, the agency developed and published a set of planning and design standards − the first of their kind in the United States. These manuals covered everything from block configurations and road widths to lighting fixtures and tree-planting guidelines.

    A single-family home in Bremerton, Wash., built by the U.S. Housing Corporation.
    National Archives

    The standards emphasized functionality, aesthetics and long-term livability.

    Architects and planners who worked for the Housing Corporation carried these ideas into private practice, academia and housing initiatives. Many of the planning norms still used today, such as street hierarchies, lot setbacks and mixed-use zoning, were first tested in these wartime communities.

    And many of the planners involved in experimental New Deal community projects, such as Greenbelt, Maryland, had worked for or alongside Housing Corporation designers and planners. Their influence is apparent in the layout and design of these communities.

    A brief but lasting legacy

    With the end of World War I, the political support for federal housing initiatives quickly waned. The Housing Corporation was dissolved by Congress, and many planned projects were never completed. Others were incorporated into existing towns and cities.

    Yet, many of the neighborhoods built during this period still exist today, integrated in the fabric of the country’s cities and suburbs. Residents in places such as Aberdeen, Maryland; Bremerton, Washington; Bethlehem, Pennsylvania; Watertown, New York; and New Orleans may not even realize that many of the homes in their communities originated from a bold federal housing experiment.

    Homes on Lawn Avenue in Quincy, Mass., that were built by the U.S. Housing Corporation.
    Google Street View

The Housing Corporation’s efforts, though brief, showed that large-scale public housing could be thoughtfully designed, community oriented and quickly executed. For a short time, in response to extraordinary circumstances, the U.S. government succeeded in building more than just houses. It constructed entire communities, demonstrating that government can play a leading role in finding appropriate, innovative solutions to complex challenges.

    At a moment when the U.S. once again faces a housing crisis, the legacy of the U.S. Housing Corporation serves as a reminder that bold public action can meet urgent needs.

    This article is part of a series centered on envisioning ways to deal with the housing crisis.

    Eran Ben-Joseph does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Believe it or not, there was a time when the US government built beautiful homes for working-class Americans to deal with a housing shortage – https://theconversation.com/believe-it-or-not-there-was-a-time-when-the-us-government-built-beautiful-homes-for-working-class-americans-to-deal-with-a-housing-shortage-253512

    MIL OSI Analysis

  • MIL-OSI Analysis: What if universal rental assistance were implemented to deal with the housing crisis?

    Source: The Conversation – USA – By Alex Schwartz, Professor of Urban Policy, The New School

    Thousands of American families that can’t find affordable apartments are stuck living in extended-stay motels. Michael S. Williamson/The Washington Post via Getty Images

    If there’s one thing that U.S. politicians and activists from across the political spectrum can agree on, it’s that rents are far too high.

    Many experts believe that this crisis is fueled by a shortage of housing, caused principally by restrictive regulations.

    Rents and home prices would fall, the argument goes, if rules such as minimum lot- and house-size requirements and prohibitions against apartment complexes were relaxed. This, in turn, would make it easier to build more housing.

    As experts on housing policy, we’re concerned about housing affordability. But our research shows little connection between a shortfall of housing and rental affordability problems. Even a massive infusion of new housing would not shrink housing costs enough to solve the crisis, as rents would likely remain out of reach for many households.

    However, there are already subsidies in place that ensure that some renters in the U.S. pay no more than 30% of their income on housing costs. The most effective solution, in our view, is to make these subsidies much more widely available.

    A financial sinkhole

    Just how expensive are rents in the U.S.?

    According to the U.S. Department of Housing and Urban Development, a household that spends more than 30% of its income on housing is deemed to be cost-burdened. If it spends more than 50%, it’s considered severely burdened. In 2023, 54% of all renters spent more than 30% of their pretax income on housing. That’s up from 43% of renters in 1999. And 28% of all renters spent more than half their income on housing in 2023.
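HUD’s thresholds translate into a simple rent-to-income calculation. A minimal sketch of that classification, with an invented sample household (the function name and figures are illustrative, not HUD’s):

```python
def burden_category(monthly_income: float, monthly_rent: float) -> str:
    """Classify a renter household using HUD's cost-burden thresholds."""
    share = monthly_rent / monthly_income
    if share > 0.50:
        return "severely burdened"   # more than half of income on housing
    if share > 0.30:
        return "cost-burdened"       # more than 30% of income on housing
    return "not burdened"

# A household earning $2,500 a month and paying $1,100 in rent
# spends 44% of its income on housing:
print(burden_category(2500, 1100))  # prints "cost-burdened"
```

The same arithmetic underlies the article’s statistics: the 54% figure counts every household whose share exceeds 0.30, and the 28% figure those above 0.50.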

    Renters with low incomes are especially unlikely to afford their housing: 81% of renters making less than $30,000 spent more than 30% of their income on housing, and 60% spent more than 50%.

    Estimates of the nation’s housing shortage vary widely, reaching up to 20 million units, depending on analytic approach and the time period covered. Yet our research, which compares growth in the housing stock from 2000 to the present, finds no evidence of an overall shortage of housing units. Rather, we see a gap between the number of low-income households and the number of affordable housing units available to them; more affluent renters face no such shortage. This is true in the nation as a whole and in nearly all large and small metropolitan areas.

    Would lower rents help? Certainly. But they wouldn’t fix everything.

    We ran a simulation to test an admittedly unlikely scenario: What if rents dropped 25% across the board? We found it would reduce the number of cost-burdened renters – but not by as much as you might think.

    Even with the reduction, nearly one-third of all renters would still spend more than 30% of their income on housing. Moreover, reducing rents would help affluent renters much more than those with lower incomes – the households that face the most severe affordability challenges.

    The proportion of cost-burdened renters earning more than $75,000 would fall from 16% to 4%, while the share of similarly burdened renters earning less than $15,000 would drop from 89% to just 80%. Even with a rent rollback of 25%, the majority of renters earning less than $30,000 would remain cost-burdened.
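The distributional effect described above can be illustrated with a toy calculation. The households below are invented for illustration, not drawn from the authors’ data; the point is that low-income rent shares start so far above the 30% threshold that a uniform cut rarely pulls them under it:

```python
# Toy illustration: a uniform 25% rent cut lifts higher-income renters
# out of cost burden more readily than low-income renters, whose rent
# shares start far above the 30% threshold. Figures are invented.
households = [
    ("low income",  1200,  900),   # (label, monthly income, monthly rent)
    ("low income",  1500, 1000),
    ("mid income",  4000, 1400),
    ("high income", 7000, 2300),
]

for label, income, rent in households:
    before = rent / income
    after = (rent * 0.75) / income   # rent reduced 25% across the board
    status = "still burdened" if after > 0.30 else "no longer burdened"
    print(f"{label}: {before:.0%} -> {after:.0%} ({status})")
```

Here both low-income households remain above 30% even after the cut, while the mid- and high-income households drop below it, mirroring the asymmetry the simulation found.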

    Vouchers offer more breathing room

    Meanwhile, there’s a proven way of making housing more affordable: rental subsidies.

    In 2024, the U.S. provided what are known as “deep” housing subsidies to about 5 million households, meaning that rent payments are capped at 30% of their income.

    These subsidies take three forms: Housing Choice Vouchers that enable people to rent homes in the private market; public housing; and project-based rental assistance, in which the federal government subsidizes the rents for all or some of the units in properties under private and nonprofit ownership.

    The number of households participating in these three programs has increased by less than 2% since 2014, and they constitute only 25% of all eligible households. Households earning less than 50% of their area’s median family income are eligible for rental assistance. But unlike Social Security, Medicare or food stamps, rental assistance is not an entitlement available to all who qualify. The number of recipients is limited by the amount of funding appropriated each year by Congress, and this funding has never been sufficient to meet the need.

    By expanding rental assistance to all eligible low-income households, the government could make huge headway in solving the rental affordability crisis. The most obvious option would be to expand the existing Housing Choice Voucher program, also known as Section 8.

    The program helps pay the rent up to a specified “payment standard” determined by each local public housing authority, which can set this standard at between 80% and 120% of the HUD-designated fair market rent. To be eligible for the program, units must also satisfy HUD’s physical quality standards.
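Under the standard voucher formula, the tenant generally contributes about 30% of income and the voucher covers the gap up to the payment standard. A simplified sketch of that arithmetic (real HUD calculations use adjusted income, utility allowances and other details omitted here):

```python
def voucher_subsidy(monthly_income: float, rent: float,
                    payment_standard: float) -> float:
    """Approximate a Housing Choice Voucher subsidy.

    Simplified: the subsidy equals the lesser of the rent and the
    payment standard, minus the tenant's expected 30%-of-income
    contribution, floored at zero. Actual HUD rules use adjusted
    income and utility allowances.
    """
    tenant_share = 0.30 * monthly_income
    covered = min(rent, payment_standard)
    return max(covered - tenant_share, 0.0)

# Household earning $1,800/month, rent $1,400, payment standard $1,300:
# the voucher pays 1300 - 540 = $760; the tenant covers the remainder,
# which is why units renting above the payment standard are hard to use.
print(voucher_subsidy(1800, 1400, 1300))  # prints 760.0
```

The floor at the payment standard is also why, as noted below, many recipients cannot find a qualifying unit: any rent above that standard comes entirely out of the tenant’s pocket.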

Unfortunately, about 43% of voucher recipients are unable to use them: they cannot find an apartment that rents for less than the payment standard, meets the physical quality standards and has a landlord willing to accept vouchers.

    Renters are more likely to find housing using vouchers in cities and states where it’s illegal for landlords to discriminate against voucher holders. Programs that provide housing counseling and landlord outreach and support have also improved outcomes for voucher recipients.

    However, it might be more effective to forgo the voucher program altogether and simply give eligible households cash to cover their housing costs. The Philadelphia Housing Authority is currently testing out this approach.

    The idea is that landlords would be less likely to reject applicants receiving government support if the bureaucratic hurdles were eliminated. The downside of this approach is that it would not prevent landlords from renting out deficient units that the voucher program would normally reject.

    Homeowners get subsidies – why not renters?

    Expanding rental assistance to all eligible low-income households would be costly.

    The Urban Institute, a nonpartisan think tank, estimates it would cost about $118 billion a year.

Congress has, however, spent similar sums on housing subsidies before – in the form of tax breaks for homeowners, not low-income renters. Congress forgoes billions of dollars annually in tax revenue it would otherwise collect were it not for tax deductions, credits, exclusions and exemptions. These are known as tax expenditures. A tax not collected is equivalent to a subsidy payment.

Only about 25% of eligible households receive rental assistance from the federal government.
    Luis Sinco/Los Angeles Times via Getty Images

    For example, from 1998 through 2017 – prior to the tax changes enacted by the first Trump administration in 2017 – the federal government annually sacrificed $187 billion on average, after inflation, in revenue due to mortgage interest deductions, deductions for state and local taxes, and for the exemption of proceeds from the sale of one’s home from capital gains taxes. In fiscal year 2025, these tax expenditures totaled $95.4 billion.

    Moreover, tax expenditures on behalf of homeowners flow mostly to higher-income households. In 2024, for example, over 70% of all mortgage-interest tax deductions went to homeowners earning at least $200,000.

    Broadening the availability of rental subsidies would have other benefits. It would save federal, state and local governments billions of dollars in homeless services. Moreover, automatic provision of rental subsidies would reduce the need for additional subsidies to finance new affordable housing. Universal rental assistance, by guaranteeing sufficient rental income, would allow builders to more easily obtain loans to cover development costs.

Of course, sharply raising federal expenditures for low-income rental assistance flies in the face of the Trump administration’s priorities. Its budget proposal for the next fiscal year calls for a 44% cut – more than $27 billion – to rental assistance and public housing.

    On the other hand, if the government supported rental assistance in amounts commensurate with the tax benefits given to homeowners, it would go a long way toward resolving the rental housing affordability crisis.

    This article is part of a series centered on envisioning ways to deal with the housing crisis.

    Alex Schwartz has received funding from the Catherine and John D. MacArthur Foundation. Since 2019 he has served on New York City’s Rent Guidelines Board. He has a relative who works for The Conversation.

    Kirk McClure received funding from the U.S. Department of Housing and Urban Development and receives funding from the National Science Foundation.

    ref. What if universal rental assistance were implemented to deal with the housing crisis? – https://theconversation.com/what-if-universal-rental-assistance-were-implemented-to-deal-with-the-housing-crisis-257213

    MIL OSI Analysis

  • MIL-OSI Analysis: What if universal rental assistance were implemented to deal with the housing crisis?

    Source: The Conversation – USA – By Alex Schwartz, Professor of Urban Policy, The New School

    Thousands of American families that can’t find affordable apartments are stuck living in extended-stay motels. Michael S. Williamson/The Washington Post via Getty Images

    If there’s one thing that U.S. politicians and activists from across the political spectrum can agree on, it’s that rents are far too high.

    Many experts believe that this crisis is fueled by a shortage of housing, caused principally by restrictive regulations.

    Rents and home prices would fall, the argument goes, if rules such as minimum lot- and house-size requirements and prohibitions against apartment complexes were relaxed. This, in turn, would make it easier to build more housing.

    As experts on housing policy, we’re concerned about housing affordability. But our research shows little connection between a shortfall of housing and rental affordability problems. Even a massive infusion of new housing would not shrink housing costs enough to solve the crisis, as rents would likely remain out of reach for many households.

    However, there are already subsidies in place that ensure that some renters in the U.S. pay no more than 30% of their income on housing costs. The most effective solution, in our view, is to make these subsidies much more widely available.

    A financial sinkhole

    Just how expensive are rents in the U.S.?

    According to the U.S. Department of Housing and Urban Development, a household that spends more than 30% of its income on housing is deemed to be cost-burdened. If it spends more than 50%, it’s considered severely burdened. In 2023, 54% of all renters spent more than 30% of their pretax income on housing. That’s up from 43% of renters in 1999. And 28% of all renters spent more than half their income on housing in 2023.

    Renters with low incomes are especially unlikely to afford their housing: 81% of renters making less than $30,000 spent more than 30% of their income on housing, and 60% spent more than 50%.

    Estimates of the nation’s housing shortage vary widely, reaching up to 20 million units, depending on analytic approach and the time period covered. Yet our research, which compares growth in the housing stock from 2000 to the present, finds no evidence of an overall shortage of housing units. Rather, we see a gap between the number of low-income households and the number of affordable housing units available to them; more affluent renters face no such shortage. This is true in the nation as a whole and in nearly all large and small metropolitan areas.

    Would lower rents help? Certainly. But they wouldn’t fix everything.

    We ran a simulation to test an admittedly unlikely scenario: What if rents dropped 25% across the board? We found it would reduce the number of cost-burdened renters – but not by as much as you might think.

    Even with the reduction, nearly one-third of all renters would still spend more than 30% of their income on housing. Moreover, reducing rents would help affluent renters much more than those with lower incomes – the households that face the most severe affordability challenges.

    The proportion of cost-burdened renters earning more than $75,000 would fall from 16% to 4%, while the share of similarly burdened renters earning less than $15,000 would drop from 89% to just 80%. Even with a rent rollback of 25%, the majority of renters earning less than $30,000 would remain cost-burdened.

    Vouchers offer more breathing room

    Meanwhile, there’s a proven way of making housing more affordable: rental subsidies.

    In 2024, the U.S. provided what are known as “deep” housing subsidies to about 5 million households, meaning that rent payments are capped at 30% of their income.

    These subsidies take three forms: Housing Choice Vouchers that enable people to rent homes in the private market; public housing; and project-based rental assistance, in which the federal government subsidizes the rents for all or some of the units in properties under private and nonprofit ownership.

    The number of households participating in these three programs has increased by less than 2% since 2014, and they constitute only 25% of all eligible households. Households earning less than 50% of their area’s median family income are eligible for rental assistance. But unlike Social Security, Medicare or food stamps, rental assistance is not an entitlement available to all who qualify. The number of recipients is limited by the amount of funding appropriated each year by Congress, and this funding has never been sufficient to meet the need.

    By expanding rental assistance to all eligible low-income households, the government could make huge headway in solving the rental affordability crisis. The most obvious option would be to expand the existing Housing Choice Voucher program, also known as Section 8.

    The program helps pay the rent up to a specified “payment standard” determined by each local public housing authority, which can set this standard at between 80% and 120% of the HUD-designated fair market rent. To be eligible for the program, units must also satisfy HUD’s physical quality standards.

    Unfortunately, about 43% of voucher recipients are unable to use it. They are either unable to find an apartment that rents for less than the payment standard, meets the physical quality standard, or has a landlord willing to accept vouchers.

    Renters are more likely to find housing using vouchers in cities and states where it’s illegal for landlords to discriminate against voucher holders. Programs that provide housing counseling and landlord outreach and support have also improved outcomes for voucher recipients.

    However, it might be more effective to forgo the voucher program altogether and simply give eligible households cash to cover their housing costs. The Philadelphia Housing Authority is currently testing out this approach.

    The idea is that landlords would be less likely to reject applicants receiving government support if the bureaucratic hurdles were eliminated. The downside of this approach is that it would not prevent landlords from renting out deficient units that the voucher program would normally reject.

    Homeowners get subsidies – why not renters?

    Expanding rental assistance to all eligible low-income households would be costly.

    The Urban Institute, a nonpartisan think tank, estimates it would cost about $118 billion a year.

    However, Congress has spent similar sums on housing subsidies before – in the form of tax breaks for homeowners, not low-income renters. Congress forgoes billions of dollars annually in tax revenue it would otherwise collect were it not for tax deductions, credits, exclusions and exemptions. These are known as tax expenditures. A tax not collected is equivalent to a subsidy payment.

    Only about 25% of eligible households receive rental assistance from the federal government.
    Luis Sinco/Los Angeles Times via Getty Images

    For example, from 1998 through 2017 – prior to the tax changes enacted by the first Trump administration in 2017 – the federal government sacrificed an inflation-adjusted average of $187 billion a year in revenue to mortgage interest deductions, deductions for state and local taxes, and the exemption of home-sale proceeds from capital gains taxes. In fiscal year 2025, these tax expenditures totaled $95.4 billion.
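As a back-of-the-envelope check, the dollar figures cited in this article can be set side by side; the numbers below come from the text above and the comparison is simple arithmetic:

```python
# Figures cited in the article, in billions of dollars per year.
universal_rental_assistance = 118.0      # Urban Institute cost estimate
homeowner_tax_breaks_1998_2017 = 187.0   # inflation-adjusted annual average
homeowner_tax_breaks_fy2025 = 95.4       # after the 2017 tax changes

# Historically, homeowner tax expenditures exceeded the estimated cost
# of universal rental assistance by more than half.
ratio = homeowner_tax_breaks_1998_2017 / universal_rental_assistance
print(f"{ratio:.2f}")  # 1.58
```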

    Moreover, tax expenditures on behalf of homeowners flow mostly to higher-income households. In 2024, for example, over 70% of all mortgage-interest tax deductions went to homeowners earning at least $200,000.

    Broadening the availability of rental subsidies would have other benefits. It would save federal, state and local governments billions of dollars in homeless services. Moreover, automatic provision of rental subsidies would reduce the need for additional subsidies to finance new affordable housing. Universal rental assistance, by guaranteeing sufficient rental income, would allow builders to more easily obtain loans to cover development costs.

    Of course, sharply raising federal expenditures for low-income rental assistance flies in the face of the Trump administration’s priorities. Its budget proposal for the next fiscal year calls for a 44% cut – more than $27 billion – to rental assistance and public housing.

    On the other hand, if the government supported rental assistance in amounts commensurate with the tax benefits given to homeowners, it would go a long way toward resolving the rental housing affordability crisis.

    This article is part of a series centered on envisioning ways to deal with the housing crisis.

    Alex Schwartz has received funding from the Catherine and John D. MacArthur Foundation. Since 2019 he has served on New York City’s Rent Guidelines Board. He has a relative who works for The Conversation.

    Kirk McClure received funding from the U.S. Department of Housing and Urban Development and receives funding from the National Science Foundation.

    ref. What if universal rental assistance were implemented to deal with the housing crisis? – https://theconversation.com/what-if-universal-rental-assistance-were-implemented-to-deal-with-the-housing-crisis-257213

    MIL OSI Analysis

  • MIL-OSI Analysis: Yelp’s addition of a ‘Black-owned’ tag led to a slight drop in business ratings in Detroit

    Source: The Conversation – USA – By Matthew Bui, Assistant Professor of Information and Digital Studies, University of Michigan

    Yelp’s Black-owned tag was designed to help business owners like Don Studvent attract more customers. His restaurant closed in 2018 after nine years in business. AP Photo/Carlos Osorio

    When the online review platform Yelp added a “Black-owned” tag in 2020, it boosted the visibility of Black-owned restaurants in Detroit. It also caused their ratings to drop, according to our recent study.

    Both local and nonlocal reviewers who showed awareness of a restaurant’s Black ownership rated restaurants 3.03 stars on average. Those who did not acknowledge Black ownership gave a rating of 3.78 stars on average. The tag seems to have caused the average rating to drop by attracting more reviewers who were aware of Black ownership.
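The comparison described here amounts to splitting reviews by whether they acknowledge Black ownership and averaging each group's star ratings. A minimal sketch with invented sample data follows; the study's actual dataset and methods are more involved:

```python
from statistics import mean

# Toy review data; each record notes whether the review text
# acknowledges the restaurant's Black ownership.
reviews = [
    {"stars": 3, "mentions_ownership": True},
    {"stars": 2, "mentions_ownership": True},
    {"stars": 4, "mentions_ownership": True},
    {"stars": 4, "mentions_ownership": False},
    {"stars": 5, "mentions_ownership": False},
]

# Split into ownership-aware and ownership-unaware groups, then average each.
aware = [r["stars"] for r in reviews if r["mentions_ownership"]]
unaware = [r["stars"] for r in reviews if not r["mentions_ownership"]]

print(mean(aware), mean(unaware))  # 3 4.5
```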

    Why it matters

    Technology companies often introduce new features and tools to influence user behavior and make their platforms more usable.

    Although Yelp intended to support Black communities with the Black-owned tag, the design intervention was harmful to Black restaurant owners in Detroit because Yelp failed to consider platform and community-based factors that significantly shape user interactions.

    Yelp’s user base is predominantly white, educated and affluent. Making Detroit’s Black-owned restaurants more visible to Yelp users may have amplified cross-cultural interactions and frictions. For example, non-Black users sometimes mentioned “slower” and “rude” service as justifications for lower ratings. Close readings of these reviews hinted at intercultural and communicative clashes.

    And even businesses that don’t select the tag are identified within searches as Black-owned, based on user reviews and relevant links. Yelp doesn’t provide a way for the business to opt out of these search results.

    How we did our work

    To examine the local impacts of Yelp’s Black-owned tag, we collected over 250,000 Yelp reviews of Black- and non-Black-owned restaurants in Detroit and Los Angeles.

    We identified Black-owned restaurants through community-sourced lists for Detroit and Los Angeles and then generated a random sample for the non-Black-owned restaurants.

    We then identified reviews that explicitly noted “Black ownership” for closer analysis.

    Detroit’s Black-owned businesses saw a greater loss in business compared with “ownership-unreported” restaurants during the COVID-19 pandemic. This means they also potentially had more to gain from the new tag.

    We found the awareness of Black ownership on Yelp significantly increased following Yelp’s addition of the Black-owned tag in June 2020. A year after the tag was added, reviews in Detroit mentioned Black ownership 4.3% more often than a year before it was rolled out.

    Detroit Black-owned restaurants also saw a small temporary spike in their number of reviews, largely around the time Yelp added the Black-owned tag. At the same time, the restaurants’ average star ratings dropped from 3.91 to 3.88. In contrast, non-Black-owned restaurants’ ratings stayed relatively steady at 3.90.

    This metric is an aggregate of all Detroit restaurants’ Yelp reviews over their entire existence, so a 0.03-star rating change is small but significant.

    Even minor changes to star ratings affect the number of diners restaurants attract, their earning potential and the likelihood they will sell out of food.

    Adding obstacles in digital platforms serves to reproduce and amplify inequalities these businesses already face, rather than alleviate them. For example, Black-owned businesses have a harder time getting loans and are relatively underrepresented in Michigan as a whole.

    These findings may seem surprising given that Detroit is a majority Black city. However, Black users on Yelp are a minority. Keeping in mind the skewed user base of Yelp, we hypothesize the lower reviews for businesses featuring a Black-owned tag reflect existing racial and digital divides in the city.

    Generally, our study provides additional evidence that digital interventions are not “one-size-fits-all,” nor is digital visibility inherently positive for all businesses.

    The Research Brief is a short take on interesting academic work.

    This research was supported by a research grant from the Ewing Marion Kauffman Foundation.

    Matthew Bui does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    Cameron Moy does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Yelp’s addition of a ‘Black-owned’ tag led to a slight drop in business ratings in Detroit – https://theconversation.com/yelps-addition-of-a-black-owned-tag-led-to-a-slight-drop-in-business-ratings-in-detroit-256306

    MIL OSI Analysis


  • MIL-OSI Analysis: Self-censorship and the ‘spiral of silence’: Why Americans are less likely to publicly voice their opinions on political issues

    Source: The Conversation – USA – By James L. Gibson, Sidney W. Souers Professor of Government, Washington University in St. Louis

    Polarization has led many people to feel they’re being silenced. AP Photo/Andrew Harnik

    For decades, Americans’ trust in one another has been on the decline, according to the most recent General Social Survey.

    A major factor in that downshift has been the concurrent rise in the polarization between the two major political parties. Supporters of Republicans and Democrats are far more likely than in the past to view the opposite side with distrust.

    That political polarization is so stark that many Americans are now unlikely to have friendly social interactions with, live near or congregate with people from opposing camps, according to one recent study.

    Social scientists often refer to this sort of animosity as “affective polarization,” meaning that people not only hold conflicting views on many or most political issues but also disdain fellow citizens who hold different opinions. Over the past few decades, such affective polarization in the U.S. has become commonplace.

    Polarization undermines democracy by making the essential processes of democratic deliberation – discussion, negotiation, compromise and bargaining over public policies – difficult, if not impossible. Because polarization extends so broadly and deeply, some people have become unwilling to express their views until they’ve confirmed they’re speaking with someone who’s like-minded.

    I’m a political scientist, and I found that Americans today are far less likely to publicly voice their opinions than they were even during the height of the McCarthy-era Red Scare.

    A supporter of Donald Trump tries to push past demonstrators in Philadelphia on June 30, 2023.
    AP Photo/Nathan Howard

    The muting of the American voice

    According to a 2022 book written by political scientists Taylor Carlson and Jaime E. Settle, fears about speaking out are grounded in concerns about social sanctions for expressing unwelcome views.

    And this withholding of views extends across a broad range of social circumstances. In 2022, for instance, I conducted a survey of a representative sample of about 1,500 residents of the U.S. I found that while 45% of the respondents were worried about expressing their views to members of their immediate family, this percentage ballooned to 62% when it came to speaking out publicly in one’s community. Nearly half of those surveyed said they felt less free to speak their minds than they used to.

    About three to four times more Americans said they did not feel free to express themselves, compared with the number of those who said so during the McCarthy era.

    Censorship in the US and globally

    Since that survey, attacks on free speech have increased markedly, especially under the Trump administration.

    Issues such as the Israeli war in Gaza, activist campaigns against “wokeism,” and the ever-increasing attempts to penalize people for expressing certain ideas have made it more difficult for people to speak out.

    The recent breadth of self-censorship in the U.S. is neither unprecedented nor unique to the country. Indeed, research in Germany, Sweden and elsewhere has reported similar increases in self-censorship in the past several years.

    How the ‘spiral of silence’ explains self-censorship

    In the 1970s, Elisabeth Noelle-Neumann, a distinguished German political scientist, coined the term the “spiral of silence” to describe how self-censorship arises and what its consequences can be. Informed by research she conducted on the 1965 West German federal election, Noelle-Neumann observed that an individual’s willingness to publicly give their opinion was tied to their perceptions of public opinion on an issue.

    The so-called spiral happens when someone expresses a view on a controversial issue and then encounters vigorous criticism from an aggressive minority – perhaps even sharp attacks.

    People rally at the University of California, Berkeley, to protest the Trump administration on March 19, 2025.
    AP Photo/Godofredo A. Vásquez

    A listener can impose costs on the speaker for expressing the view in a number of ways, including criticism, direct personal attacks and even attempts to “cancel” the speaker through ending friendships or refusing to attend social events such as Thanksgiving or holiday dinners.

    This kind of sanction isn’t limited to social interactions; a speaker can also be threatened by far bigger institutions, from corporations to the government. The speaker learns from this encounter and decides to keep their mouth shut in the future because the costs of expressing the view are simply too high.

    This self-censorship has knock-on effects, as views become less commonly expressed and people are less likely to encounter support from those who hold similar views. People come to believe that they are in the minority, even if they are, in fact, in the majority. This belief then also contributes to the unwillingness to express one’s views.

    The opinions of the aggressive minority then become dominant. True public opinion and expressed public opinion diverge. Most importantly, the free-ranging debate so necessary to democratic politics is stifled.

    Not all issues are like this, of course – only issues for which a committed and determined minority exists that can impose costs on a particular viewpoint are subject to this spiral.

    The consequences for democratic deliberation

    The tendency toward self-censorship means listeners are deprived of hearing the withheld views. The marketplace of ideas becomes skewed; the choices of buyers in that marketplace are circumscribed. The robust debate so necessary to deliberations in a democracy is squelched as the views of a minority come to be seen as the only “acceptable” political views.

    No better example of this can be found than in the absence of debate in the contemporary U.S. about the treatment of the Palestinians by the Israelis, whatever outcome such vigorous discussion might produce. Fearful of consequences, many people are withholding their views on Israel – whether Israel has committed war crimes, for instance, or whether Israeli members of government should be sanctioned – because they fear being branded as antisemitic.

    Many Americans are also biting their tongues when it comes to DEI, affirmative action and even whether political tolerance is essential for democracy.

    But the dominant views are also penalized by this spiral. By not having to face their competitors, they lose the opportunity to check their beliefs and, if confirmed, bolster and strengthen their arguments. Good ideas lose the chance to become better, while bad ideas – such as something as extreme as Holocaust denial – are given space to flourish.

    The spiral of silence therefore becomes inimical to pluralistic debate, discussion and, ultimately, to democracy itself.

    James L. Gibson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Self-censorship and the ‘spiral of silence’: Why Americans are less likely to publicly voice their opinions on political issues – https://theconversation.com/self-censorship-and-the-spiral-of-silence-why-americans-are-less-likely-to-publicly-voice-their-opinions-on-political-issues-251979

    MIL OSI Analysis


  • MIL-OSI Analysis: I’m a physician who has looked at hundreds of studies of vaccine safety, and here’s some of what RFK Jr. gets wrong

    Source: The Conversation – USA – By Jake Scott, Clinical Associate Professor of Infectious Diseases, Stanford University

    Public health experts worry that factually inaccurate statements by Robert F. Kennedy Jr. threaten the public’s confidence in vaccines. Andrew Harnik/Getty Images

    In the four months since he began serving as secretary of the Department of Health and Human Services, Robert F. Kennedy Jr. has made many public statements about vaccines that have cast doubt on their safety and on the objectivity of long-standing processes established to evaluate them.

    Many of these statements are factually incorrect. For example, in a newscast aired on June 12, 2025, Kennedy told Fox News viewers that 97% of federal vaccine advisers are on the take. In the same interview, he also claimed that children receive 92 mandatory shots. He has also widely claimed that only COVID-19 vaccines, not other vaccines in use by both children and adults, were ever tested against placebos and that “nobody has any idea” how safe routine immunizations are.

    As an infectious disease physician who curates an open database of hundreds of controlled vaccine trials involving over 6 million participants, I am intimately familiar with the decades of research on vaccine safety. I believe it is important to correct the record – especially because these statements come from the official who now oversees the agencies charged with protecting Americans’ health.

    Do children really receive 92 mandatory shots?

    In 1986, the childhood vaccine schedule contained about 11 doses protecting against seven diseases. Today, it includes roughly 50 injections covering 16 diseases. State school entry laws typically require 30 to 32 shots across 10 to 12 diseases. No state mandates COVID-19 vaccination. Where Kennedy’s “92 mandatory shots” figure comes from is unclear, but the actual number is significantly lower.

    From a safety standpoint, the more important question is whether today’s schedule, with its additional vaccines, might be too taxing for children’s immune systems. It isn’t: as vaccine technology has improved over the past several decades, the number of antigens in each vaccine dose has fallen dramatically.

    Antigens are the molecules in vaccines that trigger a response from the immune system, training it to identify the specific pathogen. Some vaccines contain a minute amount of aluminum salt that serves as an adjuvant – a helper ingredient that improves the quality and staying power of the immune response, so each dose can protect with less antigen.

    Those 11 doses in 1986 delivered more than 3,000 antigens and 1.5 milligrams of aluminum over 18 years. Today’s complete schedule delivers roughly 165 antigens – which is a 95% reduction – and 5-6 milligrams of aluminum in the same time frame. A single smallpox inoculation in 1900 exposed a child to more antigens than today’s complete series.

    Jonas Salk, the inventor of the polio vaccine, administers a dose to a boy in 1954.
    Underwood Archives via Getty Images

    Since 1986, the United States has introduced vaccines against Haemophilus influenzae type b, hepatitis A and B, chickenpox, pneumococcal disease, rotavirus and human papillomavirus. Each addition represents a life-saving advance.

    The incidence of Haemophilus influenzae type b, a bacterial infection that can cause pneumonia, meningitis and other severe diseases, has dropped by 99% in infants. Pediatric hepatitis infections are down more than 90%, and chickenpox hospitalizations are down about 90%. The Centers for Disease Control and Prevention estimates that vaccinating children born from 1994 to 2023 will avert 508 million illnesses and 1,129,000 premature deaths.

    Placebo testing for vaccines

    Kennedy has asserted that only COVID-19 vaccines have undergone rigorous safety trials in which they were tested against placebos. This is categorically wrong.

    Of the 378 controlled trials in our database, 195 compared volunteers’ response to a vaccine with their response to a placebo. Of those, 159 gave volunteers only a salt water solution or another inert substance. Another 36 gave them just the adjuvant without any viral or bacterial material, as a way to see whether there were side effects from the antigen itself or the injection. Every routine childhood vaccine antigen appears in at least one such study.

    The 1954 Salk polio trial, one of the largest clinical trials in medical history, enrolled more than 600,000 children and tested the vaccine by comparing it with a salt water control. Similar trials, which used a substance that has no biological effect as a control, were used to test Haemophilus influenzae type b, pneumococcal, rotavirus, influenza and HPV vaccines.

    Once an effective vaccine exists, ethics boards require new versions be compared against that licensed standard because withholding proven protection from children would be unethical.

    How unknown is the safety of widely used vaccines?

    Kennedy has insisted on multiple occasions that “nobody has any idea” about vaccine safety profiles. Of the 378 trials in our database, the vast majority published detailed safety outcomes.

    Beyond trials, the U.S. operates the Vaccine Adverse Event Reporting System, the Vaccine Safety Datalink and the PRISM network to monitor hundreds of millions of doses for rare problems. The Vaccine Adverse Event Reporting System works like an open mailbox where anyone – patients, parents, clinicians – can report a post-shot problem; the Vaccine Safety Datalink analyzes anonymized electronic health records from large health care systems to spot patterns; and PRISM scans billions of insurance claims in near-real time to confirm or rule out rare safety signals.

    These systems led health officials to pull the first rotavirus vaccine in 1999 after it was linked to bowel obstruction, and to restrict the Johnson & Johnson COVID-19 vaccine in 2021 after rare clotting events. Few drug classes undergo such continuous surveillance and are subject to such swift corrective action when genuine risks emerge.

    The conflicts of interest claim

    On June 9, Kennedy took the unprecedented step of dissolving vetted members of the Advisory Committee on Immunization Practices, the expert body that advises the CDC on national vaccine policy. He has claimed repeatedly that the vast majority of serving members of the committee – 97% – had extensive conflicts of interest because of their entanglements with the pharmaceutical industry. Kennedy bases that number on a 2009 federal audit of conflict-of-interest paperwork, but that report looked at 17 CDC advisory committees, not specifically this vaccine committee. And it found no pervasive wrongdoing – 97% of disclosure forms only contained routine paperwork mistakes, such as information in the wrong box or a missing initial, and not hidden financial ties.

    Reuters examined data from Open Payments, a government website that discloses health care providers’ relationships with industry, for all 17 voting members of the committee who were dismissed. Six received no more than US$80 from drugmakers over seven years, and four had no payments at all.

    The remaining seven members accepted between $4,000 and $55,000 over seven years, mostly for modest consulting or travel. In other words, just 41% of the committee received anything more than pocket change from drugmakers. Committee members must divest vaccine company stock and recuse themselves from votes involving conflicts.

    A term without a meaning

    Kennedy has warned that vaccines cause “immune deregulation,” a term that has no basis in immunology. Vaccines train the immune system, and the diseases they prevent are the real threats to immune function.

    Measles can wipe immune memory, leaving children vulnerable to other infections for years. COVID-19 can trigger multisystem inflammatory syndrome in children. Chronic hepatitis B can cause immune-mediated organ damage. Preventing these conditions protects people from immune system damage.

    Today’s vaccine panel doesn’t just prevent infections; it deters doctor visits and thereby reduces unnecessary prescriptions for “just-in-case” antibiotics. It’s one of the rare places in medicine where physicians like me now do more good with less biological burden than we did 40 years ago.

    The evidence is clear and publicly available: Vaccines have dramatically reduced childhood illness, disability and death on a historic scale.

    Jake Scott does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. I’m a physician who has looked at hundreds of studies of vaccine safety, and here’s some of what RFK Jr. gets wrong – https://theconversation.com/im-a-physician-who-has-looked-at-hundreds-of-studies-of-vaccine-safety-and-heres-some-of-what-rfk-jr-gets-wrong-259659

    MIL OSI Analysis

  • MIL-OSI Analysis: I’m a physician who has looked at hundreds of studies of vaccine safety, and here’s some of what RFK Jr. gets wrong

    Source: The Conversation – USA – By Jake Scott, Clinical Associate Professor of Infectious Diseases, Stanford University

Public health experts worry that factually inaccurate statements by Robert F. Kennedy Jr. threaten the public’s confidence in vaccines. Andrew Harnik/Getty Images

    In the four months since he began serving as secretary of the Department of Health and Human Services, Robert F. Kennedy Jr. has made many public statements about vaccines that have cast doubt on their safety and on the objectivity of long-standing processes established to evaluate them.

    Many of these statements are factually incorrect. For example, in a newscast aired on June 12, 2025, Kennedy told Fox News viewers that 97% of federal vaccine advisers are on the take. In the same interview, he also claimed that children receive 92 mandatory shots. He has also widely claimed that only COVID-19 vaccines, not other vaccines in use by both children and adults, were ever tested against placebos and that “nobody has any idea” how safe routine immunizations are.

    As an infectious disease physician who curates an open database of hundreds of controlled vaccine trials involving over 6 million participants, I am intimately familiar with the decades of research on vaccine safety. I believe it is important to correct the record – especially because these statements come from the official who now oversees the agencies charged with protecting Americans’ health.

    Do children really receive 92 mandatory shots?

    In 1986, the childhood vaccine schedule contained about 11 doses protecting against seven diseases. Today, it includes roughly 50 injections covering 16 diseases. State school entry laws typically require 30 to 32 shots across 10 to 12 diseases. No state mandates COVID-19 vaccination. Where Kennedy’s “92 mandatory shots” figure comes from is unclear, but the actual number is significantly lower.

From a safety standpoint, the more important question is whether today’s expanded schedule might be too taxing for children’s immune systems. It isn’t: as vaccine technology has improved over the past several decades, the number of antigens in each vaccine dose has dropped sharply.

    Antigens are the molecules in vaccines that trigger a response from the immune system, training it to identify the specific pathogen. Some vaccines contain a minute amount of aluminum salt that serves as an adjuvant – a helper ingredient that improves the quality and staying power of the immune response, so each dose can protect with less antigen.

    Those 11 doses in 1986 delivered more than 3,000 antigens and 1.5 milligrams of aluminum over 18 years. Today’s complete schedule delivers roughly 165 antigens – which is a 95% reduction – and 5-6 milligrams of aluminum in the same time frame. A single smallpox inoculation in 1900 exposed a child to more antigens than today’s complete series.
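The "95% reduction" quoted above follows directly from the two antigen counts; a quick check using the article's approximate figures:

```python
# Approximate antigen counts cited in the article.
antigens_1986 = 3000   # delivered by the ~11 doses in the 1986 schedule
antigens_today = 165   # delivered by today's complete schedule

# Fractional reduction in total antigen exposure.
reduction = 1 - antigens_today / antigens_1986
print(f"Reduction: {reduction:.1%}")  # 94.5%, i.e., roughly a 95% drop
```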

    Jonas Salk, the inventor of the polio vaccine, administers a dose to a boy in 1954.
    Underwood Archives via Getty Images

    Since 1986, the United States has introduced vaccines against Haemophilus influenzae type b, hepatitis A and B, chickenpox, pneumococcal disease, rotavirus and human papillomavirus. Each addition represents a life-saving advance.

    The incidence of Haemophilus influenzae type b, a bacterial infection that can cause pneumonia, meningitis and other severe diseases, has dropped by 99% in infants. Pediatric hepatitis infections are down more than 90%, and chickenpox hospitalizations are down about 90%. The Centers for Disease Control and Prevention estimates that vaccinating children born from 1994 to 2023 will avert 508 million illnesses and 1,129,000 premature deaths.

    Placebo testing for vaccines

    Kennedy has asserted that only COVID-19 vaccines have undergone rigorous safety trials in which they were tested against placebos. This is categorically wrong.

    Of the 378 controlled trials in our database, 195 compared volunteers’ response to a vaccine with their response to a placebo. Of those, 159 gave volunteers only a salt water solution or another inert substance. Another 36 gave them just the adjuvant without any viral or bacterial material, as a way to see whether there were side effects from the antigen itself or the injection. Every routine childhood vaccine antigen appears in at least one such study.

    The 1954 Salk polio trial, one of the largest clinical trials in medical history, enrolled more than 600,000 children and tested the vaccine by comparing it with a salt water control. Similar trials, which used a substance that has no biological effect as a control, were used to test Haemophilus influenzae type b, pneumococcal, rotavirus, influenza and HPV vaccines.

    Once an effective vaccine exists, ethics boards require new versions be compared against that licensed standard because withholding proven protection from children would be unethical.

    How unknown is the safety of widely used vaccines?

    Kennedy has insisted on multiple occasions that “nobody has any idea” about vaccine safety profiles. Of the 378 trials in our database, the vast majority published detailed safety outcomes.

    Beyond trials, the U.S. operates the Vaccine Adverse Event Reporting System, the Vaccine Safety Datalink and the PRISM network to monitor hundreds of millions of doses for rare problems. The Vaccine Adverse Event Reporting System works like an open mailbox where anyone – patients, parents, clinicians – can report a post-shot problem; the Vaccine Safety Datalink analyzes anonymized electronic health records from large health care systems to spot patterns; and PRISM scans billions of insurance claims in near-real time to confirm or rule out rare safety signals.

    These systems led health officials to pull the first rotavirus vaccine in 1999 after it was linked to bowel obstruction, and to restrict the Johnson & Johnson COVID-19 vaccine in 2021 after rare clotting events. Few drug classes undergo such continuous surveillance and are subject to such swift corrective action when genuine risks emerge.

    The conflicts of interest claim

On June 9, Kennedy took the unprecedented step of dismissing all of the vetted members of the Advisory Committee on Immunization Practices, the expert body that advises the CDC on national vaccine policy. He has claimed repeatedly that the vast majority of the committee’s serving members – 97% – had extensive conflicts of interest because of their entanglements with the pharmaceutical industry. Kennedy bases that number on a 2009 federal audit of conflict-of-interest paperwork, but that report examined 17 CDC advisory committees, not this vaccine committee specifically. And it found no pervasive wrongdoing: 97% of disclosure forms contained only routine paperwork mistakes, such as information in the wrong box or a missing initial, not hidden financial ties.

    Reuters examined data from Open Payments, a government website that discloses health care providers’ relationships with industry, for all 17 voting members of the committee who were dismissed. Six received no more than US$80 from drugmakers over seven years, and four had no payments at all.

    The remaining seven members accepted between $4,000 and $55,000 over seven years, mostly for modest consulting or travel. In other words, just 41% of the committee received anything more than pocket change from drugmakers. Committee members must divest vaccine company stock and recuse themselves from votes involving conflicts.

    A term without a meaning

    Kennedy has warned that vaccines cause “immune deregulation,” a term that has no basis in immunology. Vaccines train the immune system, and the diseases they prevent are the real threats to immune function.

    Measles can wipe immune memory, leaving children vulnerable to other infections for years. COVID-19 can trigger multisystem inflammatory syndrome in children. Chronic hepatitis B can cause immune-mediated organ damage. Preventing these conditions protects people from immune system damage.

Today’s vaccine schedule doesn’t just prevent infections; it also averts doctor visits and thereby reduces unnecessary prescriptions of “just-in-case” antibiotics. It’s one of the rare places in medicine where physicians like me now do more good with less biological burden than we did 40 years ago.

    The evidence is clear and publicly available: Vaccines have dramatically reduced childhood illness, disability and death on a historic scale.

    Jake Scott does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. I’m a physician who has looked at hundreds of studies of vaccine safety, and here’s some of what RFK Jr. gets wrong – https://theconversation.com/im-a-physician-who-has-looked-at-hundreds-of-studies-of-vaccine-safety-and-heres-some-of-what-rfk-jr-gets-wrong-259659

    MIL OSI Analysis

  • MIL-OSI Analysis: Israel-Iran war recalls the 2003 US invasion of Iraq – a war my undergraduate students see as a relic of the past

    Source: The Conversation – USA – By Andrea Stanton, Associate Professor of Islamic Studies & Faculty Affiliate, Center for Middle East Studies, University of Denver

    American troops topple a statue of Saddam Hussein on April 9, 2003, in Baghdad. Gilles Bassignac/Gamma-Rapho via Getty Images

After 12 days of trading deadly airstrikes, Israel and Iran confirmed on June 24, 2025, that a ceasefire is in effect, one day after President Donald Trump proclaimed that the countries had reached a deal to end the fighting. Experts are wondering how long the ceasefire, which does not contain any specific conditions, will hold.

    Meanwhile, Republicans and Democrats alike have debated whether the Trump administration’s decision to bomb Iran’s three nuclear facilities on June 22 constituted an unofficial declaration of war – since Trump has not asked Congress to formally declare war against Iran.

    The United States’ involvement in the fighting between Iran and Israel, which Israel started on June 12, has also sparked concerned comparisons with the eight-year war the U.S. waged in Iraq, another Middle Eastern country.

    The U.S. invaded Iraq more than 20 years ago in March 2003, claiming it had to disarm the Iraqi government of weapons of mass destruction and end the dictatorial rule of President Saddam Hussein. U.S. soldiers captured Saddam in December 2003, but the war dragged on through 2011.

    A 15-month search by U.S. and United Nations inspectors revealed in 2004 that Iraq had no weapons of mass destruction to seize.

    The Trump administration, bolstered by the Israeli government, has claimed that Iran’s development of nuclear weapons represents an imminent, dangerous threat to Western countries and the rest of the world. Iran says that its nuclear development program is for civilian use. While the International Atomic Energy Agency, an independent organization that is part of the United Nations, monitors Iran and other countries’ nuclear development work, Iran has not complied with recent IAEA requests for information about its nuclear program.

    Trump has also called for regime change in Iran, writing on his Truth Social media platform on June 22 that he wants to “Make Iran Great Again”, though he has since walked back that plan. The case of U.S. involvement in Iraq might offer some lessons in this current moment.

    The start and cost of the Iraq War

    The conflict between Western powers and Iraq dragged on until 2011. More than 4,600 American soldiers died in combat – and thousands more died by suicide after they returned home.

    More than 288,000 Iraqis, including fighters and civilians, have died from war-related violence since the invasion.

    The war cost the U.S. over $2 trillion.

    And Iraq is still dealing with widespread political violence between rival religious-political groups and an unstable government.

    Most of these problems stem directly or indirectly from the war. The 2003 U.S. invasion of Iraq and the war that followed are defining events in the histories of both countries – and the region. Yet, for many young people in the United States, drawing a connection between the war and its present-day impact is becoming more difficult. For them, the war is an artifact of the past.

    I am a Middle East historian and an Islamic studies scholar who teaches two undergraduate courses that cover the 2003 invasion and the Iraq War. My courses attract students who hope to work in politics, law, government and nonprofit groups, and whose personal backgrounds include a range of religious traditions, immigration histories and racial identities.

    The stories of the invasion and subsequent war resonate with them in the same way that stories of other past events do – they’re eager to learn from them, but don’t see them as directly connected to their lives.

President George W. Bush announced the start of military operations against Iraq in a televised address on March 19, 2003.
    Brooks Kraft LLC/Corbis via Getty Images

    A generational shift

    Since I started teaching courses related to the Iraq War in 2010, my students have shifted from millennials to Generation Z. The latter were born between the mid-1990s and early 2010s. There has also been a change in how these students understand major early 21st-century events, including the U.S. invasion of Iraq.

    I teach this event by showing things like former President George W. Bush’s March 19, 2003, televised announcement of the invasion.

I also teach it through my own lived experience. That includes remembering the Feb. 15, 2003, anti-war protests that took place in over 600 cities around the world in an effort to prevent what appeared to be an inevitable war. And I show students aspects of material culture, like the “Iraqi most wanted” deck of playing cards distributed to deployed U.S. military personnel in Iraq, who used the cards for games and to help identify key figures in the Iraqi government.

    The millennial students I taught around 2010 recalled the U.S. invasion of Iraq from their early teen years – a confusing but foundational moment in their personal timelines.

    But for the Gen-Z students I teach today, the invasion sits firmly in the past, as a part of history.

    Why this matters

    Since the mid-2010s, I have not been able to expect students to enroll in my course with personal prior knowledge about the invasion and war that followed. In 2013, my students would tell me that their childhoods had been defined by a United States at war – even if those wars happened far from U.S. soil.

    Millennial students considered the trifecta of 9/11, the war in Afghanistan and the war in Iraq to be defining events in their lives. The U.S. and its allies launched airstrikes against al-Qaida and Taliban targets in Afghanistan on Oct. 7, 2001, less than a month after the Sept. 11 terrorist attacks. This followed the Taliban refusing to hand over Osama bin Laden, the architect of 9/11.

    By 2021, my students considered Bush’s actions with the same level of abstract curiosity that they had brought to the class’s earlier examination of the 1957 Eisenhower Doctrine, which said that a country could request help from U.S. military forces if it was being threatened by another country, and was used to justify U.S. military involvement in Lebanon in 1958.

On an educational level, this means that I now provide much more background information on the first Gulf War, the 2000 presidential election, the Bush presidency, the immediate U.S. responses to 9/11 and the Afghanistan invasion than I once did. All of these events help students better understand why the U.S. invaded Iraq and why Americans felt so strongly about the military action – whether they were for or against the invasion.

    The Iraq invasion lost popularity among Americans within two years. In March 2003, 71% of Americans said that the U.S. made the right decision to use military force in Iraq.

That percentage dropped to 47% in 2005, following the revelation that there were no weapons of mass destruction. Yet those who still supported the invasion continued to endorse it strongly in later polls.

In 2018, just over half of Americans believed that the U.S. failed to achieve its goals in Iraq, however those goals might have been defined.

    An Iraqi family flees past British tanks from the city of Basra in March 2003.
    Odd Andersen/AFP via Getty Images

    A new set of priorities

    Older Americans age 65 and up are more likely than young people to prioritize foreign policy issues, including maintaining a U.S. military advantage.

Younger Americans – ages 18 to 39 – say the top issues requiring urgency are providing support to refugees and limiting U.S. military commitments abroad, according to a 2021 Pew Research Center survey.

    Generation Z members are also less likely than older Americans to think that the U.S. should act by itself in defending or protecting democracy around the world, according to a 2019 poll by the think tank Center for American Progress.

    They also agree with the statement that the United States’ “wars in the Middle East and Afghanistan were a waste of time, lives, and taxpayer money and they did nothing to make us safer at home.” They prefer that the U.S. use economic and diplomatic means, rather than military intervention, to advance American interests around the world.

    Israel’s conflict with Iran may not flare again and give way to more airstrikes and violence. If the countries resume fighting, however, their conflict threatens to draw in Lebanon, Qatar and other countries in the Middle East, as well as likely the U.S. – and to drag on for a long time.

    This is an update from a story originally published on March 15, 2023.

    Andrea Stanton does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Israel-Iran war recalls the 2003 US invasion of Iraq – a war my undergraduate students see as a relic of the past – https://theconversation.com/israel-iran-war-recalls-the-2003-us-invasion-of-iraq-a-war-my-undergraduate-students-see-as-a-relic-of-the-past-259652

    MIL OSI Analysis

  • MIL-OSI Analysis: Using TikTok could be making you more politically polarized, new study finds

    Source: The Conversation – USA – By Zicheng Cheng, Assistant Professor of Mass Communications, University of Arizona

    Are you in an echo chamber on TikTok? LeoPatrizi/E+ via Getty Images

    People on TikTok tend to follow accounts that align with their own political beliefs, meaning the platform is creating political echo chambers among its users. These findings, from a study my collaborators, Yanlin Li and Homero Gil de Zúñiga, and I published in the academic journal New Media & Society, show that people mostly hear from voices they already agree with.

    We analyzed the structure of different political networks on TikTok and found that right-leaning communities are more isolated from other political groups and from mainstream news outlets. Looking at their internal structures, the right-leaning communities are more tightly connected than their left-leaning counterparts. In other words, conservative TikTok users tend to stick together. They rarely follow accounts with opposing views or mainstream media accounts. Liberal users, on the other hand, are more likely to follow a mix of accounts, including those they might disagree with.
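One common way to quantify this kind of insularity is the E-I index: external ties minus internal ties, divided by total ties, computed per community. This is an illustrative measure on a toy follow network, not necessarily the metric used in the study – the exact methods are in the paper:

```python
# Toy follow network: each edge is (follower, followed), and each
# account carries a political-leaning label. The E-I index for a group
# is (external - internal) / total over the ties its members initiate:
# -1 means a fully closed echo chamber, +1 means all ties point outward.
leaning = {
    "a": "right", "b": "right", "c": "right",
    "x": "left", "y": "left", "z": "left",
}
edges = [
    ("a", "b"), ("b", "c"), ("c", "a"),   # right-leaning follows right-leaning
    ("x", "y"), ("y", "z"),               # left-leaning follows left-leaning
    ("x", "a"), ("z", "b"),               # left-leaning follows across the aisle
]

def ei_index(group):
    internal = external = 0
    for follower, followed in edges:
        if leaning[follower] == group:
            if leaning[followed] == group:
                internal += 1
            else:
                external += 1
    total = internal + external
    return (external - internal) / total if total else 0.0

print(ei_index("right"))  # -1.0: no right-leaning account follows outside
print(ei_index("left"))   # 0.0: two internal ties, two external ties
```

In this toy data, the right-leaning cluster scores -1.0 (fully closed) while the left-leaning cluster scores 0.0 (an even mix) – mirroring, in miniature, the asymmetry the study reports.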

    Our study is based on a massive dataset of over 16 million TikTok videos from more than 160,000 public accounts between 2019 and 2023. We saw a spike of political TikTok videos during the 2020 U.S. presidential election. More importantly, people aren’t just passively watching political content; they’re actively creating political content themselves.

Some people are more outspoken about politics than others. We found that users with stronger political leanings, and those who get more likes and comments on their videos, are more motivated to keep posting. This shows the power of partisanship, but also the power of TikTok’s social rewards system. Engagement signals – likes, shares, comments – act like fuel, encouraging users to create even more.

    Why it matters

People are turning to TikTok for more than a good laugh. A recent Pew Research Center survey shows that almost 40% of U.S. adults under 30 regularly get news on TikTok. The question becomes what kind of news they are watching, and what that means for how they engage with politics.

    The content on TikTok often comes from creators and influencers or digital-native media sources. The quality of this news content remains uncertain. Without access to balanced, fact-based information, people may struggle to make informed political decisions.

    TikTok is not unique; social media generally fosters polarization.

    Amid the debates over banning TikTok, our study highlights how TikTok can be a double-edged sword in political communication. It’s encouraging to see people participate in politics through TikTok when that’s their medium of choice. However, if a user’s network is closed and homogeneous and their expression serves as in-group validation, it may further solidify the political echo chamber.

    When people are exposed to one-sided messages, it can increase hostility toward outgroups. In the long run, relying on TikTok as a source for political information might deepen people’s political views and contribute to greater polarization.

    What other research is being done

Echo chambers have been widely studied on platforms like Twitter and Facebook, but similar research on TikTok is in its infancy. TikTok is drawing scrutiny, particularly for its role in news production, political messaging and social movements.

    TikTok has its unique format, algorithmic curation and entertainment-driven design. I believe that its function as a tool for political communication calls for closer examination.

    What’s next

    In 2024, the Biden/Harris and Trump campaigns joined TikTok to reach young voters. My research team is now analyzing how these political communication dynamics may have shifted during the 2024 election. Future research could use experiments to explore whether these campaign videos significantly influence voters’ perceptions and behaviors.

    The Research Brief is a short take on interesting academic work.

    Zicheng Cheng does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Using TikTok could be making you more politically polarized, new study finds – https://theconversation.com/using-tiktok-could-be-making-you-more-politically-polarized-new-study-finds-258791

    MIL OSI Analysis

  • MIL-OSI Analysis: Uranium enrichment: A chemist explains how the surprisingly common element is processed to power reactors and weapons

    Source: The Conversation – USA – By André O. Hudson, Dean of the College of Science, Professor of Biochemistry, Rochester Institute of Technology

    Yellowcake is a concentrated form of mined and processed uranium. Nuclear Regulatory Commission, CC BY

    When most people hear the word uranium, they think of mushroom clouds, Cold War standoffs or the glowing green rods from science fiction. But uranium isn’t just fuel for apocalyptic fears. It’s also a surprisingly common element that plays a crucial role in modern energy, medicine and geopolitics.

    Uranium reentered the global spotlight in June 2025, when the U.S. launched military strikes on sites in Iran believed to be housing highly enriched uranium, a move that reignited urgent conversations around nuclear proliferation. Many headlines have mentioned Iran’s 60% enrichment of uranium, but what does that really mean?

    As a biochemist, I’m interested in demystifying this often misunderstood element.

    What is uranium?

Uranium, the 92nd element on the periodic table, is a radioactive metal. Radioactivity is a natural process in which some atoms – like those of uranium, thorium and radium – break down on their own, releasing energy.

    The German chemist Martin Heinrich Klaproth initially identified uranium in 1789, and he named it after the newly discovered planet Uranus. However, its power was not unlocked until the 20th century, when scientists discovered that uranium atoms could split via a process known as nuclear fission. In fission, the nucleus of the atom splits into two or more nuclei, which releases large amounts of energy.

    Uranium is found almost everywhere. It is in rocks, soil and water. There are even traces of uranium in plants and animals – albeit tiny amounts. Most of it is found in the Earth’s crust, where it is mined and concentrated to increase the amount of its most useful radioactive form, uranium-235.

    The enrichment dilemma

    Uranium-235 is an isotope of uranium – a version of the element with the same chemical identity but a slightly different mass. Think of apples from the same tree: some are big and some are small, but they are all apples, even though they have slightly different weights. An isotope, likewise, is the same element with a different mass.

    Unprocessed uranium is mostly uranium-238. It contains only about 0.7% uranium-235, the isotope that most readily sustains nuclear fission. The enrichment process concentrates that uranium-235.

    Enrichment makes uranium more useful for reactors – and for weapons – since natural uranium doesn’t contain enough uranium-235 to work well in either. The process usually involves three steps.

    Centrifuges spin the uranium to separate out its isotopes.

    The first step is to convert the uranium into a gas called uranium hexafluoride. In the second step, the gas is fed into a machine called a centrifuge that spins very fast. Because uranium-235 is slightly lighter than uranium-238, the heavier isotope is pushed toward the outer wall of the centrifuge while the lighter uranium-235 concentrates nearer the centre, and the two isotopes separate.

    It’s sort of like how a salad spinner separates water from lettuce. One spin doesn’t make much of a difference, so the gas is spun through many centrifuges in a row until the uranium-235 is concentrated.
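    The cascade idea can be sketched numerically. The Python sketch below is purely illustrative – the per-stage separation factor is an assumed round number, not real centrifuge physics – but it shows why many centrifuges in a row are needed:

    ```python
    # Illustrative sketch of an enrichment cascade: each pass through a
    # centrifuge multiplies the U-235:U-238 abundance ratio by an assumed,
    # purely illustrative per-stage separation factor.

    def stages_to_target(start_frac=0.007, target_frac=0.05, stage_factor=1.3):
        """Count the hypothetical passes needed to raise the uranium-235
        fraction from start_frac (natural uranium, ~0.7%) to target_frac."""
        ratio = start_frac / (1 - start_frac)  # U-235 : U-238 abundance ratio
        stages = 0
        while ratio / (1 + ratio) < target_frac:
            ratio *= stage_factor              # one more centrifuge in the row
            stages += 1
        return stages
    ```

    With these assumed numbers, reaching reactor-grade levels takes a handful of passes, while weapons-grade levels take several times more – which is why enrichment plants chain together large numbers of centrifuges.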

    Uranium can typically power nuclear plants and generate electricity when it is 3%-5% enriched, meaning 3%-5% of the uranium is uranium-235. Uranium enriched to 20% or more is considered highly enriched, and 90% or higher is known as weapons-grade uranium.

    The enrichment level depends on the proportion of uranium-235 to uranium-238.
    Wikimedia Commons
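    Those thresholds can be restated as a tiny lookup. This minimal sketch simply mirrors the cut-offs quoted above; the bands between them are simplified for illustration:

    ```python
    # Map a uranium-235 percentage to the categories described in the
    # article. The cut-offs (3-5% reactor fuel, 20% highly enriched,
    # 90% weapons-grade) restate the article's figures; bands between
    # them are simplified for illustration.

    def classify_enrichment(u235_percent):
        if u235_percent >= 90:
            return "weapons-grade"
        if u235_percent >= 20:
            return "highly enriched"
        if u235_percent >= 3:
            return "reactor fuel"
        return "natural / low enriched"
    ```

    Iran’s reported 60% enrichment, for instance, falls squarely in the highly enriched band.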

    This high grade works in nuclear weapons because it can sustain a fast, uncontrolled chain reaction, releasing a very large amount of energy.

    Uranium’s varied powers

    While many headlines focus on uranium’s military potential, this element also plays a vital role in modern life. At low enrichment levels, uranium powers nearly 10% of the world’s electricity.

    In the U.S., many nuclear power plants run on uranium fuel, producing carbon-free energy. In addition, some cancer therapies and diagnostic imaging technologies harness uranium to treat diseases.

    Enriched uranium is used for nuclear power.
    Raimond Spekking/Wikimedia Commons, CC BY-SA

    In naval technology, nuclear-powered submarines and aircraft carriers rely on enriched uranium to operate silently and efficiently for years.

    Uranium is a story of duality. It is a mineral pulled from ancient rocks that can light up a city or wipe one off the map. It’s not just a relic of the Cold War or science fiction. It’s real, it’s powerful, and it’s shaping our world – from global conflicts to cancer clinics, from the energy grid to international diplomacy.

    In the end, the real power is not just in the energy released from the element. It is in how people choose to use it.

    André O. Hudson receives funding from the National Institutes of Health.

    ref. Uranium enrichment: A chemist explains how the surprisingly common element is processed to power reactors and weapons – https://theconversation.com/uranium-enrichment-a-chemist-explains-how-the-surprisingly-common-element-is-processed-to-power-reactors-and-weapons-259646

    MIL OSI Analysis

  • MIL-OSI Analysis: Israel bombed an Iraqi nuclear reactor in 1981 − it pushed program underground and spurred Saddam Hussein’s desire for nukes

    Source: The Conversation – Global Perspectives – By Jeffrey Fields, Professor of the Practice of International Relations, USC Dornsife College of Letters, Arts and Sciences

    The Osirak nuclear power research station in 1981. Jacques Pavlovsky/Sygma via Getty Images

    Israel, with the assistance of U.S. military hardware, bombs an adversary’s nuclear facility to set back the perceived pursuit of the ultimate weapon. We have been here before, about 44 years ago.

    In 1981, Israeli fighter jets supplied by Washington attacked an Iraqi nuclear research reactor being built near Baghdad by the French government.

    The reactor, which the French called Osirak and Iraqis called Tammuz, was destroyed. Much of the international community initially condemned the attack. But Israel claimed the raid set Iraqi nuclear ambitions back at least a decade. In time, many Western observers and government officials, too, chalked up the attack as a win for nonproliferation, hailing the strike as an audacious but necessary step to prevent Iraqi dictator Saddam Hussein from building a nuclear arsenal.

    But the reality is more complicated. As nuclear proliferation experts assess the extent of damage to Iran’s nuclear facilities following the recent U.S. and Israeli raids, it is worth reassessing the longer-term implications of that earlier Iraqi strike.

    The Osirak reactor

    Iraq joined the landmark Nuclear Non-Proliferation Treaty in 1970, committing the country to refrain from the pursuit of nuclear weapons. But in exchange, signatories are entitled to engage in civilian nuclear activities, including having research or power reactors and access to the enriched uranium that drives them.

    The International Atomic Energy Agency is responsible through safeguards agreements for monitoring countries’ civilian use of nuclear technology, with on-the-ground inspections to ensure that civilian nuclear programs do not divert materials for nuclear weapons.

    But to Israel, the Iraqi reactor was provocative and an escalation in the Arab-Israeli conflict.

    Israel believed that Iraq would use the French reactor – Iraq said it was for research purposes – to generate plutonium for a nuclear weapon. After diplomacy with France and the United States failed to persuade the two countries to halt construction of the reactor, Prime Minister Menachem Begin concluded that attacking the reactor was Israel’s best option. That decision gave birth to the “Begin Doctrine,” which has committed Israel to preventing its regional adversaries from becoming nuclear powers ever since.

    Israeli Prime Minister Menachem Begin addresses the press after the 1981 attack on the Osirak nuclear reactor.
    Israel Press and Photo Agency/Wikimedia Commons

    In spring 1979, Israel attempted to sabotage the project, bombing the reactor core destined for Iraq while it sat awaiting shipment in the French town of La Seyne-sur-Mer. The mission was only a partial success, damaging but not destroying the core.

    France and Iraq persisted with the project, and in July 1980 – with the reactor having been delivered – Iraq received the first shipment of highly enriched uranium fuel at the Tuwaitha Nuclear Research Center near Baghdad.

    Then in September 1980, during the initial days of the Iran-Iraq war, Iranian jets struck the nuclear research center. The raid also targeted a power station, knocking out electricity in Baghdad for several days. But a Central Intelligence Agency situation report assessed that “only secondary buildings” were hit at the nuclear site itself.

    It was then Israel’s turn. The reactor was still unfinished and not in operation when on June 7, 1981, eight U.S.-supplied F-16s flew over Jordanian and Saudi airspace and bombed the reactor in Iraq. The attack killed 10 Iraqi soldiers and a French civilian.

    Revisiting the ‘success’ of Israeli raid

    Many years later, U.S. President Bill Clinton commented: “Everybody talks about what the Israelis did at Osirak in 1981, which I think, in retrospect, was a really good thing. You know, it kept Saddam from developing nuclear power.”

    But nonproliferation experts have contended for years that while Saddam may have had nuclear weapons ambitions, the French-built research reactor would not have been the route to go. Iraq would either have had to divert the reactor’s highly enriched uranium fuel for a few weapons or shut the reactor down to extract plutonium from the fuel rods – all while hiding these operations from the International Atomic Energy Agency.

    As an additional safeguard, the French government, too, had pledged to shut down the reactor if it detected efforts to use the reactor for weapons purposes.

    In any event, Iraq’s desire for a nuclear weapon was more aspirational than operational. A 2011 article in the journal International Security included interviews with several scientists who worked on Iraq’s nuclear program and characterized the country’s pursuit of a nuclear weapons capability as “both directionless and disorganized” before the attack.

    Iraq’s program begins in earnest

    So what happened after the strike? Many analysts have argued that the Israeli attack, rather than diminish Iraqi desire for a nuclear weapon, actually catalyzed it.

    Nuclear proliferation expert Målfrid Braut-Hegghammer, the author of the 2011 study, concluded that the Israeli attack “triggered a nuclear weapons program where one did not previously exist.”

    In the aftermath of the attack, Saddam decided to formally, if secretively, establish a nuclear weapons program, with scientists deciding that a uranium-based weapon was the best route. He tasked his scientists with pursuing multiple methods to enrich uranium to weapons grade to ensure success, much the way the Manhattan Project scientists approached the same problem in the U.S.

    In other words, the Israeli attack, rather than set back an existing nuclear weapons program, turned an incoherent and exploratory nuclear endeavor into a drive to get the bomb personally overseen by Saddam and sparing little expense even as Iraq’s war with Iran substantially taxed Iraqi resources.

    From 1981 to 1987, the nuclear program progressed fitfully, facing both organizational and scientific challenges.

    As those challenges were beginning to be addressed, Iraq invaded Kuwait in 1990, provoking a military response from the United States. In the aftermath of what would become Operation Desert Storm, U.N. weapons inspectors discovered and dismantled the clandestine Iraqi nuclear weapons program.

    The Tammuz nuclear reactor was hit again during the 1991 Gulf War.
    Ramzi Haidar/AFP via Getty Images

    Had Saddam not invaded Kuwait over a dispute unrelated to his nuclear program, it is very possible that Baghdad would have had a nuclear weapons capability by the mid-to-late 1990s.

    Similarly to Iraq in 1980, Iran today is a party to the Nuclear Non-Proliferation Treaty. At the time President Donald Trump withdrew U.S. support in 2018 for the Joint Comprehensive Plan of Action, colloquially known as the Iran nuclear deal, the International Atomic Energy Agency certified that Tehran was complying with the requirements of the agreement.

    In the case of Iraq, military action on its nascent nuclear program merely pushed it underground – to Saddam, the Israeli strikes made acquiring the ultimate weapon more rather than less attractive as a deterrent. Almost a half-century on, some analysts and observers are warning the same about Iran.

    Jeffrey Fields receives funding from the Carnegie Corporation of New York and Schmidt Futures.

    ref. Israel bombed an Iraqi nuclear reactor in 1981 − it pushed program underground and spurred Saddam Hussein’s desire for nukes – https://theconversation.com/israel-bombed-an-iraqi-nuclear-reactor-in-1981-it-pushed-program-underground-and-spurred-saddam-husseins-desire-for-nukes-259618

  • MIL-OSI Analysis: Japanese prime minister’s abrupt no-show at NATO summit reveals a strained alliance with the US

    Source: The Conversation – Global Perspectives – By Craig Mark, Adjunct Lecturer, Faculty of Economics, Hosei University

    Japanese Prime Minister Shigeru Ishiba has sent a clear signal to the Trump administration: the Japan–US relationship is in a dire state.

    After saying just days ago he would be attending this week’s NATO summit at The Hague, Ishiba abruptly pulled out at the last minute.

    He joins two other leaders from the Indo-Pacific region, Australian Prime Minister Anthony Albanese and South Korean President Lee Jae-myung, in skipping the summit.

    The Japanese media reported Ishiba cancelled the trip because a bilateral meeting with US President Donald Trump was unlikely, as was a meeting of the Indo-Pacific Four (IP4) NATO partners (Australia, New Zealand, South Korea and Japan).

    Japan will still be represented by Foreign Minister Takeshi Iwaya, showing its desire to strengthen its security relationship with NATO.

    However, Ishiba’s no-show reveals how Japan views its relationship with the Trump administration, following the severe tariffs Washington imposed on Japan and Trump’s mixed messages on the countries’ decades-long military alliance.

    Tariffs and diplomatic disagreements

    Trump’s tariff policy is at the core of the divide between the US and Japan.

    Ishiba attempted to get relations with the Trump administration off to a good start. He was the second world leader to visit Trump at the White House, after Israeli Prime Minister Benjamin Netanyahu.

    However, Trump’s “Liberation Day” tariffs imposed a punitive rate of 25% on Japanese cars and 24% on all other Japanese imports. They are already having an adverse impact on Japan’s economy: exports of automobiles to the US dropped in May by 25% compared to a year ago.

    Six rounds of negotiations have made little progress, as Ishiba’s government insists on full tariff exemptions.

    Japan has been under pressure from the Trump administration to increase its defence spending, as well. According to the Financial Times, Tokyo cancelled a summit between US and Japanese defence and foreign ministers over the demand. (A Japanese official denied the report.)

    Japan also did not offer its full support to the US bombings of Iran’s nuclear facilities earlier this week. The foreign minister instead said Japan “understands” the US’s determination to prevent Iran from acquiring nuclear weapons.

    Japan has traditionally had fairly good relations with Iran, often acting as an indirect bridge with the West. Former Prime Minister Shinzo Abe even made a visit there in 2019.

    Japan also remains heavily dependent on oil from the Middle East. It would have been adversely affected if the Strait of Hormuz had been blocked, as Iran was threatening to do.

    Unlike the response from the UK and Australia, which both supported the strikes, the Ishiba government prioritised its commitment to upholding international law and the rules-based global order. In doing so, Japan seeks to deny China, Russia and North Korea any leeway to similarly erode global norms on the use of force and territorial aggression.

    Strategic dilemma of the Japan–US military alliance

    In addition, Japan is facing the same dilemma as other American allies – how to manage relations with the “America first” Trump administration, which has made the US an unreliable ally.

    Earlier this year, Trump criticised the decades-old security alliance between the US and Japan, calling it “one-sided”.

    “If we’re ever attacked, they don’t have to do a thing to protect us,” he said of Japan.

    Lower-level security cooperation is ongoing between the two allies and their regional partners. The US, Japanese and Philippine Coast Guards conducted drills in Japanese waters this week. The US military may also assist with upgrading Japan’s counterstrike missile capabilities.

    But Japan is still likely to continue expanding its security ties with partners beyond the US, such as NATO, the European Union, India, the Philippines, Vietnam and other ASEAN members, while maintaining its fragile rapprochement with South Korea.

    Australia is now arguably Japan’s most reliable security partner. Canberra is considering buying Japan’s Mogami-class frigates for the Royal Australian Navy. And if the AUKUS agreement with the US and UK collapses, Japanese submarines could be a replacement.

    Ishiba under domestic political pressure

    There are also intensifying domestic political pressures on Ishiba to hold firm against Trump, who is deeply unpopular among the Japanese public.

    After Ishiba replaced former prime minister Fumio Kishida as leader of the Liberal Democratic Party (LDP) last September, the party lost its majority in the lower house of parliament in snap elections. This made it dependent on minor parties for legislative support.

    Ishiba’s minority government has struggled ever since with poor opinion polling. There has been widespread discontent with inflation, the high cost of living and stagnant wages, the legacy of LDP political scandals, and ever-worsening geopolitical uncertainty.

    On Sunday, the party suffered its worst-ever result in elections for the Tokyo Metropolitan Assembly, winning its lowest number of seats.

    The party could face a similar drubbing in the election for half of the upper house of the Diet (Japan’s parliament) on July 20. Ishiba has pledged to maintain the LDP’s majority in the house with its junior coalition partner Komeito. But if the government falls into minority status in both houses, Ishiba will face heavy pressure to step down.

    Craig Mark does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Japanese prime minister’s abrupt no-show at NATO summit reveals a strained alliance with the US – https://theconversation.com/japanese-prime-ministers-abrupt-no-show-at-nato-summit-reveals-a-strained-alliance-with-the-us-259694

  • MIL-OSI Analysis: Oil shocks in the 1970s drove rapid changes in transport. It could happen again if Middle East tensions continue

    Source: The Conversation – Global Perspectives – By Hussein Dia, Professor of Future Urban Mobility, Swinburne University of Technology

    The Image Bank/Getty

    As the world watches the US–Iran situation with concern, the ripple effects from these events are reaching global oil supply chains – and exposing their fragility.

    If Iran closed the Strait of Hormuz, as it is considering, it would restrict the global oil trade and trigger energy chaos.

    Petrol in some Australian cities could hit A$2.50 a litre, according to some economists. As global instability worsens, other experts warn price spikes are increasingly likely.

    What would happen next? There is a precedent: the oil shocks of the 1970s, when oil prices quadrupled. The shock drove rapid change, from more efficient cars to sudden interest in alternative energy sources. This time, motorists would likely switch to electric vehicles.

    If this crisis continues or if another one flares up, it could mark a turning point in Australia’s long dependence on foreign oil.

    What would an oil shock mean?

    Australia currently imports 80% of its liquid fuels, the highest level on record. If the flow of oil stopped, we would have about 50 days’ worth in storage before running out.

    Our cars, buses, trucks and planes run overwhelmingly on petrol and diesel. Almost three-quarters (74%) of these liquid fuels are used in transport, with road transport accounting for more than half (54%) of all liquid fuels. Australia is highly exposed to global supply shocks.
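    The “about 50 days” figure is a simple stock-over-consumption calculation. A hypothetical sketch (the numbers below are illustrative placeholders, not official statistics):

    ```python
    # Back-of-envelope "days of cover": how long fuel stocks would last if
    # imports stopped, at the current rate of consumption. All figures
    # used here are hypothetical placeholders.

    def days_of_cover(stock_megalitres, daily_use_megalitres):
        return stock_megalitres / daily_use_megalitres

    # A stock equal to 50 days of use:
    print(days_of_cover(5_000, 100))  # 50.0
    ```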

    The best available option to reduce dependence on oil imports is to electrify transport.

    How does Australia compare on EVs?

    EV uptake in Australia continues to lag behind global leaders. In 2024, EVs accounted for 9.65% of new car sales in Australia, up from 8.45% in 2023.

    In the first quarter of 2025, EVs were 6.3% of new car sales, a decline from 7.4% in the final quarter of 2024.

    Norway remains the global leader, with battery-electric passenger cars making up 88.9% of sales in 2024. The United Kingdom also saw significant growth – EVs hit almost 20% of new car registrations in 2024.

    In China, EVs made up 40.9% of new car sales in 2024. The 12.87 million cars sold represent three-quarters of total EV sales worldwide.

    One reason for Australia’s sluggishness is a lack of reliable public chargers. While charging infrastructure is expanding, large parts of regional Australia still lack reliable access to EV charging.

    Until recently, Australia’s fuel efficiency standards were among the weakest in the OECD. Earlier this year, the government’s new standards came into force. These are expected to boost EV uptake.

    Could global tensions trigger faster action?

    If history is any guide, oil shocks lead to long-term change.

    The 1970s oil shocks triggered waves of energy reform.

    When global oil prices quadrupled in 1973–74, many nations were forced to reconsider where they got their energy. A few years later, the 1979 Iranian Revolution caused another major supply disruption, sending oil prices soaring and pushing much of the world into recession.

    Huge increases in oil prices drove people to look for alternatives during the 1970s oil shocks.
    Everett Collection/Shutterstock

    These shocks drove the formation of the International Energy Agency in 1974, spurred alternative energy investment and led to advances in fuel-efficiency standards.

    Much more recently, Russia’s invasion of Ukraine pushed the European Union to face up to its reliance on Russian gas and find alternatives by importing gas from different countries and accelerating the clean energy shift.

    Clearly, energy shocks can be catalysts for long-term structural change in how we produce and consume energy.

    The new crisis could do the same, but only if policy catches up.

    If fuel prices shot up and stayed there, consumer behaviour would begin to shift. People would drive less and seek alternative forms of transport. Over time, more would look for better ways to get around.

    But without stronger support such as incentives, infrastructure and fuel security planning, shifting consumer preferences could be too slow to matter.

    A clean-energy future is more secure

    Cutting oil dependency through electrification isn’t just good for the climate. It’s also a hedge against future price shocks and supply disruptions.

    Transport is now Australia’s third-largest source of greenhouse gas emissions. With emissions falling in the electricity sector, transport is on track to be the highest-emitting sector as soon as 2030.

    Building a cleaner transport system also means building a more resilient one. Charging EVs on locally produced renewable power cuts our exposure to global oil markets. So do biofuels, better public transport and smarter urban planning.

    Improving domestic energy resilience isn’t just about climate targets. It’s about economic stability and national security. Clean local energy sources reduce vulnerability to events beyond our control.

    What can we learn from China?

    China offers a compelling case study. The nation of 1.4 billion faces real oil security challenges. In response, Beijing has spent the past decade building a domestic clean energy ecosystem to reduce oil dependency and cut emissions.

    This is now bearing fruit. Last year, China’s oil imports saw their first sustained fall in nearly two decades. Crude oil imports fell 1.5%, while oil refinery activity also declined on lower demand.

    China’s rapid uptake of EVs has clear energy security benefits.
    pim pic/Shutterstock

    China’s green energy transition was driven by coordinated policy, industrial investment and public support for clean transport.

    China’s rapid shift to EVs and clean energy shows how long-term planning and targeted investment can pay off on climate and energy security.

    What we do next matters

    The rolling crises of 2025 present Australian policymakers with a rare alignment of interests. What’s good for the climate, for consumers and for national security may now be the same thing.

    Real change will require more than sustained high petrol prices. It demands political will, targeted investment and a long-term vision for clean, resilient transport.

    Doing nothing has a real cost – not just in what we pay at the service station, but in how vulnerable we remain to events a long way away.

    Hussein Dia receives funding from the Australian Research Council, the iMOVE Australia Cooperative Research Centre, Transport for New South Wales, Queensland Department of Transport and Main Roads, Victorian Department of Transport and Planning, and Department of Infrastructure, Transport, Regional Development, Communications and the Arts.

    ref. Oil shocks in the 1970s drove rapid changes in transport. It could happen again if Middle East tensions continue – https://theconversation.com/oil-shocks-in-the-1970s-drove-rapid-changes-in-transport-it-could-happen-again-if-middle-east-tensions-continue-259670

  • MIL-OSI Analysis: How Nato summit shows Europe and US no longer have a common enemy

    Source: The Conversation – UK – By Andrew Corbett, Senior Lecturer in Defence Studies, King’s College London

    Mark Rutte had an unenviable task at the Hague summit this week. The Nato secretary-general had to reconcile diverging American and European views of current security threats. After going to extraordinary lengths of highly deferential, overt flattery of Donald Trump to secure crucial outcomes for the alliance, he seems, for now, to have succeeded.

    But what this meeting and the run-up has made increasingly clear is that the US and Europe no longer perceive themselves as having a single common enemy. Nato was established in 1949 as a defensive alliance against the acknowledged threat from the USSR. This defined the alliance through the cold war until the dissolution of the Soviet Union in 1991. Since Russia invaded Ukraine and annexed Crimea in 2014, Nato has focused on Moscow as the major threat to international peace. But the increasingly bellicose China is demanding more attention from the US.

    There are some symbolic moves that signal how things are changing. Every Nato summit declaration since the Russian invasion of Ukraine in 2022 has used the same form of words: “We adhere to international law and to the purposes and principles of the Charter of the United Nations and are committed to upholding the rules-based international order.”

    The declaration published during the Hague summit on June 25 conspicuously mentions neither. Indeed, in a departure from recent declarations, the five paragraphs of the Hague summit declaration are brutally short and portray the alliance solely in terms of military capability and the economic investment needed to sustain it. There is no mention of international law and order this time.

    This appears to be a carefully orchestrated output of a deliberately shortened summit designed to contain Trump’s unpredictable interventions. This also seems symptomatic of a widening division between the American strategic trajectory and the security interests perceived by Canada and the European members of Nato.

    That this declaration was so short, and so focused on such a narrow range of issues, suggests there were unusually entrenched differences that could not be surmounted.

    Since the full Russian invasion of Ukraine in February 2022, the Nato allies have been united in their criticism of Russia and support for Ukraine – until now.

    Since January, the Trump administration has not authorised any military aid to Ukraine and has significantly reduced material support to Ukraine and criticism of Russia. Trump has sought to end the war rapidly on terms effectively capitulating to Russian aggression; his proposal suggests recognising Russia’s control over Crimea and de facto control over some other occupied territories (Luhansk, parts of Zaporizhzhia, Donetsk, and Kherson). He has also suggested Ukraine would not join Nato but might receive security guarantees and the right to join the EU.

    Meanwhile, European allies have sought to fund and support Ukraine’s defensive efforts, increasing aid and military support, and continuing to ramp up sanctions.

    Another sign of the differing priorities of Europe and Canada versus the US was the decision by Pete Hegseth, the US secretary of defense, to step back from leadership of the Ukraine defence contact group, an ad-hoc coalition of states across the world providing military support to Ukraine. Hegseth also, symbolically, failed to attend the group’s pre-summit meeting in June.

    Trump has long been adamant that Nato members should meet their 2014 commitment to spend 2% of their GDP on defence, and Rutte recognised that. In 2018, Trump suggested that this should be increased to 4% or 5%, but this was dismissed as unreasonable at the time. Now, in a decision which indicates increasing concern about both the threat from Russia and the reliability of US support, Nato members (except Spain) have agreed to increase defence spending to 5% of GDP over the next 10 years.

    Donald Trump gives a press conference after the Nato summit.

    Nato’s article 3 requires states to maintain and develop their capacity to resist attack. However, since 2022, it has become increasingly apparent that many Nato members are unprepared for any major military engagement. At the same time, they are increasingly feeling that Russia is more of a threat on their doorsteps. There has been recognition, particularly among the Baltic states, Germany, France and the UK that they need to increase their military spending and preparedness.

    For the US to focus more on China, US forces will shift a greater percentage of the US Navy to the Pacific. It will also assign its most capable new ships and aircraft to the region and increase general presence operations, training and developmental exercises, and engagement and cooperation with allied and other navies in the western Pacific. To do this US forces will need to reduce commitments in Europe, and European allies must replace those capabilities in order to sustain deterrence against Russia.

    The bedrock of the Nato treaty, article 5, is commonly paraphrased as “an attack on one is an attack on all”. On his way to the Hague summit, Trump seemed unsure about the US commitment to Nato. Asked to clarify this at the summit, he stated: “I stand with it [Article 5]. That’s why I’m here. If I didn’t stand with it, I wouldn’t be here.”

    Lord Ismay, the first secretary-general of Nato, famously (if apocryphally) suggested that the purpose of the alliance was to keep the Russians out, the Americans in and the Germans down. Germany is now an integral part of Nato, and the Americans are in, if distracted. But there are cracks, and Rutte will have his hands full managing Trump’s declining interest in protecting Europe if he is to keep the Russians at bay.

    Andrew Corbett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How Nato summit shows Europe and US no longer have a common enemy – https://theconversation.com/how-nato-summit-shows-europe-and-us-no-longer-have-a-common-enemy-259842

    MIL OSI Analysis

  • MIL-OSI Analysis: UK’s F-35A fighter jet deal problem: the RAF has no aircraft to refuel them in mid-air

    Source: The Conversation – UK – By Arun Dawson, PhD Candidate, Department of War Studies, King’s College London

    A1C Jake Welty

    The UK has decided to acquire at least 12 F-35A stealth fighters. These fighter jets should be able to carry out nuclear and conventional strikes from the air, a capability the Royal Air Force (RAF) has lacked since the 1990s. The deal also marks a significant move for the UK’s participation in Nato operations amid rising nuclear rhetoric from adversaries.

    The F-35A brings notable advantages over the F-35B variant already in RAF service. It’s less expensive to buy and operate, has greater range – 679 miles (1,093km) versus 517 miles (833km) – and supports a broader variety of weapons, including the nuclear-capable B61 bomb (with US agreement). Because it can spend longer in the air, it may also allow prospective RAF pilots to complete their training more quickly.

    Yet while the F-35A offers greater range than many comparable fighter jets, it still requires in-flight refuelling to operate effectively over extended distances and to return home from such missions. This exposes a critical vulnerability that has been largely overlooked in public commentary: the RAF has no tanker aircraft capable of supporting the F-35A in this way. As a result, these fighter jets – carrying nuclear ordnance or otherwise – are limited in the types of operations they can carry out.

    Unlike the F-35B which is compatible with the UK’s current fleet of tankers, the A-model depends exclusively on “flying boom” refuelling. Flying boom is one of two aerial refuelling methods. Favoured by the United States Air Force, it uses a rigid, extendable tube to deliver fuel at a high transfer rate and is generally easier for receiving pilots to operate.

    The alternative is probe-and-drogue which relies on a flexible hose and basket, connected to a probe on the receiving aircraft. While slower and more demanding to operate, it allows multiple fighters to refuel simultaneously, offers redundancy (backup options) and is simpler to integrate.

    The RAF’s refuelling predicament stems from an exclusive leasing deal negotiated under the last Labour government, which supplied only probe-and-drogue Voyager tankers. Although the aircraft were designed to support both systems, the UK opted not to include booms due to cost constraints and limited demand at the time.

    Since then, however, the UK has steadily acquired more American-made aircraft that can only use the flying boom method to refuel: the C-17 Globemaster (air transport), RC-135W Rivet Joint (intelligence), E-7 Wedgetail (airborne command and control) and P-8A Poseidon (maritime patrol).

    The F-35A announcement continues this trend, but with greater implications. While the aircraft can carry external fuel tanks to extend its range, doing so degrades its stealth, making it easier for enemy sensors – such as radar – to detect. The F-35A needs this stealth capability for nuclear missions that require penetrating contested airspace to deliver unguided B61 bombs.

    The upshot is that Britain’s F-35As – otherwise highly capable aircraft – will not be able to operate independently during critical military operations. London to eastern Europe, for instance, is roughly 1,150 miles (1,852km): nearly double the distance the F-35A can fly without refuelling. Without flying boom tankers or foreign bases for refuelling, tactical flexibility is compromised.

    This shortfall imposes a growing reliance on allied tanker support. In crisis conditions, UK aircraft could be confined to American-led operations where such tankers exist.

    This risk was manageable in previous decades, when the possibility of operating without the Americans was considered remote. But as the 2025 Strategic Defence Review concedes, the United States has been clear that the “security of Europe is no longer its primary international focus”.

    And while some Nato allies in Europe as well as Australia are increasing their flying boom capacity through a multinational fleet, the UK is not as yet part of those arrangements. Retrofitting the existing Voyager fleet remains an option, but it would require an extensive – and expensive – structural overhaul, prompting the question of whether acquiring new, compatible tankers might now be a more viable path.

    Either way, until Britain invests in flying boom capability or secures assured access from allies, it will have to accept constraints to its military power. Buying frontline jets is only part of the equation. Without the means to sustain them in the air, the UK risks fielding a force that can’t reach its target, leaving it a spectator when it matters most.

    Arun Dawson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. UK’s F-35A fighter jet deal problem: the RAF has no aircraft to refuel them in mid-air – https://theconversation.com/uks-f-35a-fighter-jet-deal-problem-the-raf-has-no-aircraft-to-refuel-them-in-mid-air-259821

    MIL OSI Analysis

  • MIL-OSI Analysis: Amid alarm over a US ‘autism registry’, people are using these tactics to avoid disability surveillance – podcast

    Source: The Conversation – UK – By Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

    Robert F. Kennedy Jr. caused controversy in April by promising to find the cause of autism by September. Claims by the new US secretary of health and human services that autism is a “preventable disease” with an environmental cause contradict a body of research suggesting that autism arises from a combination of genetic and external factors.

    The US government announced that, to support the research effort into autism, the National Institutes of Health (NIH) would partner with Medicare and Medicaid to build a “data platform” drawing on claims data, medical records and consumer wearables.

    When first announced, this plan was dubbed an autism registry, though the government later denied that was what it was creating, instead calling it a “real-world platform” that would allow researchers to study comprehensive data on people with autism.

    While the NIH defended the decision as “fully compliant with privacy and security laws”, autistic people and disability advocates are alarmed at the potential violations such a data platform could enable.

    In this episode of The Conversation Weekly podcast, we speak to Amy Gaeta, a research associate at the University of Cambridge in the UK who studies disability surveillance.

    Gaeta, who is American, explains that for over a century, disabled people have often been denied the right to privacy and been subjected to a sinister history of forced medical testing, forced sterilisation and various laws that criminalise mental illness. She says:

     I think this is why a lot of these everyday actions that disabled people do to resist surveillance don’t even come across as anti-surveillance. To them it just comes across as this is how I exist in the world.

    Gaeta talks us through some of the strategies people are using to avoid potential surveillance, from self-diagnosis to withholding information or being careful with the language they use to describe themselves. Listen to our conversation with Gaeta on The Conversation Weekly podcast.

    This episode of The Conversation Weekly was written and produced by Katie Flood with assistance from Mend Mariwany. Gemma Ware is the executive producer. Mixing and sound design by Eloise Stevens and theme music by Neeta Sarl.

    Newsclips in this episode from ABC News.

    Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here. A transcript of this episode is available on Apple Podcasts or Spotify.

    Amy Gaeta receives research funding from UKRI, a grant that is hosted at the Leverhulme Centre for the Future of Intelligence.

    ref. Amid alarm over a US ‘autism registry’, people are using these tactics to avoid disability surveillance – podcast – https://theconversation.com/amid-alarm-over-a-us-autism-registry-people-are-using-these-tactics-to-avoid-disability-surveillance-podcast-259818

    MIL OSI Analysis

  • MIL-OSI Analysis: How Bordeaux wine estates price their bottles

    Source: The Conversation – France – By Jean-Marc Figuet, Professeur d’économie, Université de Bordeaux

    On wine-rating platforms, amateur ratings better explain the price differences of bottles than professional scores. JuanGarciaHinojosa/Shutterstock

    Research in economics has unravelled the workings of the complex market for Bordeaux wines, in which perceived quality, historical reputation and critical reviews are intertwined. The question of how bottles are priced is all the more relevant amid a crisis for the Bordeaux industry, which is facing the threat of higher US tariffs on EU exports.

    Reputation, ranking, vintage and climate

    A document pertaining to the ranking of Bordeaux wines in the 19th century.
    Wikimedia Commons

    To assess the relationship between the quality and price of Bordeaux wines, Jean-Marie Cardebat and I applied the “hedonic” method. The analysis links price to the observable characteristics of a wine: its ranking, vintage, designation of origin, alcohol content, flavour, etc.

    The results are striking: the reputation of the wine estate and its official ranking, in particular that of 1855, are more powerful factors in explaining price than taste and sensory characteristics. In other words, a ranked wine, because of the prestige of its label, sells for significantly more than an unranked wine of equivalent taste and sensory appeal.
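    As a rough illustration of how such a hedonic model works, the sketch below applies entirely made-up coefficients (not the authors’ estimates) to a wine’s observable characteristics. The ranking indicator dominates the taste score, mirroring the finding that the prestige of a label outweighs sensory appeal in setting price.

    ```python
    import math

    # Hypothetical hedonic coefficients for log(price) – illustrative only,
    # not estimated from real Bordeaux data.
    COEF = {"intercept": 2.0, "ranked": 0.9, "taste": 0.03, "age": 0.02}

    def hedonic_price(ranked: bool, taste_score: float, vintage_age: int) -> float:
        """Predicted bottle price implied by observable characteristics."""
        log_price = (COEF["intercept"]
                     + COEF["ranked"] * (1 if ranked else 0)
                     + COEF["taste"] * taste_score
                     + COEF["age"] * vintage_age)
        return math.exp(log_price)

    # Two wines identical in taste and age: only the official ranking differs.
    unranked = hedonic_price(False, taste_score=90, vintage_age=10)
    ranked = hedonic_price(True, taste_score=90, vintage_age=10)
    print(f"unranked: {unranked:.2f}  ranked: {ranked:.2f}  ratio: {ranked / unranked:.2f}")
    ```

    Because the model is log-linear, the ranking dummy acts multiplicatively: under these toy coefficients, a ranked label alone scales the price by a constant factor (exp(0.9)), regardless of taste or vintage.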




    Read more:
    Our perception of wine has more to do with its commercial history than we think


    The economist Orley Ashenfelter has shown that the weather conditions of a vintage – temperature, sunshine, rainfall – are predictors of its quality and therefore its price: a simple model, based solely on climatic data.
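    A weather-only vintage model of this kind can be sketched in a few lines. The coefficients below are invented for demonstration (they are not Ashenfelter’s published estimates); the point is only the structure – warmer growing seasons and drier harvests push the quality index up.

    ```python
    # Toy weather-only vintage model in the spirit of Ashenfelter's approach.
    # Coefficients are hypothetical, chosen only to show the mechanism.
    def vintage_quality_index(growing_temp_c: float,
                              harvest_rain_mm: float,
                              winter_rain_mm: float) -> float:
        return (0.6 * (growing_temp_c - 16.0)   # warmer season raises quality
                - 0.004 * harvest_rain_mm        # rain at harvest hurts
                + 0.001 * winter_rain_mm)        # winter soil moisture helps a little

    hot_dry = vintage_quality_index(18.5, harvest_rain_mm=80, winter_rain_mm=600)
    cool_wet = vintage_quality_index(16.0, harvest_rain_mm=250, winter_rain_mm=600)
    print(f"hot/dry vintage: {hot_dry:.2f}  cool/wet vintage: {cool_wet:.2f}")
    ```

    Fed with the weather record of a vintage, such an index can be compared across years before the wine is even tasted, which is what made the approach so provocative to critics.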

    Robert Parker and the golden age of experts

    For more than 30 years, the critic Robert Parker stirred up the Bordeaux wine market. His famous scores out of 100, published in The Wine Advocate, made and broke the value of wines. The economist Robert H. Ashton measured the scores’ impact: an extra point could boost a price by 10-20%.

    Parker was the originator of a tribe of “gurus”, whose scores structured the entire early season for wines. The estates adjusted prices according to their assessments, and wine buyers followed suit, convinced of the accuracy of the scores.



    Fragmented influence

    The Bordeaux wine landscape has changed since Parker’s retirement in 2019. The critics are still around but their influence has fragmented. No one has taken over Parker’s leadership. Consensus is now less clear and rating discrepancies are more frequent.

    An even deeper turning point is evident when we compare the impact of expert and consumer ratings – notably from the Vivino platform – on the price of French red wines.

    The result is clear: in the majority of cases, the scores of amateurs surpass those of professionals when it comes to explaining price differences. The market has therefore moved from a “guru” logic to a “geek” logic, in which the collective intelligence of connected consumers now carries as much weight, if not more, than expert opinions.




    Read more:
    Appearance, aroma and mouthfeel: all you need to know to give wine tasting a go


    ‘Bordeaux bashing’

    During the “en primeur” campaign, when wine is sold as futures, the most prestigious Bordeaux wines are offered 18 months before bottling, often at a price that is supposed to be lower than the future market price. It’s a great opportunity for a bargain. Philippe Masset’s research, however, shows that most wine estates overestimate the price of their en primeur wines.

    For example, for the 2021 vintage, over 80% of the wines analysed were priced above their “fair value” as estimated by an econometric model. The more a wine is overpriced on release, the worse it performs on the secondary market. This discrepancy between asking price and perceived value feeds what is known as “Bordeaux bashing”: disaffection with wines considered too expensive, too complex, too austere and out of step with today’s expectations – young people’s in particular.

    A changing market

    While the price of Bordeaux wine is still based on its quality, origin, weather and ranking, it also depends on criticism not just by experts, but by consumers. This shift is redefining the balance of power in the world of wine.

    Reputation still pays, but prestige is no longer enough. Nonelite wine consumers are gradually taking over, gaining a new form of power over prices. If the Bordeaux market wants to emerge from crisis and reclaim its place, it will undoubtedly have to rethink the way its prices are set and perceived.

    Jean-Marc Figuet has received public funding for his research.

    ref. How Bordeaux wine estates price their bottles – https://theconversation.com/how-bordeaux-wine-estates-price-their-bottles-259830

    MIL OSI Analysis

  • MIL-OSI Analysis: A preservative removed from childhood vaccines 20 years ago is still causing controversy today − a drug safety expert explains

    Source: The Conversation – USA – By Terri Levien, Professor of Pharmacy, Washington State University

    A discredited study published in 1998 first alleged a link between vaccines and autism. Flavio Coelho/Moment via Getty Images

    An expert committee that advises the Centers for Disease Control and Prevention on vaccines is meeting for the first time since Health Secretary Robert F. Kennedy Jr. abruptly replaced the committee’s 17 members with eight hand-picked ones on June 11, 2025.

    The committee, called the Advisory Committee on Immunization Practices, generally discusses and votes on recommendations for specific vaccines. For this meeting, taking place June 25-26, 2025, vaccines for COVID-19, human papillomavirus, influenza and other infectious diseases were on the schedule. According to an updated agenda, however, the committee is now also scheduled to hear a presentation on a chemical called thimerosal and to vote on proposed recommendations regarding its use in influenza vaccines.

    Public health experts have raised concerns about the presentation, noting that anti-vaccine advocates continue to promote confusion regarding the purported health risks of thimerosal despite extensive research demonstrating its safety.

    I’m a pharmacist and expert on drug information with 35 years of experience critically evaluating the safety and effectiveness of medications in clinical trials. No evidence supports the idea that thimerosal, used as a preservative in vaccines, is unsafe or carries any health risks.

    What is thimerosal?

    Thimerosal, also known as thiomersal, is a preservative that has been used in some drug products since the 1930s because it prevents contamination by killing microbes and preventing their growth.

    In the human body, thimerosal is metabolized, or changed, to ethylmercury, an organic derivative of mercury. Studies in infants have shown that ethylmercury is quickly eliminated from the blood.

    Even though thimerosal is no longer used in childhood vaccines, many parents still worry about whether it can harm their kids.

    Ethylmercury is sometimes confused with methylmercury. Methylmercury is known to be toxic and is associated with many negative effects on brain development even at low exposure. Environmental researchers identified the neurotoxic effects of mercury in children in the 1970s, primarily resulting from exposure to methylmercury in fish. In the 1990s, the Environmental Protection Agency and the Food and Drug Administration established limits for maximum recommended exposure to methylmercury, especially for children, pregnant women and women of childbearing age.

    Why is thimerosal controversial?

    Fears about the safety of thimerosal in vaccines spread for two reasons.

    First, in 1998, a now discredited report was published in a major medical journal called The Lancet. In it, a British doctor named Andrew Wakefield described eight children who developed autism after receiving the MMR vaccine, which protects against measles, mumps and rubella. However, the patients were not compared with an unvaccinated control group, so it was impossible to draw conclusions about the vaccine’s effects. The report’s data were also later found to be falsified. And the MMR vaccine that children received in that report never contained thimerosal.

    Second, the federal guidelines on exposure limits for the toxic substance methylmercury came out at about the same time as the Wakefield study’s publication. During that period, autism was becoming more widely recognized as a developmental condition, and its rates of diagnosis were rising. People who believed Wakefield’s results conflated methylmercury with ethylmercury and promoted the unfounded idea that thimerosal-derived ethylmercury in vaccines was driving the rising rates of autism.

    The Wakefield study was retracted in 2010, and Wakefield was found guilty of dishonesty and flouting ethics protocols by the U.K. General Medical Council, as well as stripped of his medical license. Subsequent studies have not shown a relationship between the MMR vaccine and autism, but despite the absence of evidence, the idea took hold and has proven difficult to dislodge.

    The Wakefield study severely damaged many parents’ faith in the MMR vaccine, even though its results were eventually shown to be fraudulent.
    Peter Dazeley/The Image Bank, Getty Images

    Have scientists tested whether thimerosal is safe?

    No unbiased research to date has identified toxicity caused by ethylmercury in vaccines or a link between the substance and autism or other developmental concerns – and not from lack of looking.

    A 1999 review conducted by the Food and Drug Administration in response to federal guidelines on limiting mercury exposure found no evidence of harm from thimerosal as a vaccine preservative other than rare allergic reactions. Even so, as a precautionary measure in response to concerns about exposure to mercury in infants, the American Academy of Pediatrics and the U.S. Public Health Service issued a joint statement in 1999 recommending removal of thimerosal from vaccines.

    At that time, just one childhood vaccine was available only in a version that contained thimerosal as an ingredient. This was a vaccine called DTP, for diphtheria, tetanus and pertussis. Other childhood vaccines were either available only in formulations without thimerosal or could be obtained in versions that did not contain it.

    By 2001, U.S. manufacturers had removed thimerosal from almost all vaccines – and from all vaccines in the childhood vaccination schedule.

    In 2004, the U.S. Institute of Medicine Immunization Safety Review Committee reviewed over 200 scientific studies and concluded there is no causal relationship between thimerosal-containing vaccines and autism. Additional well-conducted studies reviewed independently by the CDC and by the FDA did not find a link between thimerosal-containing vaccines and autism or neuropsychological delays.

    How is thimerosal used today?

    In the U.S., most vaccines are now available in single-dose vials or syringes. Thimerosal is found only in multidose vials that are used to supply vaccines for large-scale immunization efforts – specifically, in a small number of influenza vaccines. It is not added to modern childhood vaccines, and people who get a flu vaccine can avoid it by requesting a vaccine supplied in a single-dose vial or syringe.

    Thimerosal is still used in vaccines in some other countries to ensure continued availability of necessary vaccines. The World Health Organization continues to affirm that there is no evidence of toxicity in infants, children or adults exposed to thimerosal-containing vaccines.

    Terri Levien does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. A preservative removed from childhood vaccines 20 years ago is still causing controversy today − a drug safety expert explains – https://theconversation.com/a-preservative-removed-from-childhood-vaccines-20-years-ago-is-still-causing-controversy-today-a-drug-safety-expert-explains-259442

    MIL OSI Analysis


  • MIL-OSI Analysis: Checking in on New England fisheries 25 years after ‘The Perfect Storm’ movie

    Source: The Conversation – USA – By Stephanie Otts, Director of National Sea Grant Law Center, University of Mississippi

    Filming ‘The Perfect Storm’ in Gloucester Harbor, Mass.
    The Salem News Historic Photograph Collection, Salem State University Archives and Special Collections, CC BY

    Twenty-five years ago, “The Perfect Storm” roared into movie theaters. The disaster flick, starring George Clooney and Mark Wahlberg, was a riveting, fictionalized account of commercial swordfishing in New England and a crew who went down in a violent storm.

    The anniversary of the film’s release, on June 30, 2000, provides an opportunity to reflect on the real-life changes to New England’s commercial fishing industry.

    Fishing was once more open to all

    In the true story behind the movie, six men lost their lives in late October 1991 when the commercial swordfishing vessel Andrea Gail disappeared in a fierce storm in the North Atlantic as it was headed home to Gloucester, Massachusetts.

    At the time, and until very recently, almost all commercial fisheries were open access, meaning there were no restrictions on who could fish.

    There were permit requirements and regulations about where, when and how you could fish, but anyone with the means to purchase a boat and associated permits, gear, bait and fuel could enter the fishery. Eight regional councils established under a 1976 federal law to manage fisheries around the U.S. determined how many fish could be harvested prior to the start of each fishing season.

    Fishing has been an integral part of coastal New England culture since its towns were established. In this 1899 photo, a New England community weighs and packs mackerel.
    Charles Stevenson/Freshwater and Marine Image Bank

    Fishing started when the season opened and continued until the catch limit was reached. In some fisheries, this resulted in a “race to the fish” or a “derby,” where vessels competed aggressively to harvest the available catch in short amounts of time. The limit could be reached in a single day, as happened in the Pacific halibut fishery in the late 1980s.

    By the 1990s, however, open access systems were coming under increased criticism from economists as concerns about overfishing rose.

    The fish catch peaked in New England in 1987 and would remain far above what the fish population could sustain for two more decades. Years of overfishing led to the collapse of fish stocks, including North Atlantic cod in 1992 and Pacific sardine in 2015.

    As populations declined, managers responded by cutting catch limits to allow more fish to survive and reproduce. Fishing seasons were shortened, as it took less time for the fleets to harvest the allowed catch. It became increasingly hard for fishermen to catch enough fish to earn a living.

    Saving fisheries changed the industry

    In the early 2000s, as these economic and environmental challenges grew, fisheries managers started limiting access. Instead of allowing anyone to fish, only vessels or individuals meeting certain eligibility requirements would have the right to fish.

    The most common method of limiting access in the U.S. is through limited entry permits, initially awarded to individuals or vessels based on previous participation or success in the fishery. Another approach is to assign individual harvest quotas or “catch shares” to permit holders, limiting how much each boat can bring in.

    In 2007, Congress amended the 1976 Magnuson-Stevens Fishery Conservation and Management Act to promote the use of limited access programs in U.S. fisheries.

    Ships in the fleet out of New Bedford, Mass.
    Henry Zbyszynski/Flickr, CC BY

    Today, limited access is common, and there are positive signs that the management change is helping achieve the law’s environmental goal of preventing overfishing. Since 2000, the populations of 50 major fishing stocks have been rebuilt, meaning they have recovered to a level that can once again support fishing.

    I’ve been following the changes as a lawyer focused on ocean and coastal issues, and I see much work still to be done.

    Forty fish stocks are currently being managed under rebuilding plans that limit catch to allow the stock to grow, including Atlantic cod, which has struggled to recover due to a complex combination of factors, including climatic changes.

    The lingering effect on communities today

    While many fish stocks have recovered, the effort came at an economic cost to many individual fishermen. The limited-access Northeast groundfish fishery, which includes Atlantic cod, haddock and flounder, shed nearly 800 crew positions between 2007 and 2015.

    The loss of jobs and revenue from fishing impacts individual family income and relationships, strains other businesses in fishing communities, and affects those communities’ overall identity and resilience, as illustrated by a recent economic snapshot of the Alaska seafood industry.

    When original limited-access permit holders leave the business – for economic, personal or other reasons – their permits are either terminated or sold to other eligible permit holders, leading to fewer active vessels in the fleet. As a result, the number of vessels fishing for groundfish has declined from 719 in 2007 to 194 in 2023, meaning fewer jobs.

    A fisherman unloads a portion of his catch for the day of 300 pounds of groundfish, including flounder, in January 2006 in Gloucester, Mass.
    AP Photo/Lisa Poole

    Because of their scarcity, limited-access permits can cost upward of US$500,000, which is often beyond the financial means of a small business or a young person seeking to enter the industry. The high prices may also lead retiring fishermen to sell their permits, as opposed to passing them along with the vessels to the next generation.

    These economic forces have significantly altered the fishing industry, leading to more corporate and investor ownership, rather than the family-owned operations that were more common in the Andrea Gail’s time.

    Similar to the experience of small family farms, fishing captains and crews are being pushed into corporate arrangements that reduce their autonomy and revenues.

    Consolidation can threaten the future of entire fleets, as New Bedford, Massachusetts, saw when Blue Harvest Fisheries, backed by a private equity firm, bought up vessels and other assets and then declared bankruptcy a few years later, leaving a smaller fleet and some local businesses and fishermen unpaid for their work. A company with local connections bought eight vessels from Blue Harvest along with 48 state and federal permits the company held.

    New challenges and unchanging risks

    While there are signs of recovery for New England’s fisheries, challenges continue.

    Warming water temperatures have shifted the distribution of some species, affecting where and when fish are harvested. For example, lobsters have moved north toward Canada. When vessels need to travel farther to find fish, that increases fuel and supply costs and time away from home.

    Fisheries managers will need to continue to adapt to keep New England’s fisheries healthy and productive.

    One thing that, unfortunately, hasn’t changed is the dangerous nature of the occupation. Between 2000 and 2019, 414 fishermen died in 245 disasters.

    Stephanie Otts receives funding from the NOAA National Sea Grant College Program through the U.S. Department of Commerce. Previous support for fisheries management legal research provided by The Nature Conservancy.

    ref. Checking in on New England fisheries 25 years after ‘The Perfect Storm’ movie – https://theconversation.com/checking-in-on-new-england-fisheries-25-years-after-the-perfect-storm-movie-255076

    MIL OSI Analysis


  • MIL-OSI Analysis: How’s the UK attempt to reach net zero going? There’s good news and bad news

    Source: The Conversation – UK – By John Barrett, Professor of Energy and Climate Policy, Deputy Director of the Priestley Centre for Climate Futures, Theme Lead for the UKRI Energy Demand Research Centre, University of Leeds

    BOY ANTHONY/Shutterstock

    Each year, the Climate Change Committee – the UK’s independent advisory body tasked with monitoring the country’s progress toward its legally binding climate goals – publishes a report on the government’s performance over the past year.

    The Climate Change Committee’s new 2025 progress report is a mix of good and bad news about whether the UK is on track to meet its greenhouse gas emissions targets. These include a 68% reduction by 2030 and an 81% reduction by 2035, relative to 1990 levels.

    Meeting these targets requires long lead times. It takes years to develop and deploy low-carbon technologies, change social practices and align industrial and economic policy with net zero ambitions. The Climate Change Committee’s analysis goes beyond simply measuring emissions — it also evaluates whether the right policies are in place across sectors such as transport, buildings, energy and industry.

    So how is the UK doing? Between 1990 and 2024, the UK halved its greenhouse gas emissions, primarily by decarbonising the power sector, improving energy efficiency and shifting its industrial base. This equates to an average annual reduction of 0.7%.

    Since the committee was established in 2008, the rate of reduction has more than doubled. In the last decade, since the Paris agreement was signed in 2015, the UK has decarbonised at around 3.4% per year. To meet the 2030 and 2035 targets, the pace of reduction has to continue at this level, but from a wider set of sectors.

    However, the analysis in the CCC report suggests that even this may not be fast enough. A major scientific review recently warned the world has just three years left in its global carbon budget if we are to stay within the 1.5°C temperature limit agreed in the Paris agreement.

    A mixed picture

    We are both involved with the committee and its work. Piers Forster, a climate scientist, has served on the committee since 2018 and is currently its chair. John Barrett provides key data on imported emissions and regularly contributes analysis to the committee’s work.

    On the positive side, the UK continues to expand renewable energy capacity, which not only cuts emissions but lowers energy bills and improves energy security. Emissions from the energy supply sector decreased 17% last year.

    A fifth of new vehicles sold are now electric. For the first time, evidence shows that electric cars are causing transport emissions to decline, even as people are travelling more. Tree planting rates also increased by 56% last year, mainly in Scotland.

    However, this report highlights serious gaps. With only five years left until 2030, the Climate Change Committee estimates that 39% of the required emissions reductions are not adequately backed by government policy.

    Growing demand in high-carbon sectors like aviation is offsetting gains made in electricity generation. Aviation emissions are now larger than those from electricity generation, and rising fast.

    Time is running out and climate action is urgently required.
    banu sevim/Shutterstock

    Although nearly 100,000 heat pumps were installed last year, emissions from buildings are still rising. In road transport, while electric vehicle adoption is growing, there’s been little shift towards shared public transport options such as buses and trains. In industry, policies around resource efficiency and consumption remain underdeveloped.

    Critically, the Climate Change Committee notes that electricity currently accounts for just 18% of the UK’s total energy demand, and suggests that 80% of required emissions reductions must come from sectors beyond energy supply. The rates of decarbonisation need to more than double in these other sectors.

    Yet, policy to reduce overall energy demand remains weak. This agenda goes beyond reducing household energy bills: it requires a more fundamental appreciation of how the UK's energy demand can be shaped in the future.

    The UK cannot rely on technology alone. The climate transition can benefit from changes in how we live, move, consume and produce. Making such changes would make us less dependent on fossil fuel imports, put more money in our pockets through efficiency savings, make us healthier by improving air quality and increasing exercise through more active travel such as walking and cycling, and make our homes more comfortable in both hot and cold conditions.

    A truly credible response to the climate crisis demands a whole-system approach. That means aligning climate goals with economic and social policy, and recognising the broader benefits — from improved health to reduced inequality — that come with reducing energy demand.

    The window to act is closing. The UK has made progress, but without more ambitious and integrated action, it risks falling short when it matters most.

    According to the Climate Change Committee report, the UK can deliver both its legislated targets and its internationally committed emission reduction targets if it takes decisive policy action. With the right political will, that is possible in a cost-effective way that improves the lives of its citizens.




    John Barrett receives funding from UK Research and Innovation (UKRI) and the Department of Energy Security and Net Zero (DESNZ).

    Piers Forster receives funding from UK and European research councils. He is interim chair of the Climate Change Committee.

    ref. How’s the UK attempt to reach net zero going? There’s good news and bad news – https://theconversation.com/hows-the-uk-attempt-to-reach-net-zero-going-theres-good-news-and-bad-news-259580

    MIL OSI Analysis

  • MIL-OSI Analysis: A chance discovery of a 365-million-year-old fossil reveals a new type of ray-finned fish

    Source: The Conversation – Canada – By Conrad Daniel Mackenzie Wilson, PhD candidate in Earth Sciences, Carleton University

    An artist’s rendition of the newly discovered fish, _Sphyragnathus tyche_. (C. Wilson), CC BY

    In 2015, two members of the Blue Beach Fossil Museum in Nova Scotia found a long, curved fossil jaw, bristling with teeth. Sonja Wood, the museum’s owner, and Chris Mansky, the museum’s curator, found the fossil in a creek after Wood had a hunch.

    The fossil they found belonged to a fish that had died 350 million years ago, its bony husk spanning nearly a metre on the lake bed. The large fish had lived in waters thick with rival fish, including giants several times its size. It had hooked teeth at the tip of its long jaw that it would use to trap elusive prey and fangs at the back to pierce it and break it down to eat.

    For the last eight years, I have been part of a team led by paleontologist Jason Anderson, who has spent decades researching the Blue Beach area of Nova Scotia, northwest of Halifax, in collaboration with Mansky and other colleagues. Much of this work has been on the tetrapods — the group that includes the first vertebrates to move to land and all their descendants — but my research focuses on what Blue Beach fossils can tell us about how the modern vertebrate world formed.

    Blue Beach Fossil Museum curator Chris Mansky below the fossil cliffs.
    (C. Wilson), CC BY

    Birth of the modern vertebrate world

    The modern vertebrate world is defined by the dominance of three groups: the cartilaginous fishes or chondrichthyans (including sharks, rays and chimaeras), the lobe-finned fishes or sarcopterygians (including tetrapods and rare lungfishes and coelacanths), and the ray-finned fishes or actinopterygians (including everything from sturgeon to tuna). Only a few jawless fishes round out the picture.

    This basic grouping has remained remarkably consistent — at least for the last 350 million years.

    Before then, the vertebrate world was a lot more crowded. In the ancient vertebrate world, during the Silurian Period (443.7-419.2 MA) for example, the ancestors of modern vertebrates swam alongside spiny pseudo-sharks (acanthodians), fishy sarcopterygians, placoderms and jawless fishes with bony shells.

    Armoured jawless fishes had dwindled by the Late Devonian Period (419.2-358.9 MA), but the rest were still diverse. Actinopterygians were still restricted to a few species with similar body shapes.

    By the immediately succeeding early Carboniferous times, everything had changed. The placoderms were gone, the number of species of fishy sarcopterygians and acanthodians had cratered, and actinopterygians and chondrichthyans were flourishing in their place.

    The modern vertebrate world was born.

    A shortnose chimaera, belonging to the chondrichthyan group of vertebrates.
    (Shutterstock)

    A sea change

    Blue Beach has helped build our understanding of how this happened. Studies describing its tetrapods and actinopterygians have shown the persistence of Devonian-style forms in the Carboniferous Period.

    Whereas the abrupt end-Devonian decline of the placoderms, acanthodians and fishy sarcopterygians can be explained by a mass extinction, it now appears that multiple types of actinopterygians and tetrapods survived to be preserved at Blue Beach. This makes a big difference to the overall story: Devonian-style tetrapods and actinopterygians survive and contribute to the evolution of these groups into the Carboniferous Period.

    But significant questions remain for paleontologists. One point of debate revolves around how actinopterygians diversified as the modern vertebrate world was born — whether they explored new ways of feeding or swimming first.

    Comparing the jawbones of Sphyragnathus, Austelliscus and Tegeolepis.
    (C. Wilson), CC BY

    The Blue Beach fossil was actinopterygian, and we wondered what it could tell us about this issue. Comparison was difficult. Two actinopterygians with long jaws and large fangs were known from the preceding Devonian Period (Austelliscus ferox and Tegeolepis clarki), but the newly found jaw differed in its more extreme curvature and in the arrangement of its teeth. Its largest fangs are at the back of its jaw, whereas the largest fangs of Austelliscus and Tegeolepis are at the front.

    These differences were significant enough that we created a new genus and species: Sphyragnathus tyche. And, in view of the debate on actinopterygian diversification, we made a prediction: that the differences in anatomy between Sphyragnathus and Devonian actinopterygians represented different adaptations for feeding.

    Front fangs

    To test this prediction, we compared Sphyragnathus, Austelliscus and Tegeolepis to living actinopterygians. In modern actinopterygians, the difference in anatomy reflects a difference in function: front-fangs capture prey with their front teeth and grip it with their back teeth, but back-fangs use their back teeth to pierce it.

    Since we couldn’t observe the fossil fish in action, we analyzed the stress their teeth would experience if we applied force. The back teeth of Sphyragnathus handled force with low stress, making them suited for a role in piercing prey, but the back teeth of Austelliscus and Tegeolepis turned low forces into significantly higher stress, making them best suited for gripping.
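    The study's actual biomechanical analysis is not reproduced here, but the intuition that tooth shape governs stress under load can be sketched with textbook beam bending: treating a tooth as a cylindrical cantilever loaded at its tip, a short, wide tooth concentrates far less stress at its base than a long, slender one. The dimensions and load below are invented purely for illustration, not measured from the fossils.

    ```python
    import math


    def base_bending_stress(force_n: float, length_m: float, diameter_m: float) -> float:
        """Maximum bending stress (Pa) at the base of a cylindrical cantilever
        under a tip load: sigma = 32 * F * L / (pi * d**3)."""
        return 32 * force_n * length_m / (math.pi * diameter_m ** 3)


    # Hypothetical tooth geometries under the same 10 N bite load.
    slender = base_bending_stress(10, 0.010, 0.002)  # long, thin tooth
    stout = base_bending_stress(10, 0.005, 0.004)    # short, wide tooth

    print(f"slender tooth: {slender / 1e6:.0f} MPa, stout tooth: {stout / 1e6:.0f} MPa")
    ```

    With these made-up dimensions the slender tooth carries about sixteen times the base stress of the stout one, which is the qualitative pattern the piercing-versus-gripping comparison turns on.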

    We concluded that Sphyragnathus was the earliest actinopterygian adapted for breaking down prey by piercing, which also matches the broader predictions of the feeding-first hypothesis.

    Substantial work remains — only the jaw of Sphyragnathus is preserved, so the “locomotion-first” hypothesis could not be tested. But this represents the challenge and promise of paleontology: get enough tantalizing glimpses into the past and you can begin to unfold a history.

    As for the actinopterygians, research indicates they survived and diversified during Devonian times and took on shifting roles during the birth of the modern vertebrate world — at least, that is the picture until more fossils are found to test it.

    Conrad Daniel Mackenzie Wilson receives funding from the Natural Sciences and Engineering Research Council of Canada, the Ontario Student Assistance Program, and the Society of Vertebrate Paleontology.

    ref. A chance discovery of a 365-million-year-old fossil reveals a new type of ray-finned fish – https://theconversation.com/a-chance-discovery-of-a-365-million-year-old-fossil-reveals-a-new-type-of-ray-finned-fish-254246


  • MIL-OSI Analysis: Hidden gems of LGBTQ+ cinema: A League of Their Own was always queer

    Source: The Conversation – UK – By Kate McNicholas Smith, Lecturer in Television Theory, University of Westminster

    The sports comedy drama A League of Their Own, directed by Penny Marshall, was released in 1992. In the same year, professor and film critic B Ruby Rich coined the term “new queer cinema” to describe a wave of independent films which represented LGBTQ+ people in new and unapologetic ways.

    Meanwhile on television, the decade saw some groundbreaking representations of LGBTQ+ characters. In 1997, US actor and TV presenter Ellen DeGeneres famously came out on and off screen.

    Yet when I was a teenager coming of age (and coming out) in late 1990s Britain, Section 28 (a law prohibiting the “promotion” of homosexuality by local authorities and schools) was still firmly in place and representation felt scarce. So, I did what queer audiences have always done and found representation in interpretation, reimagining and reading the subtext.

    Queer viewers have long found pleasure and queer possibilities in popular culture. There are many examples of stars and screen characters who are not necessarily LGBTQ+ themselves but have come to be distinctly associated with queer culture. Take singer and actress Judy Garland, who is widely recognised as a gay icon (as depicted in the 2019 biographical film Judy).

    So big was her LGBTQ+ fandom that she likely inspired the historical code term “a friend of Dorothy”. This code references The Wizard of Oz, in which Garland plays Dorothy, and was used within the LGBTQ+ community to discreetly identify each other.


    This article is part of a series highlighting brilliant films that should be more widely known and firmly part of the canon of queer cinema.


    Film theorist Patricia White traces such viewing practices back to the introduction of the Motion Picture Production (or Hays) Code. The Code heavily restricted what could be shown on screen and prohibited LGBTQ+ representation, but in doing so encouraged audiences to engage in queer codes and subtexts.

    A League of Their Own tells the fictionalised true story of the All-American Girls Professional Baseball League. In 1988, Dottie Hinson (Geena Davis) is attending a celebration of the women at the Baseball Hall of Fame. We quickly flash back to 1943 and the formation of the league.

    The second world war is in full swing and the men are away fighting, which threatens to shut down major league baseball. However, Chicago Cubs owner Walter Harvey persuades his fellow owners to bankroll a women’s league.

    Making up the newly formed Rockford Peaches, there’s Davis as Dottie and Lori Petty as Kit, Dottie’s frustrated younger sister. Also on the team are “tomboy” Marla Hooch (Megan Cavanagh), “all the way” Mae Mordabito, played by Madonna (who once declared “I think everybody has a bisexual nature”), and Doris Murphy, played by lesbian comic, actor and talk show host, Rosie O’Donnell (although O’Donnell didn’t come out publicly until 2002).


    While the film remains determinedly heterosexual, the possibilities for queer readings abound. Characters like Dottie and Mae offer glamorous high femme looks and personas, while Kit and Marla represent outsiders who don’t quite fit in. The close relationship, styling and characterisations of best friends Doris and Mae (and the extra connotations of the actors) evoke a coded butch/femme couple. No surprise then that I am not alone in my love for the film. A League of their Own became a cult queer classic.


    There may be, as reluctant Rockford Peaches manager Jimmy (Tom Hanks) shouts in one of the film’s most quoted lines, “no crying in baseball” – but the film never fails to leave me in tears.

    Every time I watch Dottie leaving the league to return to her husband Bob – a narrative resolution that firmly forecloses the queer possibilities of the character – my heart is broken. The melancholy of the ending perhaps reflects the seeming impossibility of a queer future – both in the 1940s US and for me at school in 1990s Britain. Of course, queerness was far from impossible in either decade, although it was often, as in the film, hidden from those who did not know where to look for it.

    Rockford Peach Dorothy “Dottie” Kamenshek was one of the inspirations for the fictional Dottie – she was also a lesbian and later married fellow player Margaret Wenzell. Another player in the women’s league at the time, Peoria Redwings catcher Terry Donahue, kept her relationship with Pat Henschel a secret for almost 70 years. In 2020, Netflix documentary, A Secret Love, told their story.

    Maybelle Blair, who also played for a time with the Peoria Redwings, came out publicly at 95 years old in 2022. She reflected on the women of the league: “Out of 650, I bet you 400 was gay.”

    In 2022, Amazon Prime released a television adaptation of A League of Their Own, co-created by Will Graham and Abbi Jacobson (Broad City). Like queer fan fiction come to life, the television show rewrites the central characters as canonically queer.

    What’s more, unlike the film, the series offers a diverse take on the racism and homophobia, as well as the sexism, of the era. This time round, the central characters included Maxine Chapman (Chanté Adams) – a black lesbian player who is rejected from the racially segregated league – and her black transmasculine uncle Bertie (Lea Robinson).

    In one episode, the queer teammates visit a lesbian bar run by none other than Rosie O’Donnell, now a 1940s butch with a wife. To gain entry they are asked: “Are you a friend of Dorothy’s?”

    Thus, the queer subtext of A League of Their Own, which so captured my queer teen heart, emerged firmly into view in the television adaptation, which was sadly cancelled after only one series. Watching the series, however, was validating, as what secretly made the film mean so much to me was made visible. Queerness in the show, like in my own life, was no longer an impossibility.

    Kate McNicholas Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Hidden gems of LGBTQ+ cinema: A League of Their Own was always queer – https://theconversation.com/hidden-gems-of-lgbtq-cinema-a-league-of-their-own-was-always-queer-257061
