Category: Academic Analysis

  • MIL-OSI Global: Why was it so hard for the GOP to pass its spending bill?

    Source: The Conversation – USA – By Charlie Hunt, Assistant Professor of Political Science, Boise State University

    U.S. Sen. John Fetterman of Pennsylvania was one of 10 Democrats who voted to break the filibuster on the GOP funding bill. Anna Moneymaker/Getty Images

    Facing a threat of imminent government shutdown, nine Democrats joined GOP Senate colleagues to defeat a filibuster, moving a six-month government funding bill to final passage in a late-day vote on March 14, 2025.

    Since January 2025, Republicans in Washington have enjoyed what’s commonly known as a governing “trifecta”: control over the executive branch via the president, combined with majorities for their party in both the House and the Senate.

    You might think that a trifecta, which is also referred to as “unified government” by political scientists, is a clear recipe for easy legislative success. In theory, when political parties have unified control over the House, the Senate and the presidency, there should be less conflict between them. Because these politicians are part of the same political party and have the same broad goals, it seems like they should be able to get their agenda approved, and the opposing minority party can do little to stop them.

    But not all trifectas are created equal, and not all are dominant. Several weaknesses in the Republicans’ trifecta made passing their six-month stopgap spending bill difficult, and they help explain why the federal government came so close to shutting down completely.

    Research shows that political gridlock can still happen even under a unified government for reasons that have been on display ever since Republicans assumed leadership of Congress and the presidency in January.

    With a slim majority, will GOP House Speaker Mike Johnson, left, be able to pass Donald Trump’s priorities?
    Andrew Harnik/Getty Images

    Majority size matters

    A unified government clearly makes President Donald Trump’s ability to enact his agenda much easier than if, for example, Democrats controlled the U.S. House, as they did during the second half of his first term, from 2019 to 2021. But tight margins in both congressional chambers have meant that, even with a trifecta, it hasn’t been easy.

    Trump was the sixth consecutive president with a trifecta on Day 1 of his second term. But history – and simple math – show that presidents with trifectas have an easier time passing partisan legislation with bigger majorities. Bigger majorities mean majority-party defections won’t easily sink controversial or partisan legislation. A bigger majority also means that individual members of Congress from either party have less leverage to water down the president’s policy requests.

    Trump also held a trifecta during the beginning of his first term in office; in particular, a big Republican majority in the House, which passed major legislation with relative ease and put pressure on Senate colleagues to comply. Trump signed a major tax reform package in 2017 that was the signature legislative achievement of his first term.

    But Trump has a much smaller advantage this time.

    Every president since Bill Clinton has entered office with a trifecta, but Trump’s seat advantage in the House on Day 1 of his second term was the smallest of all of them. This slim House margin meant that Republicans could afford to lose only a handful of their party’s votes on their spending bill in order for it to pass over unanimous Democratic opposition.

    And Trump’s relatively small advantage in the Senate meant that Republicans needed at least eight Democratic votes to break a filibuster. Nine Democrats ultimately voted to advance the bill to final passage.

    Majority party troubles

    In addition to opposition from Democrats in Congress, Trump and other Republican leaders have continued to confront internal divisions within their own party.

    In a closely divided House or Senate, there are plenty of tools that Democrats, even as the minority party, can use to stymie Trump’s agenda. This most notably includes the filibuster, which would have forced Republicans to garner 60 votes for their short-term spending bill. A small proportion of Democrats ultimately bailed out Senate Republicans in this case; but any major defections within the GOP would have required even more Democratic support, which Republicans were unlikely to get.

    Even dominant legislative trifectas, like the one former President Barack Obama enjoyed when he took office in 2009, can’t prevent divisions within political parties, as different politicians jockey for control of the party’s agenda.

    Despite entering office with a 17-vote advantage in the Senate, 11 more than Trump enjoys now, Obama’s signature legislative achievement – the Affordable Care Act, also sometimes known as Obamacare – had to be watered down significantly to win a simple majority after backlash from conservative Democrats.

    Obama’s trifecta was bigger, but in a polarized America, a large majority also means an ideologically diverse one.

    Just as Republican leaders did in the last Congress, Trump has faced similar pushback behind the scenes and in public from members of his own party in his second term. For the past two years, the Republican-led House has been repeatedly riven by leadership struggles and an often aimless legislative agenda, thanks to a lack of cooperation from the party’s far-right flank.

    This group of ideologically driven lawmakers remains large enough to stall any party-line vote that Speaker Mike Johnson hopes to pass, and the spending bill very nearly fell victim to this kind of defection.

    Even though the GOP squeaked out a win on this spending bill, the potential for continued chaos is monumental, especially if Trump pursues more major reform to policy areas such as immigration.

    Competing pressures

    Despite Congress’ reputation as a polarized partisan body, members of Congress ultimately serve multiple masters. The lingering Republican divisions that made it so difficult to pass this resolution reflect the competing pressures of national party leaders in Washington and the local politics of each member’s district, which often cut against what party leaders want.

    For example, some Republicans represent heavily Republican districts and will be happy to go along with Trump’s agenda, regardless of how extreme it is. Others represent districts won by Kamala Harris in 2024 and might be more inclined to moderate their positions to keep their seats in 2026 and beyond. There admittedly aren’t many in this latter group, but there are likely enough of them to sink any party-line legislation Speaker Johnson has in mind.

    What’s next?

    Republicans managed to pass a hurried stopgap spending bill on March 14, 2025, only by the skin of their teeth. Failing to do so would have driven the federal government into shutdown mode. Small margins, internal divisions and conflicting electoral pressures will continue to make legislating difficult over the next two years or more.

    Thanks to these complications, it may be that congressional Republicans will continue to rely on the executive branch, including Elon Musk and the efforts at the Department of Government Efficiency, or DOGE, to do the policymaking for them, even if it means handing over their own legislative power to Trump.

    This is an updated version of a story first published on Nov. 19, 2024.

    Charlie Hunt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why was it so hard for the GOP to pass its spending bill? – https://theconversation.com/why-was-it-so-hard-for-the-gop-to-pass-its-spending-bill-252257

    MIL OSI – Global Reports

  • MIL-OSI Global: Four small planets discovered around one of the closest stars to Earth – an expert explains what we know

    Source: The Conversation – UK – By Coel Hellier, Professor of Astrophysics, Keele University

    Barnard’s Star is a small, dim star, of the type that astronomers call red dwarfs. Consequently, even though it is one of the closest stars to Earth, such that its light takes only six years to get here, it is too faint to be seen with the naked eye. Now, four small planets have been found orbiting the star. Teams in America and Europe achieved this challenging detection by exploiting precision instruments on the world’s largest telescopes.

    Diminutive Barnard’s Star is closer in size to Jupiter than to the Sun. Only the three stars that make up the Alpha Centauri system lie closer to us.

    The planets newly discovered around Barnard’s Star are much too faint to be seen directly, so how were they found? The answer lies in the effect of their gravity on the star. The mutual gravitational attraction keeps the planets in their orbits, but it also tugs on the star, moving it in a rhythmic dance that can be detected by sensitive spectrograph instruments. Spectrographs split the star’s light into its component wavelengths; tiny shifts in those wavelengths reveal the star’s motion towards and away from us.

    A significant challenge for detection, however, is the star’s own behaviour. Stars are fluid, with the nuclear furnace at their core driving churning motions that generate a magnetic field (just as the churning of Earth’s molten core produces Earth’s magnetic field). The surfaces of red dwarf stars are rife with magnetic storms. This activity can mimic the signature of a planet when there isn’t one there.

    The task of finding planets by this method starts with building highly sensitive spectrograph instruments. They are mounted on telescopes large enough to capture sufficient light from the star. The light is then sent to the spectrograph which records the data. The astronomers then observe a star over months or years. After carefully calibrating the resulting data, and accounting for stellar magnetic activity, one can then scrutinise the data for the tiny signals that reveal orbiting planets.

    In 2024, a team led by Jonay González Hernández from the Canary Islands Astrophysics Institute reported on four years of monitoring of Barnard’s Star with the Espresso spectrograph on the European Southern Observatory’s Very Large Telescope in Chile. They found one definite planet and reported tentative signals that indicated three more planets.

    Now, a team led by Ritvik Basant from the University of Chicago, in a paper just published in Astrophysical Journal Letters, has added three years of monitoring with the Maroon-X instrument on the Gemini North telescope. Analysing this data confirmed the existence of three of the four planets, while combining the two datasets showed that all four planets are real.

    Often in science, when detections push the limits of current capabilities, one needs to ponder the reliability of the findings. Are there spurious instrumental effects that the teams haven’t accounted for? Hence it is reassuring when independent teams, using different telescopes, instruments and computer codes, arrive at the same conclusions.

    The Gemini North telescope is located on Maunakea in Hawaii.
    MarkoBeg / Shutterstock

    The planets form a tightly packed, close-in system, having short orbital periods of between two and seven Earth days (for comparison, our Sun’s closest planet, Mercury, orbits in 88 days). It is likely they all have masses less than Earth’s. They’re probably rocky planets, with bare-rock surfaces blasted by their star’s radiation. They’ll be too hot to hold liquid water, and any atmosphere is likely to have been stripped away.

    The teams looked for longer-period planets, further out in the star’s habitable zone, but didn’t find any. We don’t know much else about the new planets, such as their estimated sizes. The best way of figuring that out would be to watch for transits, when planets pass in front of their star, and then measure how much starlight they block. But the Barnard’s Star planets are not orientated in such a way that we see them “edge on” from our perspective. This means that the planets don’t transit, making them harder to study.

    Nevertheless, the Barnard’s Star planets tell us about planetary formation. They’ll have formed in a protoplanetary disk of material that swirled around the star when it was young. Particles of dust will have stuck together, and gradually built up into rocks that aggregated into planets. Red dwarfs are the most common type of star, and most of them seem to have planets. Whenever we have sufficient observations of such stars we find planets, so there are likely to be far more planets in our galaxy than there are stars.

    Most of the planets that have been discovered are close to their star, well inside the habitable zone (where liquid water could survive on the planet’s surface), but that’s largely because their proximity makes them much easier to find. Being closer in means that their gravitational tug is bigger, and it means that they have shorter orbital periods (so we don’t have to monitor the star for as long). It also increases their likelihood of transiting, and thus of being found in transit surveys.

    The European Space Agency’s Plato mission, to be launched in 2026, is designed to find planets further from their stars. This should produce many more planets in their habitable zones, and should begin to tell us whether our own solar system, which has no close-in planets, is unusual.

    Coel Hellier has received research council grants for the discovery of exoplanets.

    ref. Four small planets discovered around one of the closest stars to Earth – an expert explains what we know – https://theconversation.com/four-small-planets-discovered-around-one-of-the-closest-stars-to-earth-an-expert-explains-what-we-know-252075

  • MIL-OSI Global: What food did the real St Patrick eat? Less corned beef and cabbage, more oats and stinky cheese

    Source: The Conversation – UK – By Regina Sexton, Food and culinary historian, University College Cork

    Every St Patrick’s Day, thousands of Americans eat corned beef and cabbage as a way of connecting to Ireland. But this association sits uncomfortably with many Irish people.

    That’s because the dish, while popular in the past, has nothing to do with St Patrick himself. St Patrick (also known as Patricius or Pádraig) was born in Roman Britain in the 5th century. He is the patron saint of Ireland and in later biographies, legend and folklore, he is depicted as almost single-handedly converting the Irish to Christianity, and breaking the power of the druids.

    The entangled mix of history, myth and folklore that has been attached to the saint makes it difficult to isolate historical fact from hagiographical and folklore embellishments. So what, if anything, do the celebratory foods of today have to do with the real St Patrick? And would he have eaten any of those same foods himself?


    The real St Patrick

    The little we know about the real Patrick comes from two, probably 5th-century, short Latin texts written by the saint himself. Those are the Confessio, which is believed to be Patrick’s autobiography, and the Epistola, a letter of excommunication to the soldiers of a British king, Coroticus, after they killed and enslaved some of his converts.

    A St Patrick’s Day greeting card from 1909.
    Missouri History Museum

    In these texts, food is only mentioned in the context of hunger and the miraculous appearance of pigs that are slaughtered to sustain starving travellers.

    Other important biographies of St Patrick were written in the 7th century and somewhere between the 9th and 12th centuries. The two 7th-century Latin texts were written by churchmen, Muirchú and Tírechán. The author of the later biography, The Tripartite Life of Saint Patrick, is not known, but it was written partly in Latin and partly in Irish. These hagiographies (writings on the lives of saints) were works of legend-building with little connection to the real Patrick.

    They do, however, give us a glimpse of the food culture of early medieval Ireland, when Patrick lived. They make references to dairy produce, salmon, bread, honey and meats, including beef, goat and a “ram for a king’s feast”.

    Herb gardens are discussed alongside details of the cooking culture with mention of copper cauldrons, kitchens and cooking women. Grain and dairy foods would have been most common, with white meats abundant in summer, and grain – especially oats – associated with the winter and early spring.

    It is these foods, along with cultivated cabbage and onion-type vegetables and wild greens and fruit, that most likely would have sustained Patrick.

    Delicious miracles

    Food is frequently the subject of Saint Patrick’s miracles. As a child, he is said to have turned snow into butter and curds. During his missionary work, he was said to have changed water to honey, and cheese into stone and back to cheese again. In another miracle, he turned rushes into chives to satisfy a pregnant woman’s craving.

    The bountiful fish stocks of certain rivers are also attributed to the saint’s blessing. One such example is the River Bann in Northern Ireland which was known for its salmon.

    The food in Patrick’s world had a defined Irish signature. There is an emphasis in the hagiographies on a range of fresh, cultured and preserved dairy produce and the use of byproducts such as whey-water.

    Corned beef and cabbage has become a popular St Patrick’s Day meal, but bears little connection to the real Patrick.
    Brent Hofacker/Shutterstock

    The extensive and later abandoned Irish cheese-making tradition is referenced in mention of curds and fáiscre grotha (pressed curds). The differentiation between new milk and milk may indicate a skills-based culture of working with dairy in the preparation of a family of thickened, soured and fermented milks. The associated communities, of which Patrick would have been part, probably had a taste for highly flavoured and cultured milk and cheese products.

    These foods are typical of a self-sufficient agrarian economy, producing food that was suited to Irish soil and climatic conditions including wild and managed woodland, coastline and farmland. It is this vision of an untouched Ireland that continues to inspire Irish food culture today.

    Regina Sexton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What food did the real St Patrick eat? Less corned beef and cabbage, more oats and stinky cheese – https://theconversation.com/what-food-did-the-real-st-patrick-eat-less-corned-beef-and-cabbage-more-oats-and-stinky-cheese-251746

  • MIL-OSI Global: Cuts and caps to benefits have always harmed people, not helped them into work

    Source: The Conversation – UK – By Ruth Patrick, Professor in Social Policy, University of York

    fizkes/Shutterstock

    Keir Starmer’s government is expected to announce a host of cuts to sickness and disability support in the coming days. The UK’s ageing and increasingly unwell population has led to what has been described as “unsustainable” and “indefensible” spending on benefits.

    As researchers of poverty and welfare reform, we find it both shocking and sadly unsurprising that, after more than a decade of cuts to social security, the government seems to have once again decided that austerity is the answer to the economic pressures they are facing.

    We have spent many years documenting the real harms created by reforms to social security. It was disappointing to hear Starmer describe Britain’s social security system as an expensive way to “trap” people on welfare, rather than helping them find work.

    The expected proposals are intended to incentivise people into work, by reducing the generosity of support offered to people claiming disability-related benefits. But in reality, many of the measures already implemented to reduce spending by cutting or capping benefits have pushed people further away from the labour market.

    The relationship between welfare and work is more complex than it first appears. Around 37% of people on universal credit are currently in work.

    Approximately 23% of those out of work are engaging with advisers whose job is to support them back into the labour market. The majority of the rest of universal credit claimants are people who are not expected to be in work – often people who have health challenges that make it difficult for them to work most jobs.

    The UK’s social security payments cover a much smaller proportion of the average wage than most other countries in Europe.

    A single person’s allowance on universal credit is £393.45 per month if they are 25 or over, while under-25s receive £311.68. This averages out at less than £100 a week to meet all essential living costs, bar support with housing.

    Disabled people receive additional support in the form of personal independence payments (Pip) or disability living allowance if they live in England, Wales or Northern Ireland, and adult or child disability payments in Scotland.

    This support is designed to help people meet the additional costs that come with disabilities and long-term health conditions. It is not means-tested, and is available to people in employment as well as those not currently working.

    Ministers are expected to make it more difficult to access Pip, freezing its value so this does not rise with inflation, and to reduce the amount of universal credit received by those judged unable to work. These proposals are likely to face strong opposition from many Labour MPs.


    Currently, if people are not able to engage in paid work for long periods, they are entitled to an additional payment through universal credit. This amount – equivalent to approximately £400 a month – could go down. The problem is that this is already not enough to live on, and often necessitates going without essentials, such as food or electricity.

    Families with dependent children receive additional support through child elements of universal credit, and through child benefit. But this support is subject to caps – the controversial and poverty-producing two-child limit, and the benefit cap, which restricts the support any household can receive where no one is working or claiming disability benefits.

    Our research has shown that these restrictions do not work. The two-child limit is not helping families get into work, and nor is it affecting whether families have more children.

    The benefit cap harms mental health, pushes people deep into poverty, and increases economic inactivity. Both policies are punitive and, in our view, need to be removed.

    Other reforms to disability-related social security have left people hungry, pushed people into economic inactivity, increased depression, and may have even raised the suicide rate.

    Getting Britain working?

    The government is trying to solve the wrong problem. It is focusing on those who are out of work, when it is increasingly clear that one big reason people with disabilities are not in employment is that workplaces offer fewer roles they can fill.

    While spending on disability-related support has gone up in recent years, the overall welfare bill has not. On top of that, the proportion of people who are not in work and who are claiming disability-related social security is actually about the same as it has been for the last 40 years. Indeed, the fact it is so low, given population ageing, could be read as good news.

    Research shows cutting access to benefits does not necessarily get people into work.
    Shutterstock

    There have also been wider changes in the labour market. There has been a rapid decline in “light work”, such as lift attendant and cinema usher jobs, or factory roles requiring little physical exertion. As work environments have become more intense, people with disabilities have found it increasingly difficult to stay in work.

    So, what would work to entice more people into work? The truth is we know far more about what does not work than what does.

    The best evidence we have right now suggests that making it more difficult to claim social security and placing more strenuous work-search requirements on claimants will simply push people with poor health (particularly mental ill-health) further away from the labour market.

    The welfare narrative

    Behind the cuts currently being trailed is a popular but ill-founded logic which views social security as the cause of the country’s economic woes. Welfare itself is seen as the problem, with whole generations supposedly left parked on what is depicted as too-easy-to-claim and too-generous support.

    But this narrative grossly misrepresents what it’s actually like to try and claim social security. It is, in fact, notoriously complex. Often, this complexity is intentional.

    Making social security difficult to access is not necessarily (or always) about meanness; this “nasty strategy” is a product of a system that assumes many people are not eligible for the support they claim.

    The system has always assessed eligibility for benefits, but the way these assessments have been done in recent years has often been experienced as degrading and dehumanising. On the flip side, some have claimed that people are not being assessed regularly enough, and suggest that some people who have claimed benefits in the past may now be fit to work.

    Whether this is true is unclear, but the failure to reassess is also a product of cuts to the system – so taking more money out will not address this problem either.

    Britain’s social security system has been stripped to the bones: it provides neither security nor enough support to those who receive it, and is ripe for reform. But the reform required is not of the type Labour is proposing, which will succeed only in further decimating what little remains of our social security safety net.

    This article was co-published with LSE Blogs at the London School of Economics.

    Ruth Patrick receives funding for her research from organisations including Nuffield Foundation, The Robertson Trust, Trust for London, Abrdn Financial Fairness Trust and Joseph Rowntree Foundation. Ruth is a member of the Labour Party.

    Aaron Reeves has received funding from the European Research Council, Nuffield Foundation, and the Wellcome Trust.

    ref. Cuts and caps to benefits have always harmed people, not helped them into work – https://theconversation.com/cuts-and-caps-to-benefits-have-always-harmed-people-not-helped-them-into-work-252110

  • MIL-OSI Global: Keir Starmer’s civil service reforms: what is mission-led government and why is it so hard to achieve?

    Source: The Conversation – UK – By Patrick Diamond, Professor of Public Policy, Queen Mary University of London

    All governments, it seems, are destined to go to war with Whitehall. The administration of Keir Starmer has been in power only nine months, but there are clear indications ministers are frustrated and dissatisfied with civil service performance.

    They have so far avoided the temptation to publicly vilify Whitehall officials for the government’s inability to deliver rapid progress. There is no repeat of the rhetoric that a hard rain is about to fall on the civil service, as Boris Johnson and his chief adviser, Dominic Cummings, threatened in the aftermath of Brexit.

    Yet it is obvious that behind the scenes, senior figures in the Starmer administration believe the civil service is not functioning as it should. We’ve seen a flurry of announcements on reforming the machinery of government.

    The Cabinet Office minister, Pat McFadden, unveiled plans to subject officials to performance reviews, while removing poorly performing civil servants from their posts. The prime minister made it clear he wants to cut back quangos (notably scrapping the health agency, NHS England) and ensure ministers, not regulators, take significant policy decisions.

    Meanwhile, there is a determination to unleash artificial intelligence, ensuring public sector productivity improves. Starmer believes the British state has become “flabby”, slow-moving and ineffectual.


    The apparent disconnect between ministers and the bureaucracy is scarcely surprising. Before coming to power, Labour had detailed plans to make British government “mission-orientated”.

    The Starmer administration declared in its first king’s speech that “mission-based government” would entail “a whole new way of governing” addressing “long-term, complex problems”. This mission mindset is exemplified by the American general George S. Patton: “Never tell people how to do things. Tell them what you want them to achieve and they will surprise you with their ingenuity.”

    Missions are intended to galvanise UK government, involving the whole of society in the drive for once-in-a-generation reforms without micro-managing from the centre.

    At the outset, there was too little appreciation among officials of the challenge that mission-orientated government posed to traditional ways of working in Whitehall. Starmer’s first chief of staff, Sue Gray, was determined to emphasise a return to reciprocal partnership between ministers and mandarins given the turmoil and instability that afflicted British government in the Johnson/Liz Truss era.

    Yet the prime minister now appears more focused on change than continuity. The implications of mission-orientated governance are potentially transformational.

    Mission-led government in a nutshell

    The concept of mission-led government essentially rests on four principles:

    1. Bringing a long-term, strategic perspective to policy development. Missions focus on long-term goals for society, instead of short-term targets or milestones.

    2. Breaking down silos across the public sector. Different government services and agencies work together on missions, ensuring issues do not slip between the institutional cracks.

    3. Giving professionals working on the front line of public service delivery greater agency. The idea is that fewer rules and edicts mean staff can respond to pressing challenges, adapting organisations accordingly.

    4. Incorporating ideas and insights generated outside the civil service, challenging the traditional monopoly over policy and implementation. Missions involve external organisations at the outset.

    The reality on the ground

    Each of these ideas is important, yet there is too little recognition of the significant challenge they pose to the culture and practices of Whitehall.

    UK central government does not do strategy well – and the past 15 years have witnessed a cull of what strategic capability there was. Day-to-day operational management and cost-cutting has long been prized over long-term thinking.

    Breaking down silos is necessary, yet difficult to achieve. The problem isn’t so much the mindset or recalcitrance of civil servants, but the prevailing system of parliamentary accountability.

    Ministers are responsible for the public money that has been allocated to their department. This reinforces boundaries and makes shared working across departments less tenable. No government has resolved the problem of how to achieve joint working on key programmes with the right blend of incentives, including shared budgets.

    Moreover, civil servants, like ministers, are reluctant to give frontline staff greater autonomy. There is a culture of mistrust after 40 years of public management reform.

    There is also a prevailing belief that many public sector professionals are ultimately self-interested. Leaving professionals at the front line to get on with implementation is an attractive proposition, but difficult to achieve given Whitehall’s instinct to impose rules, regulations, oversight and monitoring.

    Constitutional arrangements are central to civil service reform.
    Shutterstock/Adam Cowell

    Meanwhile, many in Whitehall believe giving a voice to outside “interest groups” potentially corrupts the policy process. Officials view the ideas of thinktanks as flimsy and insubstantial (in fairness, proposals such as universal credit, originated by the Centre for Social Justice in the late 2000s, have scarcely stood the test of time).

    None of this makes change in central government unattainable. But it emphasises that all governments need a concerted strategy for reform, including being willing to devote political resources, as few recent prime ministers have done.

    And, if the Starmer administration pursues a genuinely mission-orientated approach, it must confront the fundamental question of the constitutional relationship between ministers and civil servants. This is an issue successive governments have avoided since the late 1960s.

    There is a compelling argument that in delivering missions, senior officials ought to be publicly accountable for delivery, as is the case, for example, in New Zealand. Yet that would require the doctrine of ministerial responsibility to be overhauled. Many will agree it is an unhelpful facade that should have been dismantled a long time ago anyway.

    Patrick Diamond is a member of the Labour Party and the Fabian Society. He is a former government special adviser.

    ref. Keir Starmer’s civil service reforms: what is mission-led government and why is it so hard to achieve? – https://theconversation.com/keir-starmers-civil-service-reforms-what-is-mission-led-government-and-why-is-it-so-hard-to-achieve-252230

    MIL OSI – Global Reports

  • MIL-OSI Global: The government has revealed its plans to get Britain building again. Some of them might just work

    Source: The Conversation – UK – By Graham Haughton, Professor, Urban and Environmental Planning, University of Manchester

    SARAWUT KAEWKET/Shutterstock

    The UK government has published its planning and infrastructure bill, a cornerstone of its strategy for growth. The bill aims to “get Britain building again and deliver economic growth” and includes the hugely ambitious target of building 1.5 million homes in England over this parliament.

    The bill is ambitious in scope – 160 pages long and very technical. But what does it promise exactly?

    On infrastructure, it outlines reforms to limit vexatious repeat use of judicial review to block development. There are also some measures for a stronger electricity grid to ease the move towards renewable energy. While the plan to reward people living near new pylons with £250 off their bills grabbed headlines, just as important are measures for energy storage to level out peaks in demand and supply.

    On the planning side, planning departments will be allowed to charge more to those making applications. This should speed up decisions by funding more planning officer roles. But there are no measures to increase funding for drawing up local plans. This is important because councils often fall behind schedule in producing these. And where there is no up-to-date plan, there is a danger that developers will push through controversial proposals.

    The bill also provides for more decisions to be delegated to planning officials rather than planning committees – this means council staff rather than elected representatives. This already happens for smaller planning applications, so is not entirely new. But it does raise concerns about democratic scrutiny.

    The government argues that local democracy will not be undermined, as planning officers will be making their decisions in the context of democratically approved local plans as well as national legislation. But this could be misleading, unless planning authorities have the funds to update local plans regularly.

    There are also changes to existing development corporation legislation, to support the building of new towns. Particularly welcome is the responsibility on development corporations – government organisations dealing with urban development – to consider climate change and design quality. This is in order to hit net-zero targets and avoid cookie-cutter housing estates.

    Other measures are aimed at ensuring appropriate infrastructure is built to serve these new towns.




    Read more:
    Why building new towns isn’t the answer to the UK’s housing crisis


    There are also changes planned to when compulsory purchase orders can be used to buy sites that are broadly for the public good – for affordable homes, or health or education facilities, for instance. These would work by reducing payments to the actual value of the land rather than its “hope value” (the inflated price landholders hold out for once planning permission is granted).

    There is also a commitment to creating a nature restoration fund, which the government hopes will overcome some of the delays to approving new housing caused by potential threats to wildlife.

    The fund will aim to unblock development in general rather than specific sites, as happens at the moment, and will pool contributions from developers to fund nature recovery. Where there are concerns for wildlife, experts will develop a long-term mitigation plan that will be paid for by the fund while allowing the development to go ahead in the meantime.

    Will it work?

    The question for me, as a professor of urban and environmental planning, is: will the bill encourage development to progress more speedily? Almost certainly – probably mostly by bringing forward improvements to critical national infrastructure schemes such as the electricity grid. For residential development, some incremental speeding up is likely, as developers crave certainty in planning decisions.

    But on their own, these measures are unlikely to be enough to provide the 1.5 million new homes set out in the government’s target. They offer nothing to tackle critical bottlenecks in terms of both labour and materials. It is also difficult to see the target being met without much more government involvement – by building social housing in particular.

    Will the bill result in better quality development? There is surprisingly little in the plans about improving design quality, other than in development corporation areas. This is disappointing, and a missed opportunity to ensure that developers raise their game in residential building and neighbourhood quality.

    And might it override local democracy? Arguably yes, but in practice not as much as some critics might argue. Most of the reforms are finessing existing practices, such as delegated powers to planning officers. Much depends on what the national government guidance turns out to be.

    The biggest concern is that it might increase invisible political pressure on planning officers from councillors and senior officials. It would have been good to see more measures to protect officers’ independence and professional judgement.

    Hopefully the bill will speed up delivery of nationally important schemes for critical infrastructure. This means things like modernising the electricity grid and removing repeated use of judicial review to block a development. These elements should create jobs sooner and support economic growth.

    Where the bill will make absolutely no difference is in improving living standards for people with older homes. This bill is focused on new builds and has little to offer those hoping for support in retrofitting ageing housing stock with more energy-efficient features or creating green spaces in areas where new development is increasingly in demand.

    Development should be compatible with nature restoration.
    Nick Beer/Shutterstock

    Despite some of the ministerial bluster about removing red tape, much of the content of this bill is not about removing planning regulations. It is much more about improving them. Some measures will work better than others, but overall, given the government’s electoral mandate to deliver growth and protect the environment, this is a reasonable balancing act.

    It’s unlikely to deliver much growth in its own right, but as an enabler of growth, it is promising. More worrying is whether it will lead to poor-quality housing built at pace and massive scale to inadequate energy-efficiency and design standards. This would fail to deliver on net-zero and biodiversity ambitions. It is very much a minor win for facilitating growth, but for nature it is nothing more than maintaining the status quo.

    Graham Haughton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The government has revealed its plans to get Britain building again. Some of them might just work – https://theconversation.com/the-government-has-revealed-its-plans-to-get-britain-building-again-some-of-them-might-just-work-252231

    MIL OSI – Global Reports

  • MIL-OSI Global: Putin mulls over US-Ukrainian ceasefire proposal – but the initial signs aren’t positive

    Source: The Conversation – UK – By Jonathan Este, Senior International Affairs Editor, Associate Editor

    While Donald Trump’s special envoy was en route to Moscow to talk about a possible ceasefire deal with his opposite numbers in the Kremlin, Vladimir Putin enjoyed a meet-up with his old friend Alexander Lukashenko, the president of Belarus, and the atmosphere was reportedly congenial.

    According to the Guardian’s contemporaneous report, the pair even shared a macabre joke at a press conference after their meeting about Europe being “done for”. Putin hastened to clarify that when Lukashenko said if the US and Russia came to an agreement, Europe would be “done for” he had of course been enjoying a pun. Apparently, said Putin, “pipeline in Russian means also being done for, so this will be to Europe’s benefit, because they will get cheap Russian gas. So they will have a pipeline.”

    “That’s what I meant,” said Lukashenko. “Yes, that’s what I thought you did,” Putin replied. Smiles all round from the Russian media audience.

    Putin explained that while he’s technically in favour of a ceasefire, there were a few things that needed to be cleared up, and that he and Donald Trump would have a phone call to do just that. Top of the list was “removing the root causes of this crisis”, which most observers are translating as Putin maintaining his demand for all four provinces of Ukraine that Russian troops currently occupy and an undertaking by Kyiv never to join Nato.

    It’s unlikely to meet with the approval of Ukrainian president Volodymyr Zelensky. Zelensky has said he thinks that Putin will do “everything he can to drag out the war” – and Putin’s approach appears to bear this out. This accords with what Stefan Wolff and Tetyana Malyarenko wrote in reaction to the news that the US and Ukraine were at last seeing eye to eye, at least on the need for a halt to the killing.




    Wolff and Malyarenko, professors of international security at the University of Birmingham and National University Odesa Law Academy respectively, believe Putin will want to keep hostilities going as long as he can while still keeping in with the US president. They see Russia following a “two-pronged approach” – engaging with the White House over the ceasefire proposal while also pushing for further battlefield gains. They write:

    The peculiar set-up of the negotiations also plays into the Kremlin’s hands here. Short of direct talks between Kyiv and Moscow, Washington has to shuttle between them, trying to close gaps between their positions with a mixture of diplomacy and pressure. This has worked reasonably well with Ukraine so far, but it is far less certain that this approach will bear similar fruit with Russia.




    Read more:
    US and Ukraine sign 30-day ceasefire proposal – now the ball is in Putin’s court


    In all this shuttle diplomacy, one question that you hear more rarely is what the Ukrainian public will be prepared to accept. Over the past three years Gerard Toal of Virginia Tech University, John O’Loughlin of the University of Colorado and Kristin M. Bakke of UCL have provided us with some valuable insights based on polling of the Ukrainian public. They believe that while the majority of Ukrainians are war-weary and willing to make concessions, even ceding territory in return for peace, they are not willing to compromise their country’s political independence. They also don’t trust Putin and see the war in existential terms.

    And, contrary to what Trump might have the world believe, Zelensky remains a popular leader. In fact, the latest poll finds his support up ten points on the previous survey, at 67%. (Incidentally, Trump recently posted on his Truth Social website that Zelensky’s approval rating was 4%.) They conclude:

    It will be in large part down to ordinary Ukrainians to shape what happens afterwards. An ugly peace may be accepted by a war-weary population. But if it has little local legitimacy and acceptance, peace is likely to be unsustainable in the long run.




    Read more:
    Are Ukrainians ready for ceasefire and concessions? Here’s what the polls say


    Russia, meanwhile, has weathered the conflict remarkably well – certainly better than many analysts forecast in the summer of 2022. At that stage, when Ukraine’s counter-offensive was pushing the invaders out of occupied territory, inflicting major casualties and destroying huge amounts of equipment, some observers thought that Russia’s economy would collapse under the weight of defeat and western sanctions.

    Not so, writes Alexander Hill of the University of Calgary. Hill, a military historian, observes the ways in which the Russian war machine has adapted to conditions over the past two years, ditching the recklessness which saw it suffer such grievous losses in 2022 and using more conservative tactics coupled with smart adoption of new technology to give it an edge on the battlefield. He concludes: “While the Russian army remains a relatively blunt instrument, it is not as blunt as it was in late 2022 and early 2023.”




    Read more:
    Why Russia’s armed forces have proven resilient in the war in Ukraine


    Turning off US aid

    Of course, when the US suspended its intelligence-sharing for a few days last week it was a major boost for the Russians. Without data from US satellite coverage and other intelligence traffic, Ukraine’s defenders were left virtually deaf and blind at a crucial time. It gave Russia the space to push its advantage even further as it races to take more territory ahead of a possible peace deal.

    The state of the conflict in Ukraine, March 10 2025.
    Institute for the Study of War

    It’s a bitter lesson for Ukraine to have to learn at this stage in the conflict, write Dafydd Townley and Matthew Powell, experts in international security and strategy at the University of Portsmouth. They believe relying too heavily on one ally for so much was never going to be a good idea, and that this has been exposed as risky since Donald Trump returned to the White House. Perhaps even more risky, given the personality involved, is Ukraine’s dependence on data from Elon Musk’s Starlink satellite system. Musk himself has boasted: “My Starlink system is the backbone of the Ukrainian army. Their entire front line would collapse if I turned it off.”

    Egotistical self-promotion aside, Musk is probably right about this, but less so when he says there’s no alternative. Townley and Powell believe it’s in Ukraine’s best interests to look into the other satellite systems available to it, and they note that shares in French-owned satellite company Eutelsat, a European rival to Starlink, have recently climbed by almost 400%.




    Read more:
    The US has lifted its intelligence sharing pause with Ukraine. But the damage may already be done


    Many of us who are watching this conflict closely cringed when Trump announced he would cut off military assistance to Ukraine after his (one-sided, it has to be said) shouting match with Volodymyr Zelensky at the end of February. And the announcement that the Pentagon was halting intelligence-sharing as noted above simply made matters worse.

    It felt like a spiteful move. Psychologist Simon McCarthy-Jones of Trinity College, Dublin, has written a book about spite which delves into, among other things, exhibitions of spitefulness in the public arena. It’s a fascinating read. A spiteful approach to foreign policy, he writes, is when we abandon what he calls “humanity’s superpower” – cooperation.

    Trump’s approach, as exemplified by his treatment of Zelensky and also by his baffling decision to impose tariffs even on his friends and allies, “embraces selfishness, treating international relations as a zero-sum game where there can only be one winner”.




    Read more:
    Donald Trump’s foreign policy might be driven by simple spite – here’s what to do about it


    One of the sticking points between the US and Ukraine has been the question of security guarantees in case of a ceasefire or even a longer-term peace deal. It seems increasingly far-fetched that Ukraine will be allowed to join Nato any time soon, so Nato article 5 protections, which would mean that all other member states would be obliged to come to its defence, will not be an issue.

    Trump’s vice-president, J.D. Vance, has suggested that if Ukraine allows US companies access to its mineral resources, this would in itself amount to a security guarantee – a claim that feels equally improbable. And, in any case, how valuable have US security guarantees been in the past, asks historian Ian Horwood of York St John University. Horwood points to the Paris Peace accords of 1973, in which the Nixon administration promised to underwrite South Vietnam’s continued security while withdrawing US combat troops. Within two years, North Vietnamese tanks were rolling into Saigon.

    More recently the Doha agreement between the first Trump administration and the Taliban was made without involving the Afghan government and didn’t even last long enough for US and Nato troops to get out of Kabul. This sorry history will no doubt have given Zelensky food for thought.




    Read more:
    What is the value of US security guarantees? Here’s what history shows


    Ukraine’s mineral wealth

    All the while, many of us have been asking what’s so special about Ukraine’s minerals. The country has long been known as the “bread basket of Europe”, but its mineral wealth is not as widely understood. Geologist Munira Raji of the University of Plymouth says Ukraine has deposits containing 22 of the 34 critical minerals identified by the European Union as essential for energy security. This, she says, positions Ukraine among the world’s most resource-rich nations.

    Much of this cornucopia of geological booty is contained in what is known as the “Ukrainian shield” which sits underneath much of the country, writes Raji. Here she walks us through the riches beneath Ukraine’s soil and why America is so keen to get its hands on them.




    Read more:
    What’s so special about Ukraine’s minerals? A geologist explains





    ref. Putin mulls over US-Ukrainian ceasefire proposal – but the initial signs aren’t positive – https://theconversation.com/putin-mulls-over-us-ukrainian-ceasefire-proposal-but-the-initial-signs-arent-positive-252225

    MIL OSI – Global Reports

  • MIL-OSI Global: Two charts that explain why Reform isn’t being dented by its scandals

    Source: The Conversation – UK – By Paul Whiteley, Professor, Department of Government, University of Essex

    The spat between Nigel Farage, the leader of the Reform party, and Rupert Lowe, the MP for Great Yarmouth, burst into the open when Lowe was suspended from the party. The allegation, which Lowe denies, was that he had threatened violence against the party leadership. The matter is currently being investigated by the police.

    The row does not appear to have affected support for Reform in the polls. A YouGov poll completed on March 10, after Lowe’s suspension, shows Reform on 23% in vote intentions, compared with 24% for Labour and 22% for the Conservatives. It is still a three-party race at the top of British party politics.

    In the 2024 general election a good deal of Reform’s support came from protest voters. These are voters who dislike all the mainstream parties and so see a vote for the party as a way of choosing “none of the above”. They are not attached to any party and can easily switch support when circumstances change. So why has support for the party not been affected by this row?

    Protest politics and support for Reform

    The answer to this question is that while Reform attracted a lot of discontented protest voters in the election, it has since acquired a more stable niche in British party politics. It is primarily a party of English nationalism, equivalent to the SNP in Scotland and Plaid Cymru in Wales. These three parties differ greatly in outlook and politics, but they occupy a similar place in the public’s minds.




    To examine Reform’s support from protest voters we can look at the relationship between spoilt ballots in the 2024 general election and support for the party in the 632 constituencies in England, Scotland and Wales. Normally, observers of British elections pay little attention to spoilt ballots (or “invalid votes” as they are described in official statistics). However, it turns out that they played an important role in the 2024 election which has a bearing on support for Reform.

    Research shows that voters who spoil their ballots can be classified into two categories: those who simply make a mistake when filling in the ballot and those who are protesting about the current system.

    Mistakes are easy to make in countries with complex electoral systems. In Britain, however, the first-past-the-post system, in which everyone has just one vote, ensures that this is not a significant factor because ballot papers are so simple. The bulk of spoilt ballots are protests of various kinds, taking the form of blank ballots, write-in candidates, or abusive messages about parties and candidates.

    This is illustrated in the Lancashire seat of Chorley, which is held by the speaker of the House of Commons, Lindsay Hoyle. By tradition, none of the major parties challenge the Speaker by campaigning in his constituency. In the election there were no fewer than 1,198 spoilt ballots there. It is fairly clear these were the result of some voters feeling disenfranchised by the absence of their preferred party on the ballot paper.

    The relationship between the Reform vote share and the number of spoilt ballots in constituencies in the 2024 election

    Protest voting takes different forms.
    P Whiteley, CC BY-ND

    There is a strong negative relationship (a correlation of -0.46) between the share of a constituency vote that went to Reform in 2024 and the number of ballots spoiled in that constituency. Where people were voting Reform, in other words, fewer people were spoiling their ballots. The implication is that the party picked up votes from people who would normally spoil their ballots or would not have voted at all if Reform had not stood in their constituency. These are the protest voters.

    Identity politics and support for Reform

    Not all support for Reform came from protest voters, however. The chart below compares the percentage of Reform voters with those who identified as English in the 2021 census in England. There is a strong relationship between the two measures (a correlation of 0.66). The more English identifiers there are in a constituency, the greater support for Reform. In effect, Reform has become an English national party.

    The relationship between Reform voting and English identity in 2024

    An English national party in the making.
    P Whiteley, CC BY-ND

    National identities can change over time, but the process of change is slow. There has been a growth in “Englishness” at the expense of “Britishness” over time and this is undoubtedly reinforcing support for Reform.

    It means the party has a relatively solid base of supporters to rely on in future elections. While the row between the party’s leader and one of his MPs could play out in any number of different directions at this early stage, it would be wrong to suggest that Reform isn’t thinking big picture and long term.

    Farage has clearly learnt from his past and will not let his current party disintegrate into chaos like UKIP or the Brexit party before it.

    Paul Whiteley has received funding from the British Academy and the ESRC.

    ref. Two charts that explain why Reform isn’t being dented by its scandals – https://theconversation.com/two-charts-that-explain-why-reform-isnt-being-dented-by-its-scandals-252201

    MIL OSI – Global Reports

  • MIL-OSI Global: Keir Starmer to abolish NHS England – the pros and cons

    Source: The Conversation – UK – By Peter Sivey, Reader in Health Economics, Centre for Health Economics, University of York

    The UK government has announced the abolition of NHS England, phased over two years. In practice, this will involve merging some functions and staff from NHS England into the Department for Health and Social Care (DHSC). As part of the change, the government has stated that it expects to reduce duplication and save hundreds of millions of pounds.

    NHS England was established under the Health and Social Care Act of 2012 (the Lansley reforms) and is responsible for commissioning care and overseeing the day-to-day running of the NHS. This involves negotiating budgets for local care provision with bodies like integrated care boards and hospitals; performance management such as monitoring waiting times and quality measures; and implementing national initiatives across NHS organisations.

    NHS England was established to provide operational autonomy, shielding the health service from daily political interference. It is an “arm’s-length body”, meaning it operates independently from the government but remains accountable to it. The DHSC sets strategic goals and oversees NHS England activities.

    In practice, NHS England and DHSC have distinct roles, although they overlap in some areas. DHSC staff typically have broader policy expertise – for example, many have worked in other areas of the civil service, whereas NHS England staff often have more detailed knowledge of how the NHS works on the ground.

    Risks

    The loss of expertise within NHS England is probably the largest risk of the abolition. Alongside very experienced NHS managers and analysts, NHS England employs senior doctors and other health care workers who contribute valuable practical knowledge from the NHS frontline into policy roles.

    A major risk of this move is the potential loss of this clinical expertise and operational insight into policymaking. Lord Darzi’s report on the NHS specifically cited the loss in management talent that occurred as a result of the 2012 reforms, and cautioned against further reorganisation that might repeat that disruption.

    Another risk is that bringing NHS England functions directly under ministerial control risks increased politicisation of day-to-day NHS management.

    The government will argue that other policy areas like defence, education and policing do not have such a large arm’s-length body between the department and the frontline. However, health and social care is uniquely large (11% of GDP) and highly political, with a fast-growing budget and faster-growing challenges.

    NHS policy is already highly politicised, but abolishing NHS England risks leaving the DHSC and its ministers on the hook for every operational decision. This could lead to operational decisions being made to appease public opinion rather than to promote public health.

    The government faces significant practical challenges in merging two organisations with different cultures, working practices and pay structures. Currently, NHS England (about 16,000 staff) is much larger than DHSC (about 3,000 staff). Many NHS England roles will have to move into the much smaller DHSC.

    The transition itself will require investment, so the promised savings are unlikely to be achieved in the short term.

    Opportunities

    The main opportunity of the abolition is the removal of duplication between DHSC and NHS England.

    Currently, both organisations maintain separate policy teams covering similar areas – for example, elective surgery waiting times or cancer care. And sometimes, it is unclear how well they work together or why both are necessary.

    By consolidating within the DHSC, there is an opportunity to strengthen policy analysis. With one strong policy team in the DHSC, policy advice to ministers (DHSC) and policy implementation on the ground (previously NHS England) could be better coordinated and aligned with the government’s objectives.

    Lord Darzi’s report on the NHS highlighted the growth of regulatory roles within NHS England, questioning whether too much accountability could be counterproductive.

    The abolition of NHS England is also an opportunity to streamline regulation while strengthening local management roles and valuable policy analysis.

    Another opportunity from the abolition of the organisation would be the strengthening of local NHS bodies like integrated care boards. These local bodies, designed to tailor healthcare to local area needs, may sometimes have been stymied by excessive central control.

    The health secretary, Wes Streeting, has already expressed his desire to see more devolution of power and responsibility within the NHS. This process provides the opportunity to enact that promise.

    What will happen next?

    The abolition of NHS England and the transfer of some responsibilities back to the DHSC will take time and incur significant costs and disruption. Any benefits are likely to emerge only in the long term.

    Before the introduction of NHS England, there were larger regional organisations (strategic health authorities) that were responsible for implementing policy at a regional level. Perhaps the re-emergence of similar regional bodies could smooth the transition from a central NHS England to a more decentralised health service.

    Peter Sivey receives funding from the National Institute for Health and Care Research.

    ref. Keir Starmer to abolish NHS England – the pros and cons – https://theconversation.com/keir-starmer-to-abolish-nhs-england-the-pros-and-cons-252237

    MIL OSI – Global Reports

  • MIL-OSI Global: See you in the funny papers: How superhero comics tell the story of Jewish America

    Source: The Conversation – Global Perspectives – By Miriam Eve Mora, Managing Director of the Raoul Wallenberg Institute, University of Michigan

    A five-story replica of a stamp of Superman in 1998 in Cleveland, home of the superhero’s creators, Jerry Siegel and Joe Shuster. AP Photo/Tony Dejak, File

    Nearly a hundred years ago, a hastily crafted spaceship crash-landed in Smallville, Kansas. Inside was an infant – the sole survivor of a planet destroyed by old age. Discovering he possessed superhuman strength and abilities, the boy committed to channeling his power to benefit humankind and champion the oppressed.

    This is the story of Superman: one of the most recognizable characters in history, who first reached audiences in the pages of Action Comics in 1938 – what many fans consider the most important single comic in history.

    As a historian of American immigration and ethnicity – and a lifelong comics fan – I read this well-known bit of fiction as an allegory about immigration and the American dream. It is, at its core, the ultimate story of an immigrant in the early 20th century, when many people saw the United States as a land with open gates, providing such orphans of the world an opportunity to reach their fullest potential.

    Taken in and raised by a rural family under the name Clark Kent, the baby was imbued with the best qualities of America. But, like all immigrant stories, Kent’s is a two-parter. There is also the emigrant story: the story of how Kal-El – Superman’s name at birth – was driven from his home on Planet Krypton to embrace a new land.

    That origin story reflects the heritage of Superman’s creators: two of the many Jewish American writers and artists who ushered in the Golden Age of comic books.

    Jewish history…

    A card from 1909, found in the Jewish Museum of New York, depicts Jewish Americans welcoming Jews emigrating from Russia.
    Heritage Images/Hulton Archive via Getty Images

    The American comics industry was largely started by the children of Jewish immigrants. Like most publishing in the early 20th century, it was centered in New York City, home to the country’s largest Jewish population. Though they were still a very small minority, immigration had swelled the United States’ Jewish population more than a thousandfold: from roughly 3,000 in 1820 to roughly 3,500,000 in 1920.

    Comic books had not yet been devised, but strip comics in newspapers were a regular feature. They began in the late 19th century with popular stories featuring recurring characters, such as Richard F. Outcault’s “Yellow Kid” and “the Little Bears” by Jimmy Swinnerton.

    A few Jewish creators were able to break into the industry, such as Harry Hershfield and his comic “Abie the Agent.” Hershfield’s success was exceptional in three ways: He broke into mainstream newspaper comics, his titular character was also Jewish, and he never adopted an anglicized pen name – as many other Jewish creators felt they must.

    Shoppers and vendors outside of haberdasheries on Hester Street in a Jewish neighborhood of New York’s Lower East Side around 1900.
    Photo by Hulton Archive/Getty Images

    Generally, however, Jews were barred from the more prestigious jobs in newspaper cartooning. A more accessible alternative was the cheaper, second-tier business of reprinting previously published works.

    In 1933, second-generation Jewish New Yorker Max Gaines – born Maxwell Ginzburg – began a new publication, “Funnies on Parade.” “Funnies” pulled together preexisting comic strips, reproducing them in saddle-stitched pamphlets that became the standard for the American comics industry. He went on to found All-American Comics and Educational Comics.

    Another publisher, Malcolm Wheeler-Nicholson, founded National Allied Publications in 1934 and published the first comic book to feature entirely new material, rather than reprints of newspaper strips. He joined forces with two Jewish immigrants, Harry Donenfeld and Jack Leibowitz. At National, they created and distributed Detective and Action Comics – the precursors to DC, which would become one of the two largest comics distributors in history.

    It was at Action Comics that Jerry Siegel and Joe Shuster, two second-generation immigrants from a Jewish neighborhood in Cleveland, found a home for Superman. It would also be where two Jewish kids from the Bronx, Bob Kane and Bill Finger – born Robert Kahn and Milton Finger – found a home for their character, Batman, in 1939.

    Jerry Siegel and Joe Shuster, creators of Superman, pictured in the 1940s.
    New Yorker/Wikimedia Commons

    The success of these characters inspired another prominent second-generation Jewish New Yorker, pulp magazine publisher Moses “Martin” Goodman, to enter comics production with his line, “Timely Comics.” The 1939 debut featured what would become two of the early industry’s most well-known superheroes: the Sub-Mariner and the Human Torch. These characters would be mainstays of Goodman’s company, even when it became better known as Marvel Comics.

    Thus were born the “big two,” Marvel and DC, from humble Jewish origins.

    …and Jewish stories

    The creation and popularization of superhero comics wasn’t Jewish just because of its history. The content was, too, reflecting the values and priorities of Jewish America at the time: a community influenced by its origins and traditions, as well as the American mainstream.

    Some of the most foundational early comics echo Jewish history and texts, such as Superman’s story, which parallels the Jewish hero Moses. The biblical prophet was born in Egypt, where the Israelites were enslaved, and soon after Pharaoh ordered the murder of all their newborn sons. Similarly, Superman’s people, the Kryptonians, faced an existential threat: the destruction of their planet.

    Moses’ life is saved when his mother floats him down the Nile in a hastily constructed and tarred basket. Kal-El, too, is sent away to safety in a hastily constructed craft. Both boys are raised by strangers in a strange land and destined to become heroes to their people.

    Comics also reflected the feelings and fears of Jews in a moment in time. For example, in the wake of Kristallnacht – the 1938 night of widespread organized attacks on German Jews and their property, which many historians see as a turning point toward the Holocaust – Finger and Kane debuted Batman’s Gotham City. The city is a dark contrast to Superman’s shining metropolis, a place where villains lurked around every corner and reflected the darkest sides of modern humanity.

    Some comic artists and writers used their platform to make political statements. Jack Kirby – born Kurtzberg – and Hymie “Joe” Simon, creators of Captain America, explained that they “knew what was going on over in Europe. World events gave us the perfect comic-book villain, Adolf Hitler, with his ranting, goose-stepping and ridiculous moustache. So we decided to create the perfect hero who would be his foil.” The comic debut of Captain America in 1941 featured a brightly colored cover with the brand-new hero punching Adolf Hitler in the face.

    In later generations, characters penned by Jewish authors continued to grapple with issues of outsider status, hiding aspects of their identity, and maintaining their determination to better the world in spite of rejection from it. Think of Spider-Man, the Fantastic Four and X-Men. All of these were created by Stan Lee – another Jewish creator, born Stanley Martin Lieber – who was hired into Timely Comics at just 17 years old.

    With so many of the most popular comics written by New York Jews, and centered in the city, much of New York’s Yiddish-tinged, recognizably Jewish language made its way onto the pages. Lee’s Spider-Man, for example, frequently exclaims “oy!” or calls bad guys “putz” or “shmuck.”

    In later years, Jewish authors such as Chris Claremont and Brian Michael Bendis introduced or took over mainstream characters who were overtly Jewish – reflecting an emerging comfort with a more public Jewish ethnic identity in America. In X-Men, for example, Kitty Pryde recounts her encounters with contemporary antisemitism. Magneto, who is at times friend but often foe of the X-Men, developed a backstory as a Holocaust survivor.

    History is never solely about retelling; it’s about gaining a better understanding of complex narratives. Trends in comics history, particularly in the superhero genre, offer insight into the ways that Jewish American anxieties, ambitions, patriotism and sense of place in the U.S. continually changed over the 20th century. To me, this understanding makes the retelling of these classic stories even more meaningful and entertaining.

    Miriam Eve Mora does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. See you in the funny papers: How superhero comics tell the story of Jewish America – https://theconversation.com/see-you-in-the-funny-papers-how-superhero-comics-tell-the-story-of-jewish-america-248218

    MIL OSI – Global Reports

  • MIL-OSI Global: Saudi Arabia’s role as Ukraine war mediator advances Gulf nation’s diplomatic rehabilitation − and boosts its chances of a seat at the table should Iran-US talks resume

    Source: The Conversation – Global Perspectives – By Kristian Coates Ulrichsen, Fellow for the Middle East at the Baker Institute, Rice University

    Saudi Arabia is 2,000 miles from Ukraine and even more politically distant, so at first glance it might seem like it has nothing to do with the ongoing war there. But the Gulf state has emerged as a key intermediary in the most serious ceasefire negotiations since Russia invaded its neighbor three years ago.

    While it is U.S. officials who are undoubtedly leading the efforts for an agreement, it is the Saudi capital of Riyadh that has been staging the crucial talks.

    In a flurry of diplomatic activity on March 10, 2025, Saudi Crown Prince Mohammed bin Salman, the country’s top political authority, hosted separate meetings with Ukrainian President Volodymyr Zelenskyy and a U.S. delegation led by Secretary of State Marco Rubio and national security adviser Mike Waltz.

    The following day, senior Saudi officials facilitated face-to-face meetings between U.S. and Ukrainian delegations.

    The resulting agreement, which is now being mulled in Moscow, is all the more notable given that it followed a diplomatic breakdown just weeks before in the Oval Office between Zelenskyy, President Donald Trump and Vice President JD Vance.

    Whether the proposed interim 30-day ceasefire materializes is still uncertain. On March 14, Russian President Vladimir Putin said he agreed with the proposal in principle, but he added that a lot of the details needed to be sorted out.

    Should a deal be reached, there is every reason to believe it will be inked in Saudi Arabia, which has hosted not only the latest U.S.-Ukrainian talks but earlier rounds of high-level Russian-U.S. meetings.

    But why is a Gulf nation playing mediator in a conflict in Eastern Europe? As an expert on Saudi politics, I believe the answer lies in the kingdom’s diplomatic ambitions and its desire to present a more positive image to the world. In the background is the goal of positioning the nation for diplomatic maneuvers in its own region, notably any talks between the U.S. and Iran.

    The diplomatic conversion of MBS

    Saudi Arabia’s growing diplomatic role has been a feature of the kingdom’s foreign policy since 2022.

    Crown Prince Mohammed, who that year succeeded his father as prime minister, views Saudi Arabia as the convening power in the Arab and Islamic world.

    Accordingly, officials in the kingdom have been directed to lead regional diplomacy over a number of pressing issues, including the conflicts in Gaza and Sudan.

    At the same time, Saudis have started the process of reconciliation with Iran, which has long been perceived as the chief regional rival to Saudi influence.

    This turn to diplomacy marks a shift away from the confrontational policies adopted by the crown prince during his rise to power in Saudi Arabia between 2015 and 2018. Policies such as Saudi Arabia’s military intervention in Yemen, its blockade of Qatar, the detention of Lebanon’s Prime Minister Saad Hariri and the conversion of the Ritz-Carlton hotel in Riyadh into a makeshift prison all fed an image of the young prince as an impulsive decision-maker. Then in 2018 came the murder of journalist Jamal Khashoggi in the Saudi Consulate in Istanbul.

    This approach brought little in the way of stability. Rather, it left the country ensnared in an unwinnable war in Yemen, embroiled in a fruitless row with Qatar, and diplomatically isolated by Western governments.

    A friend to Ukraine and Russia

    With regard to the war in Ukraine, Saudi Arabia’s intermediary role is helped by a perception of the kingdom as neutral in the conflict.

    Saudi officials, in common with their counterparts in the other Gulf states, have long sought to avoid taking sides in the emerging era of great power competition and strategic rivalry. As such, the kingdom has maintained working relations with both Russia and pro-Western Ukraine since the outbreak of war in Europe.

    In 2022, for example, Saudi Arabia and Russia – both leaders of OPEC+ – coordinated oil production cuts to cushion Moscow from the effects of the global sanctions the West imposed after Russia invaded Ukraine. Yet just months later, Saudi Arabia invited Zelenskyy to address an Arab League summit in the Saudi city of Jeddah.

    It was a prelude to a 2023 international summit, also in Jeddah, which brought together representatives from 40 countries to discuss the ongoing war.

    Despite failing to produce a breakthrough, the meeting illustrated the convening reach of the crown prince and his intention to act as a diplomatic go-between in the Ukraine-Russia war.

    Saudi Arabia and neighboring United Arab Emirates later facilitated occasional prisoner exchanges between the two countries – rare diplomatic successes in three years of conflict.

    Staging ground for diplomacy

    Direct engagement in high-stakes international diplomacy over the largest war in Europe since 1945 is undoubtedly a step up in Saudi ambitions. But the country’s efforts aren’t purely altruistic. Riyadh believes there’s mileage to be gained in such diplomatic endeavors.

    The advent of a Trump presidency has suited Saudi desires. Trump has made clear his wish to be seen as a dealmaker and peacemaker, but he needs a neutral venue in which the hard work of diplomacy can flourish.

    Just weeks into the new U.S. administration, the Saudi capital hosted the first meeting between a U.S. secretary of state and Russian foreign minister since Russia invaded in 2022.

    It yielded an agreement to “re-establish the bilateral relationship” and establish a consultation mechanism to “address irritants” in ties.

    The two rounds of dialogue in Riyadh – first with Russia, then with Ukraine – have positioned the Saudi leadership firmly within the diplomatic process. They have also gone some way toward rehabilitating Mohammed bin Salman’s image.

    The sight of the crown prince warmly greeting Zelenskyy – images that contrasted sharply with those of the fractious White House meeting seen around the world – presented the crown prince as a statesmanlike figure.

    Turning to Tehran

    Such positive optics would have seemed inconceivable as recently as 2019, when the crown prince was shunned and then-presidential candidate Joe Biden labeled the country a “pariah” state.

    Changing this negative global perception of Saudi Arabia is crucial if the kingdom is to attract the tens of millions of visitors that are pivotal to the success of the “giga-projects” – sports, culture and tourism events that the Saudis hope will drive its economy and allow the kingdom to be less economically dependent on fossil fuel exports.

    Whereas easing tensions with Iran and supporting Yemen’s fragile truce are about derisking the kingdom’s vulnerability to regional volatility, facilitating diplomacy over Ukraine is a relatively cost-free way to reinforce the changing narratives about Saudi Arabia.

    After all, any breakdown in the Russia-U.S.-Ukraine negotiations is unlikely to be blamed on the Saudis.

    Indeed, Saudi officials may view their engagement with U.S. officials over Ukraine as the prelude to further diplomatic cooperation. And this will be especially true if Crown Prince Mohammed is able to establish himself as an indispensable partner in the eyes of Trump.

    Saudi officials were excluded from the last major talks between Iran and the U.S., which also involved several other major world powers and led to the 2015 Iran nuclear deal. Trump withdrew from the deal in 2018, during his first term, and U.S.-Iranian relations have been moribund ever since.

    The U.S. administration has already mooted the idea of a resumption of negotiations with Tehran over its nuclear capabilities.

    Placing Saudi Arabia in the middle of any attempts to secure a new nuclear agreement that would replace or supersede that earlier deal would be a high-risk move, given the intensity of feeling on both the U.S. and Iranian sides and the uneasy coexistence between Tehran and Riyadh.

    But doing so would give the kingdom what it most desires: a seat at the table.

    Kristian Coates Ulrichsen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Saudi Arabia’s role as Ukraine war mediator advances Gulf nation’s diplomatic rehabilitation − and boosts its chances of a seat at the table should Iran-US talks resume – https://theconversation.com/saudi-arabias-role-as-ukraine-war-mediator-advances-gulf-nations-diplomatic-rehabilitation-and-boosts-its-chances-of-a-seat-at-the-table-should-iran-us-talks-resume-252035

    MIL OSI – Global Reports

  • MIL-OSI Global: Why are suicide rates so high in bipolar disorder, and what can we do about it?

    Source: The Conversation – UK – By Marcos del Pozo Banos, Senior Research Data Analyst, Swansea University

    Heston Blumenthal, the celebrity chef known for his experimental cuisine, recently shared his experience of being sectioned under the UK’s Mental Health Act, saying it was “the best thing” that could have happened to him. His openness about living with bipolar disorder highlights the little-discussed fact that people with this condition face one of the highest suicide risks of any mental illness.

    Bipolar disorder is a severe mental illness characterised by episodes of mania (high energy, impulsivity) and depression (hopelessness, fatigue). Suicidal thoughts and behaviour are a core feature of the disorder, with fluctuating risk that can persist over long periods.

    Although bipolar disorder affects around 2% of the population, studies suggest that up to 50% of people with the condition attempt suicide at least once, and 15-20% die by suicide – a rate much higher than in the general population. Unlike global suicide rates, suicide deaths in bipolar disorder have not declined.

    Understanding why suicide is so common in people with this disorder is difficult. But one major factor is mood instability. Rapid shifts between emotional highs and lows, as well as mixed states where symptoms of mania (impulsivity) and depression (despair) occur together, can be particularly dangerous.

    Social and economic factors also play a role. Research we conducted at Swansea University shows that the population suffering from bipolar disorder has become poorer over the last two decades. Financial strain, social isolation and poorer access to healthcare all lead to worse outcomes. Beyond suicide, people with the condition die up to 20 years earlier than the general population, often from preventable health problems such as heart disease.

    While bipolar disorder cannot be cured, it can be managed. The most commonly used drug, lithium, has been found to reduce suicide risk significantly in some patients. However, many people with the condition struggle to take it regularly.

    The drug’s side-effects can affect the kidneys, thyroid, metabolism, cognition and cardiovascular health. Managing these side-effects requires regular blood tests and continuous monitoring, making long-term treatment difficult.

    Many people stop taking their medication during manic phases, believing they are cured.

    Other treatments, such as antipsychotics, mood stabilisers and electroconvulsive therapy (where electric currents are passed through the brain while the patient is under anaesthesia), can also be effective in some types and phases of bipolar – for example, in states of mixed mania and depression where there is a high risk of suicide – but they come with their own harms and limitations.

    Some psychiatrists now question whether continuous lifelong treatment is necessary for all patients.

    Even when people seek help, healthcare systems often fail to intervene effectively. Suicide risk is highest in the days following discharge from a psychiatric hospital. Many people who later die by suicide have recently visited emergency rooms after hurting themselves, but the help they received was either delayed or not enough to prevent further harm.

    Existing tools to identify and measure suicide risk, such as checklists, questionnaires and structured interviews, are ineffective. Many people with bipolar disorder who die by suicide are assessed as “low risk” shortly beforehand, exposing a crucial gap between doctor and patient perceptions. This is in large part because these tools rely too heavily on past factors such as previous suicide attempts (which may not be disclosed), rather than on dynamic, real-time distress or mood instability.

    Despite the significant effect that bipolar disorder has on individuals, families and society, the development of new drugs has been frustratingly slow. Lithium, first used in the 1940s, remains the go-to treatment, while most other drugs were originally designed to treat schizophrenia. No truly new treatments have emerged in decades.

    Not a single disorder

    One difficulty is that bipolar is not a single disorder but a spectrum of conditions, rendering the one-size-fits-all approach inadequate – lithium is effective in only about one in three patients.

    Drug development for bipolar disorder is particularly challenging. The complexity of bipolar disorder calls for equally complex trials that need to consider patient variability, ethical concerns and strict safety requirements. New treatments also face strict approval hurdles because lithium – despite its limitations – is highly effective for some patients. This results in slow treatment development, leaving patients with limited options.

    Research is also slowed by concerns about whether it’s ethical to involve patients in trials. But it’s important to include people with the disorder who have experienced suicidal thoughts and behaviour, to better understand their mindset and decision-making.

    However, new approaches offer hope. Several research projects, such as Datamind, are developing artificial intelligence platforms to help find new drugs quicker and to personalise treatments based on patients’ genetic and clinical profiles. AI could lead to faster, more effective therapies tailored to individual needs.

    Blumenthal’s story highlights that being sectioned, while traumatic, can save lives and keep people safe. Yet the stigma around psychiatric hospitalisation prevents many from seeking care. There is a widespread belief that hospitalisation should be avoided at all costs – but for some, it can be the difference between life and death.

    However, hospitalisation alone is not enough. The mental health system must do better to ensure that people with bipolar disorder receive long-term care, particularly during high-risk periods like hospital discharge. To prevent suicide, we need to rethink how risk is assessed, improve follow-up care, and reduce barriers to treatment.

    While the statistics on bipolar are alarming, the message should be one of hope. The condition is treatable and suicide is preventable, but only if we commit to improving access to care, reducing stigma and advancing research.

    Marcos del Pozo Banos’s research is funded by UKRI – Medical Research Council through the DATAMIND Hub (MRC reference: MR/W014386/1), and the Wolfson Centre for Young People’s Mental Health (established with support from the Wolfson Foundation).

    Ann John receives funding from Health and Care Research Wales, NIHR, Wolfson Foundation and MRC (DATAMIND).

    Tania Gergel works for Bipolar UK as the Director of Research. She receives research funding from National Institute of Health Research, the Medical Research Council and King’s College London. She is also on the Board of the National Centre for Mental Health in Wales, and is an Honorary Visiting Professor at Cardiff University and Honorary Senior Research Fellow in the Division of Psychiatry at University College London.

    ref. Why are suicide rates so high in bipolar disorder, and what can we do about it? – https://theconversation.com/why-are-suicide-rates-so-high-in-bipolar-disorder-and-what-can-we-do-about-it-251376

    MIL OSI – Global Reports

  • MIL-OSI Global: Treatment for Parkinson’s disease and restless leg syndrome is linked with risky behaviour – here’s what you need to know

    Source: The Conversation – UK – By Dipa Kamdar, Senior Lecturer in Pharmacy Practice, Kingston University

    Orawan Pattarawimonchai/Shutterstock

    Getting a headache and feeling sick are common side-effects for many medicines. Indulging in risky sexual behaviour or pathological gambling – not so common.

    But a BBC investigation has highlighted that some drug treatments for restless leg syndrome and Parkinson’s disease can lead to such risky behaviour.

    Over 150,000 people in the UK live with Parkinson’s – a degenerative condition that affects the brain. The damage mainly occurs in the area of the brain that produces dopamine, a chemical messenger that regulates movement. Less dopamine in the brain can lead to symptoms such as tremors, muscle stiffness, slow movements and problems with balance.

    Another movement disorder is restless legs syndrome (RLS), which affects between 5% and 10% of people in the UK, US and Europe. Among those aged over 35, twice as many women as men have RLS.

    Read more: Restless legs syndrome is incurable – here’s how to manage the symptoms

    People with RLS feel an uncontrollable urge to move their legs, and may experience a crawling, creeping or tingling sensation in them. The symptoms are usually worse at night, when dopamine levels tend to be lower. Although the exact cause of RLS is unknown, it has been linked to genes, underlying health conditions and an imbalance of dopamine.

    One of the main treatments for movement disorders is a group of drugs called dopamine-receptor agonists, which include cabergoline, ropinirole, bromocriptine and pramipexole. These drugs mimic dopamine, stimulating dopamine receptors in the brain and helping to regulate movement.

    Dopamine is known as the “happy” hormone because it is part of the brain’s reward system. When people do something fun or pleasurable, dopamine is released in their brain. But using dopamine-receptor agonist drugs can elevate these feelings, leading to impulsive behaviour.

    While common side-effects include headaches, feeling sick and sleepiness, these drugs are also linked with the more unusual side-effect of impulse-control disorders. These include risky sexual behaviour (hypersexuality), pathological gambling, compulsive shopping, and binge eating. Hypersexuality encompasses behaviour such as a stronger-than-usual urge to have sexual activity, or being unable to resist performing a sexual act that may be harmful.

    Previously reported cases include a 53-year-old woman taking ropinirole who exhibited impulsive behaviour such as accessing internet pornography, using sex chat rooms, meeting strangers for sexual intercourse and shopping compulsively. Another case involved a 32-year-old man who, after taking ropinirole, began binge eating and gambling so compulsively that he lost his life savings.

    When these drugs were first being prescribed in the early 2000s, impulse-control disorders were thought to be a rare side-effect. But in 2007, a UK Medicines and Healthcare products Regulatory Agency (MHRA) public assessment report advised that “healthcare professionals should warn patients that compulsive behaviour with dopamine agonists may be dose-related”.

    Between 6% and 17% of people with RLS who take dopamine agonists develop some form of impulse-control disorder, as do up to 20% of people living with Parkinson’s.

    But the true figures may be higher still, as some patients may not associate changes in behaviour with their medication, or may be too embarrassed to report them. Case reports show that in most instances, impulsive behaviour stops when the drug is withdrawn.

    Lawsuits

    There have been several individual and class-action lawsuits against pharmaceutical companies including GlaxoSmithKline, which produces ReQuip® (ropinirole), and Pfizer, which makes Cabaser® (cabergoline). The patients bringing these cases claimed they had not been warned of the risk of impulsive behaviour as a side-effect.

    For example, in 2012, a French court ordered GlaxoSmithKline to pay £160,000 in damages to Didier Jambart, after he experienced “devastating side-effects” while taking the firm’s Parkinson’s drug Requip. And in 2014, an Australian federal court approved a settlement against Pfizer in a class-action lawsuit over its Parkinson’s drug, Cabaser. The 150 patients involved claimed they had no warning that taking Cabaser could have side-effects including increased gambling, sex addiction and other high-risk behaviour.

    It is now clearer in the patient information leaflets given with all prescribed medication for movement disorders that impulsive behaviour can occur in some patients.

    In 2023, the MHRA advised there had been increased reports of pathological gambling with a drug called aripiprazole. This antipsychotic drug, used in the treatment of schizophrenia and mania, partly acts as a dopamine-receptor agonist.

    Any drug that increases dopamine activity could theoretically be linked to impulse-control disorders, so it is important to keep monitoring patients and their behaviour in such cases.

    Not everyone will experience side-effects. Before you begin any course of treatment, your doctor or pharmacist should explain the potential side-effects – but it is also important to read the information leaflet with any medicine. And if you experience any impulsive behaviours with these medicines, speak to your doctor or pharmacist immediately.

    Dipa Kamdar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Treatment for Parkinson’s disease and restless leg syndrome is linked with risky behaviour – here’s what you need to know – https://theconversation.com/treatment-for-parkinsons-disease-and-restless-leg-syndrome-is-linked-with-risky-behaviour-heres-what-you-need-to-know-252079

    MIL OSI – Global Reports

  • MIL-OSI Global: Opus: clunky satire about an evil celebrity cult has plenty to say – it just doesn’t know how to say it

    Source: The Conversation – UK – By Daniel O’Brien, Lecturer, Department of Literature Film and Theatre Studies, University of Essex

    Opus, the film debut of former GQ editor-turned-director Mark Anthony Green, has been described as a horror-musical. And while this new hybrid-genre film clearly has something to say, what that is remains frustratingly unclear.

    Produced by independent film company A24, often a hallmark of quality, the film follows Ariel Ecton (Ayo Edebiri), a young writer striving to make her mark in entertainment journalism. While it gestures toward themes of celebrity culture and the toxicity of extreme fandom, the film ultimately feels tangled in a jumble of unfocused ideas and derivative references to other – arguably stronger – works.

    Despite her talent and determination, Ariel struggles with her boss Stan (Murray Bartlett), who redeploys her ideas to other senior colleagues and is often too self-absorbed to nurture her career development.

    The very watchable Edebiri eases into centre stage after catapulting to global fame in the TV show The Bear (2022-present), for which she has received a Golden Globe and an Emmy.




    In contrast to the achievements of The Bear’s Sydney, her character Ariel’s success as a writer seems out of reach in Opus. In an early scene, she articulates her frustrations to a friend who responds by pointing to Ariel’s ordinariness and comfortable upbringing. Apparently, her lack of disadvantage is precisely what’s holding her back, leaving her “too middle” to be noticed, promoted or considered.

    Here we have the first clue that Ariel is destined to experience trauma, which will come by way of the “final girl” horror trope (a reference to the last woman standing) by the end of the film.

    To Ariel’s surprise, she is selected to accompany Stan to a remote desert compound with other journalists to cover the story of reclusive pop legend Alfred Moretti (John Malkovich, returning to the big screen for the first time in five years).

    Coincidentally, Moretti is about to make a return to public life after a 30-year hiatus and reset his reputation with a new album. Malkovich seems to relish the role, cranking up his flamboyant eccentricity in what feels like a mash-up of Ziggy Stardust and Frank-N-Furter.

    Moretti’s ostentatiousness, in contrast to Ariel’s subdued “middle-ness”, seems to be one of several binaries that the film explores, with an epilogue that discusses the left and right sides of the brain, and the division between destruction and creativity.

    Creativity is a driving force in the film, with Moretti’s musical and Ariel’s literary artistry used as fuel for the narrative – from a director whose writing background resembles Ariel’s.

    Unfortunately, the film often feels more derivative than creative because of the numerous sources it takes as its inspiration. Moretti’s compound turns out, of course, to be a cult where Ariel, Stan and other invited guests will find something even more sinister than Malkovich’s rhythmic hip thrusts.

    The rules of the compound mean that all guests must hand over their phones and electronic devices, so that in typical horror fashion, the characters are completely cut off from the outside world.

    The knowing nod to this horror cliché is perhaps done for comedic value, but becomes another of the film’s weak spots, in the sense that it never really commits to any one thing. It’s not quite a comedy, a horror or a musical but something that is more fragmentary, borrowing elements of each.

    It’s as if the director has assembled notes on his favourite genres but not yet put them together successfully. For example, there is an explicit recreation of a very distinct scene from Takashi Miike’s harrowing Audition (1999), while other parts are heavily influenced by Ari Aster’s disturbing Midsommar (2019), a folk horror film also made by A24.

    There are also nods to Mark Mylod’s The Menu (2022), in which an eccentric celebrity chef creates a meal for a group of sycophantic critics with lethal consequences. As a dark comedy-horror, The Menu succeeds in satirising the absurdity of reality cooking shows, where competitiveness and TV chefs are caricatured.

    However, Green’s attempt at satire in Opus doesn’t really work. That’s not to imply that the film has nothing to say – Green appears to be interested in the relationship between celebrity culture and fandom. But that idea doesn’t feel fully fleshed out, particularly when Brandon Cronenberg’s dangerously underrated Antiviral (2012) addressed it with visceral originality more than a decade ago.

    Moretti’s songs have a deliberately dated sound which seems to be inspired by Michael Jackson, particularly around the time of his 2001 album Invincible, which failed to return the singer to his “king of pop” status.

    Again, films such as Coralie Fargeat’s The Substance (2024) tackle the idea of the ageing celebrity with more clarity and originality, even while clearly being inspired by other movies.

    Consequently, Opus has quite a 1990s feel to it, perhaps aided by the casting of Malkovich and Juliette Lewis, both huge stars during that decade. The film also gets a bit meta, nodding to Spike Jonze’s Being John Malkovich (1999) through a similar use of star cameos and a puppet show – both interesting elements, but again which feel disjointed in Opus.

    I think Green has stronger films to come, but although his work raises interesting points, there are too many ideas here for a convincing film to properly materialise. I was unclear on a number of things, including Moretti’s motives and his contempt for critics – even the positive ones.

    Opus perhaps bites off more than it can chew, leaving me feeling that Green’s directorial opus is still to come.

    Daniel O’Brien does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Opus: clunky satire about an evil celebrity cult has plenty to say – it just doesn’t know how to say it – https://theconversation.com/opus-clunky-satire-about-an-evil-celebrity-cult-has-plenty-to-say-it-just-doesnt-know-how-to-say-it-252118


  • MIL-OSI Global: Who are the Baloch Liberation Army? Pakistan train hijacking was fuelled by decades of neglect and violence

    Source: The Conversation – UK – By Sameen Mohsin Ali, Lecturer in International Development, University of Birmingham

    Pakistan’s army has freed hundreds of hostages from a passenger train that was seized by armed militants in the south-western province of Balochistan on Tuesday, March 11. A number of those on board were military officials and police personnel travelling from Balochistan’s capital, Quetta, to Peshawar further north.

    The Baloch Liberation Army (BLA) quickly claimed responsibility for the hijacking. In a written statement sent to the Guardian, the group said its actions were “a direct response to Pakistan’s decades-long colonial occupation of Balochistan and the relentless war crimes committed against the Baloch people”.

    Ever since 1948, when Balochistan became a province of Pakistan months after partition from India, this territory has been marginalised by the Pakistani state. The authorities have struggled to accommodate the diverse ethnic and linguistic groups within Balochistan, leading to several rounds of insurgency.

    During the recent hijack, the BLA demanded that Pakistan’s military release Baloch activists, missing people and political prisoners, and threatened to kill many of the hostages if the authorities did not comply. The subsequent military operation, which lasted two days, resulted in the deaths of all 33 militants, as well as 21 hostages and four army personnel.

    The brazen nature and scale of the attack has raised difficult questions for the Pakistani state about how it addresses escalating discontent and militancy in Balochistan.

    Who are the BLA?

    The BLA is a separatist group that emerged in the early 2000s. It is considered a terrorist organisation by the Pakistani authorities and several western countries.

    Unlike more moderate Baloch nationalist groups, which are committed to remaining part of the Pakistani state despite longstanding grievances with it, the BLA aims to achieve an independent Balochistan.

    Some of the grievances expressed by the Baloch include a lack of representation both in the federal government and the armed forces. Baloch nationalists also allege the Pakistani state has exploited the province’s coal, gold, copper and gas resources while providing very little for the Baloch people in return.

    Revenues from the Saindak gold and copper mine, for example, are largely shared between the Chinese company that operates it and the Pakistani government. The Balochistan provincial government only receives around 5% of the mine’s revenue.

    Chaghi, the mineral-rich district of Balochistan that hosts the Saindak mine, remains one of the most underdeveloped areas of the country. Local people employed at the mine claim they are only offered menial jobs and work in unsafe conditions.

    Balochistan’s persistent underdevelopment means a poor quality of life for its citizens. It consistently ranks as the Pakistani province with the lowest human development index (HDI) rating, scoring 0.421 in 2017. This index is a summary rating between 0 (low) and 1 (high) based on measures of health, education and standard of living. Punjab has the highest HDI rating at 0.732.

    Balochistan is located in south-west Pakistan.
    Calligraphy786 / Shutterstock

    The separatist movement in Balochistan intensified after Nawab Akbar Bugti, a prominent Baloch nationalist leader, was killed in a military operation in 2006. The BLA was soon banned by the Pakistani government, and the military’s operations intensified in the province.

    Baloch human rights defenders and activists have persistently accused Pakistan’s security forces of harassment and relying on excessive force. Protesters believe there have been thousands of enforced disappearances and extrajudicial killings, which the Pakistani authorities have denied.

    The issue has been raised by human rights organisations both in Pakistan and abroad. Families of missing people have filed cases against the government with the Pakistan Supreme Court, and disappearances have been investigated through special commissions of inquiry.

    Supreme Court rulings have held the state responsible for enforced disappearances. While some missing people have been traced as a result of these rulings and inquiries, the International Commission of Jurists notes that “there has been no apparent effort made to fix responsibility for this heinous crime”.

    Attacking foreign investments

    The BLA’s tactics have typically involved carrying out attacks against state installations. However, in recent years, attacks against Chinese citizens and infrastructure have become the group’s focus.

    Balochistan has a strategically important coastline, providing access to the Indian Ocean. China has invested heavily in the region as part of its Belt and Road Initiative, including in a deep-sea port at Gwadar. But these investments have failed to benefit local people, fuelling accusations by many in the province that the Pakistani state is systematically neglecting their needs.

    The BLA’s suicide squad was responsible for an attack that injured three Chinese engineers working in the Balochistan city of Dalbandin in 2018. Later that year, BLA militants attacked the Chinese consulate in Karachi – though Chinese nationals remained safe in that attack.

    The group seems to have no difficulty attracting young and well-educated Baloch people, who see the state’s actions and Chinese presence in Balochistan as exploitative. In 2022, a female graduate student carried out a suicide attack on behalf of the BLA that killed three Chinese teachers at the University of Karachi.

    The BLA’s activities have expanded substantially in recent years. It has conducted more than 150 attacks in the past year alone, including on Quetta railway station and on a convoy carrying Chinese workers near Karachi airport.

    However, experts have noted that the train hijacking was unprecedented in scale. It represents a significant escalation by the BLA in terms of the planning, resources and intelligence required to execute such an operation.

    The Pakistani government and military appear to have mishandled Balochistan’s security situation. But they have also failed to address the growing resentment and alienation that is driving people to groups like the BLA.

    According to Farzana Sheikh, an associate fellow at Chatham House, Pakistan’s military continues to favour “a heavy-handed security response to deal with what is widely judged to be a political crisis”.

    Accusations of state exploitation and neglect will not go away until the Pakistani state radically alters its stance on Balochistan, starting by ensuring accountability for perpetrators of human rights violations. Only then can trust be rebuilt with the people of this province who, according to the Human Rights Commission of Pakistan, live in “a climate of fear”.

    Sameen Mohsin Ali does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Who are the Baloch Liberation Army? Pakistan train hijacking was fuelled by decades of neglect and violence – https://theconversation.com/who-are-the-baloch-liberation-army-pakistan-train-hijacking-was-fuelled-by-decades-of-neglect-and-violence-252120


  • MIL-OSI Global: The White House press pool became a way to control journalists – Trump is taking this to new levels

    Source: The Conversation – UK – By Colin Alexander, Senior Lecturer in Political Communications, Nottingham Trent University

    The recently appointed White House press secretary, Karoline Leavitt, has begun her tenure combatively, aggressively defending the Trump administration’s policies and, at times, mimicking Donald Trump’s methods of dealing with the mainstream news media.

    Faced recently with a legitimate question by an Associated Press (AP) reporter who challenged Trump’s introduction of tariffs against several countries, she accused the reporter of doubting her knowledge of economics. She then dismissed him, saying: “I now regret giving a question to the Associated Press.”

    AP is one of the key media organisations reporting on the White House. The largest news agency in the US, its stories are carried by news groups around the world. But recently, AP was ejected from the “press pool” that covers White House business.

    It was excluded in mid-February for refusing to call the Gulf of Mexico “the Gulf of America”, after Trump changed its name by executive order. This was followed by an announcement that the White House would take greater control of the press pool and choose which outlets would be given most access to the president. This is likely to be based on favourable coverage rather than quality of reporting.

    To appreciate how significant this is, it is important to first state the fundamental purpose of journalism in a democratic society, which is to hold the powerful to account. This is known as its “watchdog” function.

    The work of Washington Post reporters Bob Woodward and Carl Bernstein in exposing the Watergate scandal during the 1970s is often held up as the gold standard of watchdog journalism. It ultimately led to the resignation of Richard Nixon as president and the imprisonment of his White House counsel, John Dean.

    “Pooling” describes the process by which a prominent organisation or individual seeks to manage journalistic scrutiny by controlling access. King Charles, for example, also operates a press pool.

    It works in two stages. First, news organisations or individual journalists apply to be members of the pool. Then, a handful of journalists from the pool are selected each day or week for access. These journalists – through their pool contract – are required to share the information they gather with the other journalists in the pool, which often leads to a genericisation of the content.

    Thus, while political organisations or elite individuals might claim the pooling system is used as a benign and fair tool to manage consistent press interest, in reality it is a weapon of communications control.

    The White House’s press pool was first established under President Dwight Eisenhower as a reflection of the growing number of journalists based in Washington. But in the modern era, the use of pooling was most controversial during and after the first Gulf War of the early 1990s.

    Rather than roaming the battlefields of Iraq and Kuwait, most western reporters spent the conflict at the media centre in Dhahran, Saudi Arabia, some 250 miles from the Kuwait border. Here they were fed the information that the US military wanted the public to know. A small number of pooled journalists were then occasionally accompanied by US troops to the battlefield in what was a clear case of censorship by access and perspective limitation.

    This military-media power dynamic – and the subsequent mismatch between the actuality of the war and the reporting of it – led the French philosopher Jean Baudrillard to declare in a 1991 essay, published by Libération and The Guardian, that “The Gulf war did not take place”.

    General “Stormin’ Norman” Schwarzkopf’s famous “luckiest man in Iraq” briefing is indicative of the close relationship that developed between military and media professionals during the conflict. Schwarzkopf showed journalists footage, taken through the crosshairs of a US bomber, of an Iraqi private car driving over a bridge moments before a US airstrike destroyed it. You can hear the journalists laughing with Schwarzkopf as they watch this lucky escape.

    Legacy of Vietnam

    Despite widespread understanding that scrutiny is an important part of public officialdom, the legacy of the Vietnam War – a conflict the US was perceived both at home and around the world to have lost – led to a significant amount of distrust of journalists. US media analyst Daniel Hallin referred to Vietnam as the “uncensored war”. By this he meant that journalists enjoyed an unprecedented amount of freedom – exacerbated by the relatively new medium of television, which brought stark images of war directly into people’s living rooms.

    By February 1968, the US military’s daily briefings from the Rex Hotel in Saigon had become known as the “five o’clock follies”, on account of the gulf between official claims of the war’s “progress” and what was being reported by journalists who had ventured into the field. The military consistently presented a positive narrative – in stark contrast to the esteemed CBS reporter Walter Cronkite’s analysis that: “To say that we are mired in stalemate seems the only realistic, yet unsatisfactory, conclusion.”

    Vietnam could have been an opportunity for governments to think about their obligation to truth and the requirement to be more ethical in their approach. Instead, the feeling in Washington was that unfavourable press coverage had lost the war, and that journalists needed to be curtailed.

    Controlling the message

    The recent decision by the Trump administration to take over selection of pool journalists from the notionally independent White House Correspondents’ Association is unsurprising. The approach is consistent with the first Trump presidency’s refusal to answer questions from journalists who tried to carry out the press’s watchdog function.

    It also fits with Trump’s electioneering approach during 2024 when he shunned traditional news outlets, focusing instead on social media and appearing on the podcasts of Joe Rogan and Andrew Schulz, for example.

    To this end, the White House’s decision amounts to a power grab against the institution of modern journalism – even if much of the US media has been in thrall to the powerful ever since Vietnam.

    Colin Alexander does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The White House press pool became a way to control journalists – Trump is taking this to new levels – https://theconversation.com/the-white-house-press-pool-became-a-way-to-control-journalists-trump-is-taking-this-to-new-levels-250960


  • MIL-OSI Global: As Mark Carney is sworn in, America’s democratic decline has critical lessons for Canadian voters

    Source: The Conversation – Canada – By Matthew Lebo, Professor, Department of Political Science, Western University

    Prime Minister Mark Carney and his cabinet have been sworn in, ending Justin Trudeau’s time in office and paving the way for a spring election. Canadians are soon heading to the polls as they watch American democracy crumble.

    United States President Donald Trump recently argued “he who saves his country does not violate any Law” as he ignores Congress and the courts, governs by executive order and threatens international laws and treaties.






    Once-stable democratic institutions are failing to hold an authoritarian president in check.

    What lessons are there to protect Canadian democracy as the federal election approaches?

    Elites lead the way

    First, it’s important to delve into how so many Americans have become tolerant of undemocratic actions and politics in the first place. It’s not that Republican voters first became more extreme and then chose a representative leader. Rather, public opinion and polarization are led by elites.

    Republican leaders moved dramatically to the right, and the primary system allowed the choice of an extremist. Republican voters then aligned their opinions with his. Trump’s disdain for democratic fundamentals spread quickly. Partisans defending their team slid away from democratic values.

    Canada’s more centrist ideological spectrum is not foolproof against this type of extremism. Public opinion can be moved when our leaders take us there.

    Decline can start slowly and then accelerate. America’s democratic backsliding in the first weeks of Trump’s second presidency follows the erosion of democratic norms over decades. Republican attacks on institutions, the opposition, the media and higher education corrosively undermined public faith in the truth, including election results.

    Trust in government is holding steady in Canada, however. That provides an important guardrail for Canadian democracy.

    The dangers of courting the far right

    There are also lessons for our political parties. To maximize their seats, Republicans accepted extremists like Marjorie Taylor Greene, but soon needed those types of politicians for key votes.

    The so-called Freedom Caucus, made up of MAGA adherents, forced the choice of a new, more extreme, leader of the House of Representatives. This provides a clear lesson that history has shown many times: it is dangerous for the party on the political right to accommodate the far right, which can quickly take control.

    Once established within the ruling party, extremists can hold their party hostage.

    At a recent meeting of the Munich Security Conference, Vice-President JD Vance pushed European parties to include far-right parties, and Elon Musk outright endorsed the far-right Alternative for Germany party.

    Austria recently avoided the inclusion of the far right in its new coalition, and now Germany is working to do the same. As Canada’s Conservatives look for every vote, courting far-right voters and candidates risks destabilizing the system.

    Can it happen in Canada?

    How safe is Canada’s Westminster-style parliamentary democracy?

    The fusion of legislative and executive power in parliamentary systems like Canada’s seems prone to tyranny. America’s constitutional framers thought so when they designed a system with separate legislative, executive and judicial branches that could check each other’s power.

    They clearly did not imagine party loyalty negating the safeguards that protect democracy from an authoritarian-minded president. The Constitution gives Congress the power to legislate and impeach, limits the executive’s power to spend and make appointments, gives the judiciary the power to hold an executive accountable, and contains the 25th Amendment, which allows the cabinet to remove a president.

    But when one party controls the legislative and executive branches during a time of hyper-partisanship, these mechanisms may not constrain an authoritarian. Today, Republican loyalty has eroded these checks and balances and American courts are struggling to step up to their heightened role.

    Although it may seem counter-intuitive, parliamentary systems like Canada’s are usually less susceptible to authoritarianism than presidential ones, because the cabinet or the House of Commons can turn against a lawless leader.

    Still, a popular authoritarian leader can retain their party’s support, and then things can slide quickly. The rightward pull of extremists seen in the U.S. House would be more dangerous here, since the Canadian House of Commons includes our executive.

    Guarding against xenophobia

    Lastly, Canada should be wary of xenophobic rhetoric.

    “America First” is not simply shopping advice. It began as an isolationist slogan during the First World War but was soon adopted by pro-fascists, American Nazis and the Ku Klux Klan. These groups questioned who was really American and wanted not only isolationism, but racist policies, immigration restrictions and eugenics.

    Trump did not revive the phrase accidentally. It’s a call to America’s fringes. Alienating domestic groups is a sure sign of democratic decline.

    “Canada First” mimics that century-long dark theme in America. Used as a message by one party, in combination with contempt for the opposition, it questions the right of other parties to legitimately hold power.

    Also, asserting that “Canada is broken” — as Conservative Leader Pierre Poilievre often does — mimics Trump’s talk of American carnage, language and imagery he uses to justify extraordinary presidential authority.

    Such language erodes citizens’ trust in democratic institutions and primes voters to support undemocratic practices in the name of patriotism. Canadian parties and politicians should exit that road.

    Ultimately, institutions alone do not protect a country from the rise of authoritarianism. Democracy can be fragile. As a federal election approaches in Canada, it’s important to know the warning signs of extremism and anti-democratic practices that are creeping into our politics.

    Matthew Lebo does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. As Mark Carney is sworn in, America’s democratic decline has critical lessons for Canadian voters – https://theconversation.com/as-mark-carney-is-sworn-in-americas-democratic-decline-has-critical-lessons-for-canadian-voters-251544


  • MIL-OSI Global: When algorithms take the field – inside MLB’s robo-umping experiment

    Source: The Conversation – USA – By Arthur Daemmrich, Professor of Practice in the School for the Future of Innovation in Society, Arizona State University

    MLB’s automated ball-strike technology could be used in big league games as soon as 2026. Rich Schultz/Getty Images

    Baseball fans tuning into spring training games may have noticed another new wrinkle in a sport that’s experienced a host of changes in recent years.

    Batters, pitchers and catchers can challenge a home plate umpire’s ball or strike call. Powered by Hawk-Eye ball-tracking technology, the automated ball-strike system replays the pitch trajectory to determine whether the umpire’s call was correct.

    To minimize disruptions, Major League Baseball permits each team a maximum of two failed challenges per game but allows unlimited challenges as long as they’re successful. For now, the technology will be limited to the spring exhibition games. But it could be implemented in the regular season as soon as 2026.
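
    As described, the challenge budget is stateful: a successful challenge costs a team nothing, while a failed one counts against its limit of two. A minimal sketch of that bookkeeping, with names of my own invention rather than anything MLB publishes:

```python
class ChallengeBudget:
    """Track the automated ball-strike challenge rule: a team may
    fail at most two challenges per game, but successful challenges
    are never charged against the limit."""

    MAX_FAILED = 2

    def __init__(self):
        self.failed = 0

    def can_challenge(self):
        # A team may challenge as long as it has failures in hand.
        return self.failed < self.MAX_FAILED

    def record(self, overturned):
        # Only a challenge that fails (the call stands) uses up budget.
        if not overturned:
            self.failed += 1
```

    In this sketch, a team that keeps winning its challenges can in principle challenge indefinitely, which is exactly the incentive the rule creates.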

    Count future Hall of Famer Max Scherzer among the skeptics.

    “We’re humans,” the Toronto Blue Jays hurler said after a spring training game in which he challenged two calls and lost both to the robo umps. “Can we just be judged by humans?”

    Technological advances that lead to fairer, more accurate calls are often seen as triumphs. But as co-editors of the recently published volume “Inventing for Sports,” which includes case studies of over 20 sports inventions, we find that new technology doesn’t mean perfect precision – nor does it necessarily lead to better competition from the fan perspective.

    Cue the cameras

    While playing in a cricket match in the 1990s, British computer scientist Paul Hawkins fumed over a bad call. He decided to make sure the same mistake wouldn’t happen again.

    Drawing on his doctoral training in artificial intelligence, he designed an array of high-speed cameras to capture a ball’s flight path and velocity, and a software algorithm that used the data to predict the ball’s likely future path.
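
    The underlying idea – fit a curve to a handful of tracked positions, then extrapolate it forward – can be shown with a toy example. This is purely illustrative: Hawk-Eye’s actual models are proprietary and account for factors such as spin, drag and bounce, none of which appear here.

```python
def extrapolate_quadratic(samples, t_future):
    """Fit y = a*t^2 + b*t + c exactly through three (time, position)
    samples and evaluate it at a later time: the crudest possible
    trajectory prediction from tracked positions."""
    (t0, y0), (t1, y1), (t2, y2) = samples

    def weight(t, ta, tb):
        return (t - ta) * (t - tb)

    # Lagrange interpolation through the three observed points.
    return (y0 * weight(t_future, t1, t2) / weight(t0, t1, t2)
            + y1 * weight(t_future, t0, t2) / weight(t1, t0, t2)
            + y2 * weight(t_future, t0, t1) / weight(t2, t0, t1))
```

    A real tracking system fits many more samples with a physical model and attaches an error estimate to the prediction, but the cameras-then-extrapolate structure is the same.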

    He founded Hawk-Eye Innovations Ltd. in 2001, and his first clients were cricket broadcasters who used the technology’s trajectory graphics to enhance their telecasts.

    By 2006, professional tennis leagues began deploying Hawk-Eye to help officials adjudicate line calls. Cricket leagues followed in 2009, incorporating it to help umpires make what are known as “leg before wicket” calls, among others. And professional soccer leagues started using the technology in 2012 to determine whether balls cross the goal line.

    A technician uses the Hawk-Eye system as part of a broadcast trial for the technology during the 2005 Masters Tennis tournament in London.
    Julian Finney/Getty Images

    Reaction to Hawk-Eye has been mixed. In tennis, players, fans and broadcasters have generally embraced the technology. During a challenge, spectators often clap rhythmically in anticipation as the Hawk-Eye official cues up the replayed trajectory.

    “As a player, and now as a TV commentator,” tennis legend Pam Shriver said in 2006, “I dreamed of the day when technology would take the accuracy of line calling to the next level. That day has now arrived.”

    But Hawk-Eye isn’t perfect. In 2020 and 2022, the firm publicly apologized to fans of professional soccer clubs after its goal-line technology made errant calls after players congregated in the goal box and obstructed key camera sight lines.

    Perfection isn’t possible

    Critics have also raised more fundamental concerns.

    In their 2016 book “Bad Call,” researchers Harry Collins, Robert Evans and Christopher Higgins reminded readers that Hawk-Eye is not a replay of the ball’s actual position; rather, it produces a prediction of a trajectory, based on the ball’s prior velocity, rotation and position.

    The authors lament that Hawk-Eye and what they term “decision aids” have undermined the authority of referees and umpires, which they consider bad for the games.

    Ultimately, there are no purely objective standards for fairness and accuracy in technological officiating. They are always negotiated. Even the most precise officiating innovations require human consensus to define and validate their role. Technologies like photo-finish cameras, instant replay and ball-tracking systems have improved the precision of officiating, but their deployment is shaped – and often limited – by human judgment and institutional decisions.

    For example, today’s best race timing systems are accurate to 0.0001 seconds, yet Olympic sports such as swimming, track and field, and alpine skiing report results in increments of only 0.01 seconds. This can lead to situations – such as Dominique Gisin and Tina Maze’s gold medal tie in the women’s downhill ski race at the 2014 Sochi Olympics – in which the timing officials admitted that their equipment could have revealed the actual winner. But they were forced to report a dead heat under the rules established by the ski federation.
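The arithmetic of that rule is easy to demonstrate. In this sketch the raw times are invented for illustration; the mechanism is simply that two finishes distinct at the timer's 0.0001-second resolution become identical once reported in 0.01-second increments.

```python
# Two hypothetical raw finish times, distinct at the timer's
# 0.0001-second resolution (illustrative numbers, not Sochi data).
raw_a = 100.4623
raw_b = 100.4668

# Rules require reporting in 0.01-second increments (truncation).
official_a = int(raw_a * 100) / 100
official_b = int(raw_b * 100) / 100

print(official_a, official_b)    # both report as 100.46
print(official_a == official_b)  # True: a dead heat on the scoreboard
```

The equipment "knows" who finished first; the rulebook discards that information before it reaches the results sheet.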

    Slow-motion instant replay introduces its own distortions: judging whether a player made a catch, or intended a personal foul, can actually be skewed by low-speed footage, since humans aren’t adept at adjusting to shifting replay speeds.

    One of the big issues with baseball’s automated ball-strike system has to do with the strike zone itself.

    MLB’s rule book defines the strike zone as the depth and width of home plate and the vertical distance between the midpoint of a player’s torso and the point just below his knees. The interpretation of the strike zone is notoriously subjective and varies with each umpire. For example, human umpires often call a strike if the ball crosses the plate in the rear corner. However, the automated ball-strike system uses an imaginary plane that bisects the middle – not the front or the rear – of home plate.

    There are more complications. Since every player has a unique height, each has a unique strike zone. At the outset of spring training, each player’s height was measured – standing up without cleats – and then confirmed through a biomechanical analysis.
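An automated system has to reduce all of this to geometry. The sketch below shows the shape of such a check, evaluated at the plane bisecting the middle of home plate; the plate width is the rulebook's 17 inches, but the vertical scaling factors are illustrative assumptions, not the league's actual formula.

```python
# Half of home plate's rulebook width of 17 inches.
PLATE_HALF_WIDTH = 17.0 / 2

def is_strike(ball_x, ball_z, batter_height):
    """Evaluate a pitch where it crosses the plane bisecting the
    middle of home plate.

    ball_x: horizontal offset from plate center (inches)
    ball_z: height above the ground (inches)
    batter_height: batter's measured height (inches)
    """
    zone_top = 0.56 * batter_height     # ~midpoint of torso (assumed factor)
    zone_bottom = 0.27 * batter_height  # ~just below the knees (assumed factor)
    return (abs(ball_x) <= PLATE_HALF_WIDTH
            and zone_bottom <= ball_z <= zone_top)

# A 74-inch batter: zone spans roughly 20 to 41 inches high.
print(is_strike(0.0, 30.0, 74))   # down the middle -> True
print(is_strike(10.0, 30.0, 74))  # wide of the plate -> False
```

Every hard-coded number in that function is a policy choice dressed up as code, which is exactly why the questions below about stances and cleats matter.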

    Eddie Gaedel, the shortest player in major league baseball history, had a much smaller strike zone than his peers. He drew a walk in his only at-bat.
    Bettmann/Getty Images

    But what if a player changes their batting stance and decides to crouch? What if they change their cleats and raise their strike zone by an extra quarter-inch?

    Of course, as has been the case in tennis, soccer and other sports, Hawk-Eye can help rectify genuinely bad calls. By allowing teams to correct the most disputed calls without eliminating the human element of umpiring, MLB hopes to strike a balance between tradition and change.

    Fans have the final say

    Finding a balance between machine precision and the human element of baseball is crucial.

    Players’ and managers’ efforts to work the umpires to contract or expand the strike zone have long been a part of the game. And fans eagerly cheer or jeer players and managers who argue with the umpires. When ejections take place, more yelling and taunting ensues.

    Though often unacknowledged in negotiations between leagues and athletes, fan enthusiasm is a key component of whether to adopt new technology.

    For example, innovative “full-body” swimsuits contributed to a wave of record-breaking finishes in the sport between 2000 and 2009. But uneven access to the newest gear raised the specter of what some called “technological doping.” World Aquatics worried that as records fell simply due to equipment innovations, spectators would stop watching and broadcast and sponsorship revenue would dry up. The swimming federation ended up banning full-body swimsuits.

    When managers argue balls and strikes, it can make for great TV.

    Of course, algorithmic officiating differs from technologies that enhance performance and speed. But it runs a similar risk of turning off fans. So MLB, like other sports leagues, is being thrust into the role of managing technological change.

    Assessing technologies for their immediate and long-term impact is difficult enough for large government agencies. Sports leagues lack those resources, yet are nonetheless being forced to carefully consider how they introduce and regulate various innovations.

    MLB, to its credit, is proceeding incrementally. While the logical conclusion to the current automated ball-strike experiment would be fully electronic officiating, we think fans and players will resist going that far.

    The league’s challenge system is a test. But the real umpires will ultimately be the fans.

    Arthur Daemmrich receives funding from the National Science Foundation and The Lemelson Foundation.

    For the research underlying this article, Eric S. Hintz and the Smithsonian Institution received funding from the National Science Foundation, the Lemelson Foundation, the United States Patent and Trademark Office, Nike, Inc., the Patrick J. McGovern Foundation, the Shō Foundation, ConocoPhillips, and the Hopper-Dean Family Fund.

    Any opinions, findings, conclusions, or recommendations expressed are the authors’ and do not necessarily reflect the views of the National Science Foundation or any other funder.

    ref. When algorithms take the field – inside MLB’s robo-umping experiment – https://theconversation.com/when-algorithms-take-the-field-inside-mlbs-robo-umping-experiment-251094

    MIL OSI – Global Reports

  • MIL-OSI Global: The push to restore semiconductor manufacturing faces a labor crisis − can the US train enough workers in time?

    Source: The Conversation – USA – By Michael Moats, Professor of Metallurgical Engineering, Missouri University of Science and Technology

    Semiconductors power nearly every aspect of modern life – cars, smartphones, medical devices and even national defense systems. These tiny but essential components make the information age possible, whether they’re supporting lifesaving hospital equipment or facilitating the latest advances in artificial intelligence.

    It’s easy to take them for granted, until something goes wrong. That’s exactly what happened when the COVID-19 pandemic exposed major weaknesses in the global semiconductor supply chain. Suddenly, to name just one consequence, new vehicles couldn’t be finished because chips produced abroad weren’t being delivered. The semiconductor supply crunch disrupted entire industries and cost hundreds of billions of dollars.

    The crisis underscored a hard reality: The U.S. depends heavily on foreign countries – including China, a geopolitical rival – to manufacture semiconductors. This isn’t just an economic concern; it’s widely recognized as a national security risk.

    That’s why the U.S. government has taken steps to invest in semiconductor production through initiatives such as the CHIPS and Science Act, which aims to revitalize American manufacturing and was passed with bipartisan support in 2022. While President Donald Trump has criticized the CHIPS and Science Act recently, both he and his predecessor, Joe Biden, have touted their efforts to expand domestic chip manufacturing in recent years.

    Yet, even with bipartisan support for new chip plants, a major challenge remains: Who will operate them?

    Minding the workforce gap

    The push to bring semiconductor manufacturing back to the U.S. faces a significant hurdle: a shortage of skilled workers. The semiconductor industry is expected to need 300,000 engineers by 2030 as new plants are built. Without a well-trained workforce, these efforts will fall short, and the U.S. will remain dependent on foreign suppliers.

    This isn’t just a problem for the tech sector – it affects every industry that relies on semiconductors, from auto manufacturing to defense contractors. Virtually every military communication, monitoring and advanced weapon system relies on microchips. It’s not sustainable or safe for the U.S. to rely on foreign nations – especially adversaries – for the technology that powers its military.

    For the U.S. to secure supply chains and maintain technological leadership, I believe it would be wise to invest in education and workforce development alongside manufacturing expansion.

    Building the next generation of semiconductor engineers

    Filling this labor gap will require a nationwide effort to train engineers and technicians in semiconductor research, design and fabrication. Engineering programs across the country are taking up this challenge by introducing specialized curricula that combine hands-on training with industry-focused coursework.

    Clean rooms, a vital part of semiconductor factories, are also where the next generation of tech innovators conduct research. Here, a Ph.D. candidate is seen in an air shower room before entering a clean room at Tokyo University on May 1, 2024.
    Yuichi Yamazaki/Getty Images

    Future semiconductor workers will need expertise in chip design and microelectronics, materials science and process engineering, and advanced manufacturing and clean room operations. To meet this demand, it will be important for universities and colleges to work alongside industry leaders to ensure students graduate with the skills employers need. Offering hands-on experience in semiconductor fabrication, clean-room-based labs and advanced process design will be essential for preparing a workforce that’s ready to contribute from Day 1.

    At Missouri University of Science and Technology, where I am the chair of the materials science and engineering department, we’re launching a multidisciplinary bachelor’s degree in semiconductor engineering this fall. Other universities across the U.S. are also expanding their semiconductor engineering options amid strong demand from both industry and students.

    A historic opportunity for economic growth

    Rebuilding domestic semiconductor manufacturing isn’t just about national security – it’s an economic opportunity that could benefit millions of Americans. By expanding training programs and workforce pipelines, the U.S. can create tens of thousands of high-paying jobs, strengthening the economy and reducing reliance on foreign supply chains.

    And the race to secure semiconductor supply chains isn’t just about stability – it’s about innovation. The U.S. has long been a global leader in semiconductor research and development, but recent supply chain disruptions have shown the risks of allowing manufacturing to move overseas.

    If the U.S. wants to remain at the forefront of technological advancement in artificial intelligence, quantum computing and next-generation communication systems, it seems clear to me it will need new workers – not just new factories – to gain control of its semiconductor production.

    Michael Moats does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The push to restore semiconductor manufacturing faces a labor crisis − can the US train enough workers in time? – https://theconversation.com/the-push-to-restore-semiconductor-manufacturing-faces-a-labor-crisis-can-the-us-train-enough-workers-in-time-245516

    MIL OSI – Global Reports

  • MIL-OSI Global: When humans use AI to earn patents, who is doing the inventing?

    Source: The Conversation – USA – By W. Keith Robinson, Professor of Law, Wake Forest University

    Only humans can be awarded patents, but AIs can do a lot of the work to earn them. lineartestpilot/iStock via Getty Images

    The advent of generative artificial intelligence has sent shock waves across industries, from the technical to the creative. AI systems that can generate viable computer code, write news stories and spin up professional-looking graphics have inspired countless headlines asking whether they will take away jobs in technology, journalism and design, among many other fields.

    And these new ways of doing work and making things raise another question: In the era of AI, what does it mean to be an inventor?

    Among technologists who build digital tools or programs, it is increasingly common to use AI as part of design and development processes. But as deep learning models flex their technical muscles more and more, even highly skilled researchers who are using AI in their work have begun to express concerns about becoming obsolete.

    There is much debate about whether AI can augment human creativity, but emerging data suggests that the technology can boost research and development where creativity typically plays an important role. A recent study by MIT economics doctoral student Aidan Toner-Rodgers found that scientists using AI tools increased their patent filings by 39% and created 17% more prototypes than when they worked without such tools.

    While this study indicates that AI seemed to help humans be more productive, it also showed there was a downside: 82% of the surveyed researchers felt less satisfied with their jobs since implementing AI in their workflows. “I couldn’t help feeling that much of my education is now worthless,” one researcher said.

    This emerging dynamic leads to a related question: If a scientist uses AI in order to build something new, does the output still qualify as an invention? As a legal scholar who studies technology and intellectual property law, I see the growing power of AI shifting the legal landscape.

    Natural persons

    In 2020, the United States Patent and Trademark Office refused to list the AI system DABUS, which purportedly designed a food container and a flashing emergency beacon, as an inventor on patent applications. Subsequent court rulings clarified that under current U.S. law, only humans can be listed as inventors, but they left open the question of whether inventions developed by scientists with the help of AI qualify for patent protection.

    The concept of inventorship and legal protections for inventions have deep roots in the U.S. The Constitution explicitly protects the “exclusive rights” of authors and inventors “to their respective writings and discoveries,” reflecting the framers’ strong conviction that the state should protect and encourage original ideas.

    The first U.S. patent, granted in 1790 and signed by George Washington.
    United States Patent and Trademark Office

    U.S. law today defines an inventor as a natural person who has conceived of a complete and operative invention that can be used without extensive research or experimentation. An inventor must do more than follow routine instructions – they must make an intellectual contribution in producing something novel.

    That contribution can be a key idea that sparks the invention or a crucial insight that turns the concept into a working product. If a person’s input is routine or just explains what’s already known, they are not an inventor.

    Role of AI

    To what extent can or should AI become part of the invention process? The release of AI applications such as ChatGPT in 2022 introduced the public to large language models and sparked renewed debate about whether and how AI should be used in the inventive process. That same year, the U.S. Court of Appeals for the Federal Circuit heard a case that tested whether AI could be named as an inventor on a patent application.

    The court concluded that under U.S. law, inventors must be human beings. The ruling reaffirmed the idea that Congress intended to encourage human beings, not machines, to invent. This idea remains foundational to current patent policy.

    In light of the court’s decision, in 2024 the United States Patent and Trademark Office updated its guidance to clarify the role of AI in the inventive process. The guidance reaffirms that an inventor must be human. However, the Patent and Trademark Office explained that the policy did not preclude inventors from using AI tools to assist in the research and development of inventions. This approach acknowledges how the rapid development of AI technologies has allowed researchers to make exciting breakthroughs.

    Policymakers seem to understand that if the U.S. is to continue to lead the world in innovation, the mythology of the sole inventor toiling away in a garage on pure intellect must evolve. It must account for the value of AI tools that research has shown make humans more productive.

    Nevertheless, since only human beings can be named as inventors on a patent, current policy does not quite answer the question of who or what should get credit for doing the work. Despite a growing trend where researchers are expected to disclose whether they’ve used AI tools, for example in academic papers, the U.S. patent system makes no such demand.

    Regardless of AI’s role in the research and development process, a U.S. patent will list only the names of human inventors so long as those humans made a significant contribution to the invention. As a result, current policy is not concerned with how to recognize the contributions of AI. AI is considered a tool like a microscope or a Bunsen burner.

    Personal ingenuity in the age of AI

    Given this shifting legal landscape, I see that U.S. innovation policy is at a crossroads. The Patent and Trademark Office’s guidance reaffirming human inventorship and simultaneously embracing AI as an innovation tool is only a year old. It is unclear how the Trump administration’s forthcoming action plan to “enhance America’s global AI dominance” will affect this guidance.

    Some observers expect the rate of scientific discovery to increase dramatically with the assistance of AI tools. But if the majority of those same productive researchers enjoy their jobs less, is the act of inventing being encouraged as the framers envisioned?

    Current U.S. policy attempts to strike a balance and recognize the concept of personal ingenuity, stemming from the principle that for an invention to be patented in the U.S., a human must have led the way. Yet the guidance also implicitly acknowledges that AI can lend a helping hand in modern research and development. Whether and how policymakers maintain this balance – and how leaders in industry and science respond – will help shape the next chapter of American innovation.

    W. Keith Robinson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. When humans use AI to earn patents, who is doing the inventing? – https://theconversation.com/when-humans-use-ai-to-earn-patents-who-is-doing-the-inventing-248216

    MIL OSI – Global Reports

  • MIL-OSI Global: Why parents of ‘twice-exceptional’ children choose homeschooling over public school

    Source: The Conversation – USA – By Rachael Cody, Postdoctoral Scholar in the Department of Education, Oregon State University

    More Americans are homeschooling their kids. Chris Hondros/Newsmakers via Getty Images

    Homeschooling has exploded in popularity in recent years, particularly since the pandemic. But researchers are still exploring why parents choose to homeschool their children.

    While the decision to homeschool is often associated with religion, a 2023 survey found that the two top reasons people cited as most important were a concern about the school environment, such as safety and drugs, and a dissatisfaction with academic instruction.

    I studied giftedness, creativity and talent as part of my Ph.D. program focusing on students who are “twice exceptional” – that is, they have both learning challenges such as autism or attention-deficit/hyperactivity disorder as well as advanced skills. A better understanding of why parents choose homeschooling can help identify ways to improve the public education system. I believe focusing on twice-exceptional students can offer insights beyond this subset of the homeschooled population.

    What we know about homeschooling

    The truth is researchers don’t know much about homeschooling and homeschoolers.

    One problem is regulations involving homeschooling differ dramatically among states, so it is often hard to determine who is being instructed at home. And many families are unwilling to talk about their experiences homeschooling and their reasons for doing so.

    But here’s what we do know.

    The share of children being homeschooled has surged since 2020, rising from 3.7% in the 2018-2019 school year to 5.2% in 2022-2023 – the latest data available from the National Center for Education Statistics. Over 3 million students were homeschooled in 2021-22, according to the National Home Education Research Institute.

    And the population of homeschoolers is becoming increasingly diverse, with about half of families reporting as nonwhite in a 2023 Washington Post-Schar School poll. In addition, homeschooling families are just as likely to be Democrat as Republican, according to that same Post-Schar survey, a sharp shift from previous surveys that suggested Republicans were much more likely to homeschool.

    As for why parents homeschool, 28% of those surveyed in 2023 by the Institute of Education Sciences said the school environment was their biggest reason, followed by 17% who cited concerns about academic instruction. Another 17% said providing their kids with moral or religious instruction was most important.

    But not far behind at 12% was a group of parents who prioritized homeschooling for a different reason: They have a child with physical or mental health problems or other special needs.

    This group would include parents of twice-exceptional children, who may be especially interested in pursuing homeschooling as an alternative method of education for three reasons in particular.

    Some families have devoted significant resources, such as by creating home libraries, to homeschool their children.
    AP Photo/Charles Krupa

    1. The ‘masking’ problem

    These parents may notice that their child’s needs are being overlooked in the public education system and may view homeschooling as a way to provide better individualized instruction.

    Students who are twice exceptional often experience what researchers call the “masking” phenomenon. This can occur when a child’s disabilities hide their giftedness. When this occurs, teachers tend to provide academic support but hesitate to give these children the challenging material they may require.

    Masking can also occur in reverse, when a student’s gifts tend to hide disabilities. In these cases, teachers provide challenging material, but they do not provide the needed accommodations that allow the gifted child to access the materials. Either way, masking can be a problem for students and parents who must advocate for teachers to address their unique range of academic needs.

    While either type of masking is challenging for the student, it may be particularly frustrating for parents of twice-exceptional students to watch classroom teachers focus only on their child’s weaknesses rather than helping them develop their advanced abilities.

    2. Individualized instruction

    By the time a child enters school, parents have spent years observing their child’s development, comparing their progress with that of others their age. They’re also likely to be aware of their child’s unique interests.

    While this may not be true for all parents, those who choose to homeschool may do so because they feel they have more of an ability and interest in catering to their child’s unique needs than a classroom teacher who is tasked with teaching many students simultaneously. Parents of students who demonstrate exceptional ability have expressed concerns about their child’s future educational opportunities in a public school setting.

    Additionally, parents may become exhausted by their efforts to advocate for their child’s unique needs in the school system. Parents of students who demonstrate advanced abilities often pull their children out of public school after repeated efforts to improve communication between home and school.

    3. Behavioral and emotional needs

    Gifted students who have emotional or behavioral disabilities may find it difficult to demonstrate their abilities in the classroom.

    All too often, teachers may be more focused on disciplining these students rather than addressing their academic needs. For example, a child who is bored with the class material may be loud and attempt to distract others as well.

    Rather than recognizing this as signaling a need for more advanced material, the teacher might send the child to a separate area in the classroom or in the school to refocus or as punishment. Parents may feel better equipped than teachers to address both their child’s challenging behaviors and their gifted abilities, given the knowledge they have about their child’s history, interests, strengths and areas needing improvement.

    Supporting students’ needs

    Gaining a better understanding of the motivations driving parents to take their children out of the public school system is an important step toward improving schools so that fewer will feel the need to take this path.

    Additionally, strengthening educators’ and policymakers’ understanding about twice-exceptional homeschooled students may help communities provide more support to their families – who then may not feel homeschooling is the only or best option. My research shows that many schools can do a better job providing these types of students and their parents with the support they need to thrive.

    Rachael Cody does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why parents of ‘twice-exceptional’ children choose homeschooling over public school – https://theconversation.com/why-parents-of-twice-exceptional-children-choose-homeschooling-over-public-school-244385

    MIL OSI – Global Reports

  • MIL-OSI Global: Big cuts at the Education Department’s civil rights office will affect vulnerable students for years to come

    Source: The Conversation – USA – By Erica Frankenberg, Professor of Education and Demography, Penn State

    Senate Minority Leader Chuck Schumer and fellow Democrats criticize President Donald Trump’s plan to shutter the Education Department on March 6, 2025. AP Photo/J. Scott Applewhite

    The U.S. Department of Education cut its workforce by nearly 50% on March 11, 2025, when it laid off about 1,315 employees. The move follows several recent directives targeting the Cabinet-level agency.

    Within the department, the Office for Civil Rights – which already experienced layoffs in February – was especially hard hit by cuts.

    The details remain unclear, but reports suggest that staffs at six of the 12 regional OCR offices were laid off. Because of the office’s role in enforcing civil rights laws in schools and universities, the cuts will affect students across the country.

    As education policy scholars who study how laws and policies shape educational inequities, we believe the Office for Civil Rights has played an important role in facilitating equitable education for all students.

    The latest cuts further compound funding and staffing shortages that have plagued the office. The full effects of these changes on the most vulnerable public school students will likely be felt for many years.

    Few staff members

    The Education Department, already the smallest Cabinet-level agency before the recent layoffs, distributed roughly US$242 billion to students, K-12 schools and universities in the 2024 fiscal year.

    About $160 billion of that money went to student aid for higher education. The department’s discretionary budget was just under $80 billion, a sliver compared with other agencies.

    By comparison, the Department of Health and Human Services received nearly $2.9 trillion in fiscal year 2024.

    Within the Education Department, the Office for Civil Rights had a $140 million budget for fiscal year 2024, less than 0.2% of discretionary funding, which requires annual congressional approval.

    The office has lacked the financial support to carry out its duties effectively. For example, even as complaints filed by students and their families have mounted, the OCR has not had an increase in staff. That leaves thousands of complaints unresolved.

    The office’s appropriated budget in fiscal year 2017 was one-third of the budget of the Equal Employment Opportunity Commission – a federal agency responsible for civil rights protection in the workplace – despite the high number of discrimination complaints that OCR handles.

    Support for OCR

    Despite this underfunding, the office has traditionally received bipartisan support.

    Former Secretary of Education Betsy DeVos, for example, requested a funding decrease for the office during the first Trump administration. Congress, however, overrode her budget request and increased appropriations.

    Likewise, the office’s budget has remained largely unchanged since 2001, regardless of which party held the White House.

    It garners attention for investigating and resolving discrimination-related complaints in K-12 and higher education. And while administrations differ in how they prioritize these investigations, the office has remained an important resource for students for decades.

    But a key function that often goes unnoticed is its collection and release of data through the Civil Rights Data Collection.

    The CRDC is a national database that collects information on various indicators of student access and barriers to educational opportunity. Historically, only 5% of the OCR’s budget appropriations has been allocated for the CRDC.

    Yet academic scholars worry that the continued collection and dissemination of the CRDC might be affected by staff cuts and the cancellation of contracts worth $900 million at the Department of Education’s research arm, the Institute of Education Sciences.

    That’s because the CRDC often relies on data infrastructure that is shared with the institute.

    The history of the CRDC

    The CRDC originated in the late 1960s as required by the Civil Rights Act of 1964. The data questionnaire, which poses questions about civil rights concerns, is usually administered to U.S. public school districts every two years.

    It provides indicators on student experiences in public preschools and K-12 schools. That includes participation rates in curricular opportunities like Advanced Placement courses and extracurricular activities. It also provides data on 504 plans for students with disabilities and English-learner instruction.

    Although there have been some changes to questions over the years, others have been consistent for 50 years to allow for examining changes over time. Some examples are counts of students disciplined by schools’ use of corporal punishment or out-of-school suspension.

    The U.S. Department of Education building is seen in Washington on Dec. 3, 2024.
    AP Photo/Jose Luis Magana

    During the Obama administration, the Office for Civil Rights prioritized making the CRDC more accessible to the public. The administration created a website that allows the public to view information for particular schools or districts, or to download data to analyze.

    Why the CRDC matters

    Our research focuses on how the CRDC has been used and how it could be improved. In an ongoing research project, we identified 221 peer-reviewed publications that have analyzed the CRDC.

    Articles focusing on school discipline – out-of-school suspensions, for example – are the most common. But there are many other topics that would be difficult to study without the CRDC.

    That’s especially true when making comparisons between districts and states, such as whether students have access to advanced coursework or participation in gifted and talented programs.

    The data has also inspired policy changes.

    The Obama administration, informed by the data on the use of seclusion and restraint to discipline students, issued a policy guidance document in 2016 regarding its overuse for students with disabilities.

    Additionally, the data helps examine the effects of judicial decisions and laws – desegregation laws in the South, for example – that have improved educational opportunities for many vulnerable students.

    Amid the Education Department’s continued cancellation of contracts of federally funded equity assistance centers, we believe research partnerships with policymakers and practitioners drawing on CRDC data will be more important than ever.

    Erica Frankenberg and Maithreyi Gopalan received funding from the Student Experience Research Network.

    Maithreyi Gopalan has received research grants and fellowships from various foundations such as the Student Experience Research Network (New Venture Fund), Federation of American Scientists, and others.

    ref. Big cuts at the Education Department’s civil rights office will affect vulnerable students for years to come – https://theconversation.com/big-cuts-at-the-education-departments-civil-rights-office-will-affect-vulnerable-students-for-years-to-come-249716

    MIL OSI – Global Reports

  • MIL-OSI Global: Simple strategies can boost vaccination rates for adults over 65 − new study

    Source: The Conversation – USA – By Laurie Archbald-Pannone, Associate Professor of Medicine and Geriatrics, University of Virginia

    Many older adults are not up to date on their vaccines. Morsa Images via Getty Images

    Older adults were strongly motivated to get vaccinated when they knew which vaccines they should get and heard a clear recommendation from their health care provider about why a particular vaccine is important. That’s a key finding in a recent study I co-authored in the journal Open Forum Infectious Diseases.

    Adults over 65 have a higher risk of severe infections, but they receive routine vaccinations at lower rates than do other groups. My colleagues and I collaborated with six primary care clinics across the U.S. to test two approaches for increasing vaccination rates for older adults.

    In all, 249 patients who were visiting their primary care providers participated in the study. Of these, 116 patients received a two-page vaccine discussion guide to read in the waiting room before their visit. Another 133 patients received invitations to attend a one-hour education session after their visit.

    The guide, which we created for the study, was designed to help people start a conversation about vaccines with their providers. It included checkboxes for marking what made it hard for them to get vaccinated and which vaccines they want to know more about, as well as space to write down any questions they have. The guide also featured a chart listing recommended vaccines for older adults, with boxes where people could check off ones they had already received.

    In the sessions, providers shared in-depth information about vaccines and vaccine-preventable diseases and facilitated a discussion to address vaccine hesitancy.

    In a follow-up survey two months later, patients reported that the most significant barriers they faced were knowing when they should receive a particular vaccine, having concerns about side effects and securing transportation to a vaccination appointment.

    The percentage of patients who said they wanted to get a vaccine increased from 68% to 79% after using the vaccine guide. Following each intervention, 80% of patients reported they discussed vaccines more in that visit than they had in prior visits.

    Of the 14 health care providers who completed the follow-up survey, 57% reported increased vaccination rates following each approach. Half of the providers felt that the use of the vaccine guide was an effective strategy in guiding conversations with their patients.

    A pamphlet at the doctor’s office can empower older patients to ask about vaccines.

    Why it matters

    Only about 15% of adults ages 60-64 and 26% of adults 65 and older are up to date on all the vaccines recommended for their age, according to CDC data from 2022. These include vaccines for COVID-19, influenza, tetanus, pneumococcal disease and shingles.

    Yet studies consistently show that getting vaccinated reduces the risk of complications from these conditions in this age group.

    My research shows that strategies that equip older adults with personalized information about vaccines empower them to start the conversation about vaccines with their clinicians and enable them to be active participants in their health care.

    What’s next

    In the future, we will explore whether engaging patients on this topic earlier is even more helpful than doing so in the waiting room before their visit.

    This might involve having clinical team members or care coordinators connect with patients ahead of their visit, either by phone or through telemedicine that is designed specifically for older adults.

    My research team plans to conduct a pilot study that tests this approach. We hope to learn whether reaching out to these patients before their clinic visits and helping them think through their vaccination status, which vaccines their provider recommends and what barriers they face in getting vaccinated will improve vaccination rates for this population.

    The Research Brief is a short take on interesting academic work.

    Laurie Archbald-Pannone has received funding from Virginia Department of Health and PRIME education. This activity is supported by an independent educational grant from GSK.

    ref. Simple strategies can boost vaccination rates for adults over 65 − new study – https://theconversation.com/simple-strategies-can-boost-vaccination-rates-for-adults-over-65-new-study-250246

    MIL OSI – Global Reports

  • MIL-OSI Global: The psychology behind anti-trans legislation: How cognitive biases shape thoughts and policy

    Source: The Conversation – USA – By Julia Standefer, Ph.D. Student in Psychology, Iowa State University

    Protesters fill the Iowa state Capitol to denounce a bill that will strip the state civil rights code of protections based on gender identity. AP Photo/Charlie Neibergall

    A state law signed Feb. 28, 2025, removes gender identity as a protected status from the Iowa Civil Rights Act, leaving transgender people vulnerable to discrimination. The rights of transgender people – those who present gender characteristics that differ from what has historically been expected of someone based on their biological sex traits – are under political attack across the United States. There are now hundreds of anti-trans bills at various points in the legislative process.

    But why?

    Reasons given usually center on protecting children, protecting cisgender women’s rights in bathrooms and sports competitions, and on removing funding for gender-affirming care. Some efforts appear to stem from fear-driven motives that are not supported by evidence.

    Bias against trans people may not always feel like bias. For someone who believes it to be true, saying there can only be biological men who identify as men and biological women who identify as women may feel like a statement of fact. But research shows that gender is a spectrum, separate from biological sex, which is also more complex than the common male-female binary.

    We are social psychologists who study and teach about the basic social, cognitive and emotion-based processes people use to make sense of themselves and the world. Research reveals psychological processes that bias people in ways they usually aren’t aware of. These common human tendencies can influence what we think about a particular group, influence how we act toward them, and prompt legislators to pass biased laws.

    Root of negative views of transgender people

    Social psychology theory and research point to several possible sources of negative views of transgender people.

    Part of forming your own identity is defining yourself by the traits that make you unique. To do this, you categorize others as belonging to your group – based on characteristics that matter to you, such as race, age, culture or gender – or not. Psychologists call these categories in-groups and out-groups.

    There is a natural human tendency to have inherent negative feelings toward people who aren’t part of your in-group. The bias you might feel against fans of a rival sports team is an example. This tendency may be rooted deep in evolutionary history, when favoring your own safe group over unknown outsiders would have been a survival advantage.

    A trans person’s status as transgender may be the most salient thing about them to an observer, overshadowing other characteristics such as their height, race, profession, parental status and so on. As a small minority, transgender people are an out-group from the mainstream – making it likely out-group bias will be directed their way.

    Anti-trans feeling may also result from fear that transgender people pose threats to one’s personal or group identity. Gender is part of everyone’s identity. If someone perceives their own gender to be determined by their biological sex, they may perceive other people who violate that “rule” as a threat to their own gender identity. Part of identity formation is not just out-group derogation but in-group favoritism. A cisgender person may engage in “in-group boundary protection” by making sure the parameters of “gender” are well defined and match their own beliefs.

    Once you hold negative feelings about someone in an out-group, there are other social psychological processes that may solidify and amplify them in your mind.

    The illusion of a causal connection

    People tend to form illusory correlations between objects, people, occurrences or behaviors, particularly when those things are infrequently encountered. When two distinctive things happen at the same time, people tend to believe that one is causing the other.

    Some superstitions result from this phenomenon. For example, you might attribute an unusual success such as winning money to wearing a particular shirt, which you now think of as your lucky shirt.

    If a person only ever hears about negative events when they see or hear about a transgender person, an immigrant or a member of some other minority group, then an illusory correlation can form between the negative events and the minority group. That connection is the starting point for prejudice: automatic, negative feelings toward a group of people without justification.

    Of course, it is possible that individuals from the group in question have committed some offense. But to take one individual’s bad deed and attribute it to an entire group of people isn’t justified. This kind of extrapolation is the natural human tendency of stereotyping, which can bias people’s actions.

    ‘That’s exactly what I thought’

    Human minds are biased to confirm the beliefs they already hold, including stereotypes about trans people. A few interconnected processes are at play in what psychologists call confirmation bias.

    First, there’s a natural tendency to seek out information that fits with what you already believe. If you think a shirt is lucky, then you’re more likely to look for positive things that happen when you wear it than you are to look for negative events that would seem to disconfirm its luckiness.

    If you think transgender people are dangerous, you are more likely to conduct an internet search for “transgender people who are dangerous” than “transgender people are victims of crime.”

    There’s a second, more passive process in play as well. Rather than actively seeking out confirming information, people also simply pay attention to information that confirms what they thought in the first place and ignore contradictory information. This can happen without you even realizing.

    People also tend to interpret ambiguous events in line with their beliefs – “I must be having a good day, despite some setbacks, because I’m wearing my lucky shirt.” That confirmation bias could explain someone with anti-trans attitudes thinking “that transgender person holding hands with a child must be a pedophile” instead of “that transgender mother is showing love and care for her kid.”

    Finally, people tend to remember things that confirm their beliefs better than things that challenge them.

    Confirmation bias can strengthen an illusory correlation, making it even more likely to influence subsequent actions – whether compulsively wearing a lucky shirt to an anxiety-inducing appointment or not hiring someone because of discriminatory thoughts about the group they belong to.

    Moving past biases

    Awareness of biases is the first step in avoiding them. Setting bias aside allows people to make fair decisions, based on accurate information, and in line with their values.

    However, this is not an easy task in the face of another social psychological process called group polarization. This phenomenon occurs when individuals’ beliefs become more extreme as they talk and listen only to people who hold the same beliefs they do. Think of the social media bubbles that result from interacting only with people who share your perspective.

    Efforts to stifle or prohibit educators’ and librarians’ ability to teach and discuss gender and sexuality topics, openly and fairly, add another challenge. Education through access to impartial, evidence-based information can be one way to help neutralize inherent bias.

    Montana state Rep. Zooey Zephyr, who is transgender, in discussion with a colleague.
    AP Photo/Tommy Martino

    As a final, hopeful point, social psychological research has identified one strategy for overcoming intergroup conflict: forming close contacts with individuals from the “other” group. Having a friend, loved one or trusted and valued colleague who belongs to the out-group can help you recognize their humanity and overcome the biases you hold against that out-group as a whole.

    A relevant and recent example of this scenario came when two transgender state representatives convinced their fellow lawmakers to vote against two extreme anti-trans bills in Montana by making the issue personal.

    All of these decision-making biases influence everyone, not just the lawmakers currently in power. And they can be quite complex, with particular in-group and out-group memberships being hard to define – for instance, factions within religious groups who disagree on particular political issues.

    But understanding and overcoming the biases everyone falls prey to means that optimal decisions can be made for everyone’s well-being and economic vitality. After all, psychology research has repeatedly demonstrated that diversity is good for the bottom line while it simultaneously promotes an equitable and inclusive society. Even from a solely financial perspective, discrimination is bad for all Americans.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. The psychology behind anti-trans legislation: How cognitive biases shape thoughts and policy – https://theconversation.com/the-psychology-behind-anti-trans-legislation-how-cognitive-biases-shape-thoughts-and-policy-251691

    MIL OSI – Global Reports

  • MIL-OSI Global: Radioisotope generators − inside the ‘nuclear batteries’ that power faraway spacecraft

    Source: The Conversation – USA – By Benjamin Roulston, Assistant Professor of Physics, Clarkson University

    Voyager 1, shown in this illustration, has operated for decades thanks to a radioisotope power system. NASA via AP

    Powering spacecraft with solar energy may not seem like a challenge, given how intense the Sun’s light can feel on Earth. Spacecraft near the Earth use large solar panels to harness the Sun for the electricity needed to run their communications systems and science instruments.

    However, the farther into space you go, the weaker the Sun’s light becomes and the less useful it is for powering systems with solar panels. Even in the inner solar system, spacecraft such as lunar or Mars rovers need alternative power sources.

    As an astrophysicist and professor of physics, I teach a senior-level aerospace engineering course on the space environment. One of the key lessons I emphasize to my students is just how unforgiving space can be. In this extreme environment where spacecraft must withstand intense solar flares, radiation and temperature swings from hundreds of degrees below zero to hundreds of degrees above zero, engineers have developed innovative solutions to power some of the most remote and isolated space missions.

    So how do engineers power missions in the outer reaches of our solar system and beyond? The solution is technology developed in the 1960s based on scientific principles discovered two centuries ago: radioisotope thermoelectric generators, or RTGs.

    RTGs are essentially nuclear-powered batteries. But unlike the AAA batteries in your TV remote, RTGs can provide power for decades while hundreds of millions to billions of miles from Earth.

    Nuclear power

    Radioisotope thermoelectric generators do not rely on chemical reactions like the batteries in your phone. Instead, they rely on the radioactive decay of elements to produce heat and eventually electricity. While this concept sounds similar to that of a nuclear power plant, RTGs work on a different principle.

    Most RTGs are built using plutonium-238 as their energy source. Unlike the fuel in nuclear power plants, plutonium-238 cannot sustain a fission chain reaction. Instead, it is an unstable isotope that undergoes radioactive decay.

    Radioactive decay, or nuclear decay, happens when an unstable atomic nucleus spontaneously and randomly emits particles and energy to reach a more stable configuration. This process often causes the element to change into another element, since the nucleus can lose protons.

    Plutonium-238 decays into uranium-234 and emits an alpha particle, made of two protons and two neutrons.
    NASA

    When plutonium-238 decays, it emits alpha particles, which consist of two protons and two neutrons. When the plutonium-238, which starts with 94 protons, releases an alpha particle, it loses two protons and turns into uranium-234, which has 92 protons.

    These alpha particles interact with and transfer energy into the material surrounding the plutonium, heating it up. The radioactive decay of plutonium-238 releases enough energy that the material can glow red from its own heat, and it is this heat that powers an RTG.
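    The heating described above can be estimated with a back-of-the-envelope calculation. The numbers below are standard textbook figures, not from the article: a half-life of about 87.7 years for plutonium-238 and roughly 5.59 million electron volts released per alpha decay. Under those assumptions, each gram of plutonium-238 puts out a little over half a watt of heat, continuously:

    ```python
    import math

    # Illustrative estimate of the heat released per gram of plutonium-238,
    # from its half-life and the energy of each alpha decay. The half-life
    # and decay energy are standard textbook values (assumptions here).
    HALF_LIFE_S = 87.7 * 365.25 * 24 * 3600      # Pu-238 half-life in seconds
    Q_MEV = 5.59                                 # energy per decay, MeV
    MEV_TO_J = 1.602e-13                         # joules per MeV
    AVOGADRO = 6.022e23                          # atoms per mole
    MOLAR_MASS = 238.0                           # grams per mole of Pu-238

    decay_constant = math.log(2) / HALF_LIFE_S   # fraction decaying per second
    atoms_per_gram = AVOGADRO / MOLAR_MASS
    decays_per_s_per_g = decay_constant * atoms_per_gram
    watts_per_gram = decays_per_s_per_g * Q_MEV * MEV_TO_J

    print(f"Specific thermal power: {watts_per_gram:.2f} W per gram")
    ```

    This works out to roughly 0.57 watts of heat per gram, which is why an RTG needs kilograms of fuel to supply a spacecraft with a few hundred watts.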

    The nuclear heat source for the Mars Curiosity rover is encased in a graphite shell. The fuel glows red hot because of the radioactive decay of plutonium-238.
    Idaho National Laboratory, CC BY

    Heat as power

    Radioisotope thermoelectric generators can turn heat into electricity using a principle called the Seebeck effect, discovered by German scientist Thomas Seebeck in 1821. As an added benefit, the heat from some types of RTGs can help keep electronics and the other components of a deep-space mission warm and working well.

    In its basic form, the Seebeck effect describes how two wires of different conducting materials joined in a loop produce a current in that loop when exposed to a temperature difference.

    The Seebeck effect is the principle behind RTGs.

    Devices that use this principle are called thermoelectric couples, or thermocouples. These thermocouples allow RTGs to produce electricity from the difference in temperature created by the heat of plutonium-238 decay and the frigid cold of space.
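    The thermocouple relationship can be sketched numerically. In its simplest form, each couple produces a voltage equal to its Seebeck coefficient times the temperature difference across it, and couples wired in series add their voltages. The Seebeck coefficient and the number of couples below are illustrative assumptions, not figures from the article; real RTG thermocouple materials differ in detail:

    ```python
    # Minimal sketch of the Seebeck effect as used in an RTG.
    # The coefficient and couple count are illustrative assumptions.
    SEEBECK_V_PER_K = 300e-6   # assumed ~300 microvolts per kelvin per couple
    HOT_SIDE_K = 810           # roughly 1,000 degrees Fahrenheit, per the article
    COLD_SIDE_K = 120          # space-facing side, far below zero (assumed)
    N_COUPLES = 100            # assumed series string of thermocouples

    delta_t = HOT_SIDE_K - COLD_SIDE_K           # temperature difference, K
    volts_per_couple = SEEBECK_V_PER_K * delta_t
    total_volts = N_COUPLES * volts_per_couple   # series couples add voltage

    print(f"{volts_per_couple:.3f} V per couple, {total_volts:.1f} V total")
    ```

    Each couple produces only a fraction of a volt, which is why an RTG strings a large array of them together to reach useful voltages.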

    Radioisotope thermoelectric generator design

    In a basic radioisotope thermoelectric generator, you have a container of plutonium-238, stored in the form of plutonium dioxide, often in a solid ceramic state that provides extra safety in the event of an accident. The plutonium material is surrounded by a protective layer of foil insulation to which a large array of thermocouples is attached. The whole assembly is inside a protective aluminum casing.

    An RTG has decaying material in its core, which generates heat that it converts to electricity.
    U.S. Department of Energy

    The interior of the RTG and one side of the thermocouples is kept hot – close to 1,000 degrees Fahrenheit (538 degrees Celsius) – while the outside of the RTG and the other side of the thermocouples are exposed to space. This outside, space-facing layer can be as cold as a few hundred degrees Fahrenheit below zero.

    This strong temperature difference allows an RTG to turn the heat from radioactive decay into electricity. That electricity powers all kinds of spacecraft, from communications systems to science instruments to rovers on Mars, including five current NASA missions.

    But don’t get too excited about buying an RTG for your house. With the current technology, they can produce only a few hundred watts of power. That may be enough to power a standard laptop, but not enough to play video games with a powerful GPU.

    For deep-space missions, however, those couple hundred watts are more than enough.

    The real benefit of RTGs is their ability to provide predictable, consistent power. The radioactive decay of plutonium is constant – every second of every day for decades. Over the course of about 90 years, only half the plutonium in an RTG will have decayed away. RTGs require no moving parts to generate electricity, which makes them much less likely to break down or stop working.
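    That gradual decline can be sketched as simple exponential decay. The 87.7-year half-life below is the textbook value for plutonium-238 (the article rounds it to about 90 years), and the 470-watt launch figure comes from the Voyager missions discussed later in the article:

    ```python
    # Sketch of how an RTG's heat-driven output declines over time,
    # assuming simple exponential decay of the plutonium-238 fuel.
    HALF_LIFE_YEARS = 87.7     # textbook Pu-238 half-life (assumption)
    LAUNCH_POWER_W = 470.0     # Voyager's three RTGs combined, at launch

    def power_after(years, p0=LAUNCH_POWER_W, half_life=HALF_LIFE_YEARS):
        """Output remaining after `years`, from decay alone."""
        return p0 * 0.5 ** (years / half_life)

    # Roughly 48 years after the 1977 launch:
    remaining = power_after(48)
    print(f"About {remaining:.0f} W of the original 470 W remains")
    ```

    Decay alone would leave roughly two-thirds of the launch power after nearly five decades. In practice, degradation of the thermocouples reduces the electrical output faster than the decay alone predicts, so this sketch is an upper bound.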

    Additionally, they have an excellent safety record, and they’re designed to survive their normal use and also be safe in the event of an accident.

    RTGs in action

    RTGs have been key to the success of many of NASA’s solar system and deep-space missions. The Mars Curiosity and Perseverance rovers and the New Horizons spacecraft that visited Pluto in 2015 have all used RTGs. New Horizons is traveling out of the solar system, where its RTGs will provide power where solar panels could not.

    However, no missions capture the power of RTGs quite like the Voyager missions. NASA launched the twin spacecraft Voyager 1 and Voyager 2 in 1977 to take a tour of the outer solar system and then journey beyond it.

    The RTGs on the Voyager probes have allowed the spacecraft to stay powered up while they collect data.
    NASA/JPL-Caltech

    Each craft was equipped with three RTGs, providing a total of 470 watts of power at launch. It has been almost 50 years since the launch of the Voyager probes, and both are still active science missions, collecting and sending data back to Earth.

    Voyager 1 and Voyager 2 are about 15.5 billion miles and 13 billion miles (nearly 25 billion kilometers and 21 billion kilometers) from the Earth, respectively, making them the most distant human-made objects ever. Even at these extreme distances, their RTGs are still providing them consistent power.

    These spacecraft are a testament to the ingenuity of the engineers who first designed RTGs in the early 1960s.

    Benjamin Roulston does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Radioisotope generators − inside the ‘nuclear batteries’ that power faraway spacecraft – https://theconversation.com/radioisotope-generators-inside-the-nuclear-batteries-that-power-faraway-spacecraft-248504

    MIL OSI – Global Reports

  • MIL-OSI Global: Keir Starmer promises more ‘democratic control’ of the NHS – how do other European countries do it?

    Source: The Conversation – UK – By Nick Fahy, Director of the Health and Care Research Group, RAND Europe

    Sir Keir Starmer, the UK prime minister, announced on March 13 that the government will move to abolish NHS England in the next two years. During this period, the government plans to bring its functions under the UK’s health ministry, with the aim of bringing the health service “into democratic control”. What does this mean, and what difference will it make?

    When the NHS was established in 1948, part of the aim was to make the local health problems of patients across the country the concern of the national government. The plan succeeded. Today, the NHS is politically highly important – it matters enormously to patients and the public, and has one of the largest spending budgets in the UK.

    At the same time, it is technically difficult to manage, with varied local needs and opportunities and a complex organisation that are hard, and sometimes inefficient, to manage centrally.

    Striking the balance between delivering high-quality patient care and addressing the technical complexity of doing so is a continual challenge for governments. The solution chosen as part of the 2012 health and social care reforms was to establish NHS England as an organisationally independent government body to provide technical and operational leadership for the NHS – leaving ministers insulated from those day-to-day issues and free to set an overall strategy.

    The government’s decision to abolish NHS England marks a change back to direct ministerial grip on the system. This may reflect high public concern about the NHS and pressure on its services, as well as a desire by the recently elected government to exercise more direct control over the health service.

    How does this compare to other health systems?

    The NHS has long been an unusually centralised system. Although the English NHS covers more than 55 million people, it has historically been run by central government, which this change reinforces.

    In contrast, although Spain has a similar NHS-style system, the Spanish health system is run by the 17 regional governments through their departments of health, with the largest covering 8.6 million people.

    Europe’s other large national health system, in Italy, now also has a decentralised system. The national government sets the overall principles and benefits, but the actual services are under the control of regional governments.

    Italy also has a decentralised health system.
    Massimo Todaro/Shutterstock

    These decentralised systems strike a different balance between political control and operational management, by bringing them together at a more local level.

    If the UK government was to extend its aim of bringing the NHS into democratic control by taking a similar decentralisation approach to other NHS-style systems in Europe, what would this look like?

    The NHS already has 42 integrated care systems at the local level. These already work with upper-tier local authorities, such as county councils, and are mostly aligned with their boundaries, but are under the control of central government.

    Other countries already decentralise their health systems to similar levels. In Sweden, for example, the 21 counties are responsible for financing, purchasing and providing their health services, under the democratic control of the county councillors. While there might be questions about the capacity of local government in England to take on such a role, experience from elsewhere shows that it should be possible.

    Compared with those decentralised systems, the abolition of NHS England is a relatively minor change. It puts ministers more directly in charge of the English NHS, but does not change the basic structure of the service nor its control by central government.

    Examples from other countries suggest that if the ambition is to bring the health service more into democratic control, there are options for much more profound change. This would strike a whole new balance between political control and local management.

    Tom Ling is a member of the Labour party.

    Hampton Toole and Nick Fahy do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Keir Starmer promises more ‘democratic control’ of the NHS – how do other European countries do it? – https://theconversation.com/keir-starmer-promises-more-democratic-control-of-the-nhs-how-do-other-european-countries-do-it-252313

    MIL OSI – Global Reports

  • MIL-OSI Global: Abolishing NHS England could shift power from the centre – but health service overhauls rarely go well

    Source: The Conversation – UK – By Judith Smith, Professor of Health Policy and Management, University of Birmingham

    The UK prime minister, Keir Starmer, has announced plans to abolish NHS England, the organisation that oversees and manages the NHS in England, employing 19,000 people.

    He declared he was bringing the NHS back under “democratic control” and cutting unnecessary bureaucracy by moving oversight of the NHS back into the Department of Health and Social Care (DHSC). This will reverse plans put in place by the Conservative-led coalition government in 2013 when it tried to “take the politics out of the NHS” by having NHS England as an independent body.

    The NHS is the largest public sector organisation in England, seeing 1.7 million people each day including in patients’ own homes, local GP surgeries, pharmacies and hospitals. It employs 1.7 million people, is funded largely out of general taxation, and has an annual budget of about £190 billion.

    The NHS is, however, one of the most centrally organised health systems in the world. This contrasts with many European and other countries where there is typically a national ministry of health to set strategy, with the detail of how this is implemented being left to regional and local councils, health authorities and hospitals.

    Some analysts have suggested that the NHS has become even more centrally managed in recent years, but the truth is it has always been held very close by its political masters.

    On the face of it, there are advantages to abolishing NHS England, allowing DHSC to focus on clarifying politicians’ priorities for how and on what NHS funding will be spent. These will include reducing waiting lists for operations, making it easier to get an appointment with a GP, and ensuring that emergency departments can deal quickly with patients without resorting to “corridor care”.

    In turn, local NHS organisations such as integrated care boards (who among other things organise GP, dental, pharmacy and optometry services) and NHS trusts (who run hospitals, community, mental health and ambulance services) can concentrate on making sure these policy priorities are put into practice in ways that work best for local communities.

    NHS England has a range of other important roles that will need to be reallocated, whether to an expanded DHSC or elsewhere. These include planning the training of healthcare staff, organising vaccination and screening programmes, purchasing medicines, and collating huge amounts of data about NHS activity and performance.

    The government has also announced plans to halve staffing in the 42 local integrated care boards, so any move of former NHS England roles to this level will probably only happen if these local boards merge, which now seems likely.

    The government appears therefore to have signalled another NHS management “redisorganisation” – something the NHS has suffered on a periodic basis, a consequence of its highly centralised and political nature. Research evidence is clear that management reorganisations struggle to achieve their objectives, causing instead significant distraction from the work of improving services for patients.

    In his major review of the NHS for the new Labour government in September 2024, Lord Ara Darzi – a former Labour health minister – highlighted the urgent need for more skilled and effective managers to support NHS staff in restoring and improving the service after years of economic austerity and the challenges of the pandemic. This seems to run counter to recent announcements about “cutting bureaucracy”.

    With careful planning, there is, however, potential for the abolition of NHS England to lead to a slimmer DHSC (more akin to some of its European counterparts) with a smaller number of well-resourced and managed integrated care boards that could effectively steer, support and monitor local NHS trusts and primary care services.

    In 2002, Alan Milburn, then secretary of state for health in Tony Blair’s government, issued a white paper called Shifting the Balance of Power Within the NHS. Milburn is now a leading figure in the Starmer government’s health team, so it is perhaps not surprising that we have these new plans to slim the policy centre, shift power and decision-making more locally, and enable stronger accountability to politicians and the public.

    What is likely to happen?

    What will matter as much as what is done is how these changes are made. The government has Lord Darzi’s clear and comprehensive diagnosis of the NHS’s problems. It now needs to prioritise what should be done first and what can wait, and has made a good start on this with its recent planning guidance to the NHS.

    What will be much more difficult will be to decide exactly how to reduce and then abolish NHS England – doing this in a way that ensures important roles are moved smoothly to DHSC, integrated care boards and NHS trusts.

    History is not encouraging. There is a big risk that NHS managers will find themselves focusing too much attention on handling a major reorganisation when they (and patients) would rather they concentrate on improving services.

    The government clearly wants to hold on to setting policy direction for the NHS while letting go of the detail of implementation to local level. But ultimately, it will be held to account by a population impatient for improvements to NHS services.

    Judith Smith receives funding from the National Institute for Health and Care Research for research and evaluation of health services. She has been funded by the Health Foundation to provide expert primary care policy advice. Judith is Trustee and Chair of Health Services Research UK and Director of Health Services Research with Birmingham Health Partners. She is a Senior Associate of the Nuffield Trust.

    ref. Abolishing NHS England could shift power from the centre – but health service overhauls rarely go well – https://theconversation.com/abolishing-nhs-england-could-shift-power-from-the-centre-but-health-service-overhauls-rarely-go-well-252240

    MIL OSI – Global Reports

  • MIL-OSI Global: Waiting lists, crumbling buildings, staff burnout: five years on, COVID is still hurting the financial health of the NHS

    Source: The Conversation – UK – By Catia Nicodemo, Professor of Health Economics, Brunel University of London

    The NHS was hit hard by COVID. And no amount of appreciative clapping or painted rainbows could distract from the vulnerabilities which were exposed by the pandemic – or the challenges it created.

    Some of those challenges – like the staggering backlog in patient care, or the huge mental and physical toll experienced by staff – will take years to overcome.

    And anyone compelled to attend a hospital in the UK at the moment can see the evidence at first hand. Wards are very busy and staff are overstretched.

    This is part of the legacy of a fast-spreading virus which killed 232,112 people in the UK and left an estimated 2 million suffering from the effects of long-COVID. It demanded urgent action from hospitals and health workers and brought immediate and widespread disruption to routine care, with appointments for elective surgery, cancer screenings and chronic disease management all delayed.

    One 2024 study I worked on analysed appointment cancellations for cancer patients during the pandemic, and found that they waited an average of 19 days longer than before for rescheduled appointments. (Mortality rates remained stable though, indicating that the NHS effectively prioritised the most urgent cases.)

    This kind of disruption has left the healthcare system facing a monumental backlog, with treatment waiting lists soaring to record levels. According to the British Medical Association, there are over 7.5 million people now on waiting lists (compared to 4.5 million before the pandemic) – and those waiting times are longer.

    Cutting this waiting list is apparently one of the prime minister’s priorities. But there is no easy fix.

    The basic infrastructure of the NHS – the buildings, IT equipment, offices – is creaking, with outdated facilities, insufficient beds and a lack of specialised equipment. And one study suggests that capital funding – investment in assets that will be used for more than a year – for NHS trusts in England is down by 21% over the past five years.

    This is primarily because the Department of Health and Social Care has been diverting long-term investment funds to cover day-to-day operational costs such as staff salaries and medicines.

    Since 2019, £500 million of capital investment has been cancelled or postponed. And while overall NHS budgets have been growing, the increased spending has often been absorbed by inflation, rising demand and the need to address immediate pressures. This leaves little for infrastructure upgrades, new equipment or technological advancements.

    The Health Foundation has warned that the lack of a long-term capital funding strategy could further jeopardise patient care in the future. Many NHS facilities no longer meet the needs of a modern health service, with some hospitals requiring complete refurbishment or replacement rather than just repairs.

    And of course, treating patients is not just about equipment and buildings. Nurses and doctors are under extreme pressure, facing unprecedented levels of stress, burnout and trauma. A recent survey revealed that one in three NHS doctors is experiencing extreme tiredness, impairing their ability to treat patients effectively.

    NHS key workers wave from inside Chelsea and Westminster Hospital, May 2020.
    Guy William/Shutterstock

    A similar number said their ability to practise medicine may have been negatively affected by fatigue, with some even reporting cases of patient harm or near-miss incidents.

    Stressed NHS

    And although the NHS workforce has actually grown over the past five years, it has not been sufficient to reduce waiting lists, deal with growing demand, or improve staff morale. Anxiety, stress and depression accounted for over 624,300 working days lost in one month last year.

    Without a healthy and motivated workforce, the NHS’s recovery efforts will remain severely hampered. Other contributing factors include increased demand for healthcare services, partly due to an ageing population and the growing prevalence of chronic conditions.

    To address these challenges, the NHS needs a modernised approach to patient care. Research suggests that technology, including telemedicine (online consultations) and AI-driven diagnostics, could streamline services and reduce waiting times.

    Other possible steps include the expansion of community diagnostic centres to ease access to tests and screenings and to improve efficiency.

    Overall, the pandemic has underscored the critical importance of a robust and resilient healthcare system. As the NHS navigates its own path to recovery, it must prioritise both immediate solutions to the backlog crisis and long-term strategies. This will require significant investment, but also a commitment to innovation and the wellbeing of healthcare workers.

    The road ahead for the NHS will be tricky, but with the right measures in place, it could emerge stronger and more resilient than ever. The lessons learned from COVID should serve as a catalyst for transformative change, ensuring that the UK’s healthcare system is better prepared to face whatever the future may hold.

    Catia Nicodemo does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Waiting lists, crumbling buildings, staff burnout: five years on, COVID is still hurting the financial health of the NHS – https://theconversation.com/waiting-lists-crumbling-buildings-staff-burnout-five-years-on-covid-is-still-hurting-the-financial-health-of-the-nhs-251637


  • MIL-OSI Global: People in this career are better at seeing through optical illusions

    Source: The Conversation – UK – By Martin Doherty, Associate Professor in Psychology, University of East Anglia

    fran_kie/Shutterstock

    Optical illusions are great fun, and they fool virtually everyone. But have you ever wondered if you could train yourself to unsee these illusions? Our latest research suggests that you can.

    Optical illusions tell a lot about how people see things. For example, look at the picture below.

    The Ebbinghaus illusion.
    Hermann Ebbinghaus

    The two orange circles are identical, but the one on the right looks bigger. Why?

    We use context to figure out what we are seeing. Something surrounded by smaller things is often quite big. Our visual system takes context into account, so it judges the orange circle on the right as bigger than the one on the left.

    This illusion was discovered by German psychologist Hermann Ebbinghaus in the 19th century. This and similar geometrical illusions have been studied by psychologists ever since.

    How much you are affected by illusions like these depends on who you are. For example, women are more affected by the illusion than men – they see things more in context.

    Young children do not see illusions at all. To a five-year-old, the two orange circles look the same. It takes time to learn how to use context cues.

    Neurodevelopmental conditions similarly affect illusion perception. People with autism or schizophrenia are less likely to see illusions. This is because these people tend to pay greater attention to the central circle, and less to the surrounding ones.

    The culture you grew up in also affects how much you attend to context. Research has found that east Asian perception is more holistic, taking everything into account. Western perception is more analytic, focusing on central objects.

    These differences would predict greater illusion sensitivity in east Asia. And true enough, Japanese people seem to experience much stronger effects than British people in this kind of illusion.

    This may also depend on environment. Japanese people typically live in urban environments. In crowded urban scenes, being able to keep track of objects relative to other objects is important. This requires more attention to context. Members of the nomadic Himba tribe in the almost uninhabited Namibian desert do not seem to be fooled by the illusion at all.

    Gender, developmental, neurodevelopmental and cultural differences are all well established when it comes to optical illusions. However, what scientists did not know until now is whether people can learn to see illusions less intensely.

    A hint came from our previous work comparing mathematical and social scientists’ judgements of illusions (we work in universities, so we sometimes study our colleagues). Social scientists, such as psychologists, see illusions more strongly.

    Researchers like us have to take many factors into account. Perhaps this makes us more sensitive to context even in the way we see things. But also, it could be that your visual style affects what you choose to study. One of us (Martin) went to university to study physics, but left with a psychology degree. As it happens, his illusion perception is much stronger than normal.

    Training your illusion skills

    Despite all these individual differences, researchers have always thought that you have no choice over whether you see the illusion. Our recent research challenges this idea.

    Radiologists need to be able to rapidly spot important information in medical scans. Doing this often means they have to ignore surrounding detail.

    Radiologists train extensively, so does this make them better at seeing through illusions? We found it does. We studied 44 radiologists, compared to over 100 psychology and medical students.

    Below is one of our images. The orange circle on the left is 6% smaller than the one on the right. Most people in the study saw it as larger.

    The orange circle on the left is actually smaller.
    Radoslaw Wincza, CC BY-NC-ND

    Here is another image. Most non-radiologists still saw the left one as bigger. Yet, it is 10% smaller. Most radiologists got this one right.

    Does the left orange circle look bigger or smaller to you?
    Radoslaw Wincza, CC BY-NC-ND

    It was not until the difference was nearly 18%, as shown in the image below, that most non-radiologists saw through the illusion.

    Most people get this one right.
    Radoslaw Wincza, CC BY-NC-ND

    Radiologists are not entirely immune to the illusion, but are much less susceptible. We also looked at radiologists just beginning training. Their illusion perception was no better than normal. It seems radiologists’ superior perception is a result of their extensive training.

    According to current theories of expertise, this shouldn’t happen. Becoming an expert in chess, for example, makes you better at chess but not anything else. But our findings suggest that becoming an expert in medical image analysis also makes you better at seeing through some optical illusions.

    There is plenty left to find out. Perhaps the most intriguing possibility is that training on optical illusions can improve radiologists’ skills at their own work.

    So, how can you learn to see through illusions? Simple. Just five years of medical school, then seven more of radiology training, and this skill can be yours too.

    Martin Doherty received funding from the British Academy/Leverhulme Trust who partially supported this work. He continues to receive funding from the Leverhulme Trust.

    Radoslaw Wincza does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. People in this career are better at seeing through optical illusions – https://theconversation.com/people-in-this-career-are-better-at-seeing-through-optical-illusions-251984


  • MIL-OSI Global: China’s dwindling marriage rate is fuelling demand for brides trafficked from abroad

    Source: The Conversation – UK – By Ming Gao, Research Scholar of East Asia Studies in History Division, Lund University

    Fewer people in China are opting to get married. imtmphoto / Shutterstock

    China’s marriage rate is in steep decline. There were 6.1 million marriage registrations nationwide in 2024, down from 7.7 million the previous year. This decline has prompted Chen Songxi, a Chinese national political adviser, to propose lowering the legal marriage age from 22 to 18.

    The drop in China’s marriage rate has been driven by a combination of factors. These include increased economic pressures, evolving social attitudes towards marriage, and higher levels of education.

    Urban Chinese women, in particular, are increasingly pushing back against traditional gender expectations, which emphasise marriage and childbearing as essential life milestones. Rising living costs are also making it increasingly difficult for many young people to afford to get married.

    At the same time, China is grappling with a longstanding gender imbalance, a legacy of the country’s sweeping one-child policy and cultural preference for male children. In the early 2000s, when the imbalance was at its peak, China’s sex ratio at birth reached 121 boys for every 100 girls; in some provinces, more than 130 boys were born for every 100 girls.

    The gender imbalance is particularly pronounced among those born in the 1980s, a generation I belong to. This is due to the widespread use of ultrasound technology from the mid-1980s onward, which offered parents the ability to terminate pregnancies if their child was female.

    Unmarried men in China have become part of the so-called “era of leftover men” (shengnan shidai in Chinese). This is an internet term that loosely refers to the period between 2020 and 2050, when an estimated 30 million to 50 million Chinese men are expected to be unable to find a wife.

    A Chinese couple walk through Beijing with their child in 2015.
    TonyV3112 / Shutterstock

    The conundrum is that many of these “leftover” men want to marry – I know this firsthand. Some of my peers from primary and secondary school have been desperately searching for a wife, but have struggled to find a spouse. A widely used phrase in China, “difficulty in getting married” (jiehun nan), encapsulates this struggle.

    Unable to find a domestic spouse, some Chinese men have turned to “purchasing” foreign brides. The growing demand for these brides, particularly in rural areas, has fuelled a rise in illegal marriages. This includes marriages involving children and women who have been trafficked into China primarily from neighbouring countries in south-east Asia.

    According to a Human Rights Watch report released in 2019 on bride trafficking from Myanmar to China “a porous border and lack of response by law enforcement agencies on both sides [has] created an environment in which traffickers flourish”.

    The Chinese government has now pledged to crack down on the industry. In March 2024, China’s Ministry of Public Security launched a campaign against the transnational trafficking of women and children, calling for enhanced international cooperation to eliminate these crimes.

    ‘Purchased’ foreign brides

    These marriages are often arranged through informal networks or commercial agencies, both of which are illegal according to China’s State Council.

    Human Rights Watch says that women and girls in neighbouring countries are typically tricked by brokers who promise well-paid employment in China. They find themselves at the mercy of the brokers once they reach China, and are sold for between US$3,000 (£2,300) and US$13,000 to Chinese men.

    Determining the extent of illegal cross-border marriages in China is challenging due to the clandestine nature of these activities. But the most recent data from the UK’s Home Office suggests that 75% of Vietnamese human-trafficking victims were smuggled to China, with women and children making up 90% of cases.

    The Woman from Myanmar, an award-winning documentary from 2022, follows the story of a trafficked Myanmar woman who was sold into marriage in China. The film exposes the harsh realities faced by many trafficked brides.

    It captures not only the coercion and abuse many of these women endure, but also their struggle for autonomy and survival in a system that treats them as commodities. Larry, a trafficked woman who features in the documentary, explained that she saw her capacity to bear children as her pathway to survival.

    The Chinese authorities constantly warn of scams involving brides purchased from abroad. In November 2024, for example, two people were prosecuted over their involvement in an illegal cross-border matchmaking scheme. Chinese men were lured into extremely expensive “marriage tours” abroad with promises of “affordable” foreign wives.

    There have also been cases where the undocumented brides themselves have disappeared with large sums of money before marriage arrangements are completed.

    Most of the foreign brides are trafficked into China from neighbouring countries in south-east Asia.
    MuchMania / Shutterstock

    China’s marriage crisis has far-reaching implications for the country’s demographic future. A shrinking and ageing population is often cited as the greatest challenge for Chinese economic growth and social stability. Beijing has resisted this characterisation, saying that constant technological innovations will continue to drive economic growth.

    The labour force is undoubtedly important when it comes to economic growth. But according to Justin Lin Yifu, a member of the Chinese People’s Political Consultative Conference advisory body, what matters more is effective labour – the product of both the quantity and quality of the labour force.

    China has increased its investment in education continually over recent years in anticipation of future challenges surrounding its ageing population.

    But, notwithstanding this, an even greater concern is the large number of leftover men, as this could pose a serious threat to social stability. Studies have found a positive correlation between high male-to-female sex ratios and crime rates both in China and India, where there is also a significant gender imbalance.

    In China, research has found that skewed male sex ratios have accounted for around 14% of the rise in crime since the mid-1990s. And in India, modelling suggests that a 5.5% rise in the male sex ratio would increase the odds of unmarried women being harassed by more than 20%.

    The question of who China’s leftover men will marry is becoming a pressing issue for Beijing. The government’s response will shape the country’s future for decades to come.

    Ming Gao receives funding from the Swedish Research Council. This research was produced with support from the Swedish Research Council grant “Moved Apart” (nr. 2022-01864). Ming Gao is a member of Lund University Profile Area: Human Rights.

    ref. China’s dwindling marriage rate is fuelling demand for brides trafficked from abroad – https://theconversation.com/chinas-dwindling-marriage-rate-is-fuelling-demand-for-brides-trafficked-from-abroad-250860
