Category: The Conversation

  • MIL-OSI Analysis: What a 19th-century atlas teaches me about marine ecosystems

    Source: The Conversation – UK – By Ruth H. Thurstan, Associate Professor in Marine and Historical Ecology, University of Exeter

    Ruth Thurstan holds the Piscatorial Atlas Credit: Lee Raby, CC BY-NC-ND

    What stands out most about the book I’m carrying under my arm, as I meander through the exhibits at the National Maritime Museum Cornwall in Falmouth, is its awkwardly large size. The Piscatorial Atlas, authored by Ole Theodor Olsen and published in 1883, contains 50 beautifully illustrated charts of the seas around Great Britain. These show the locations exploited at that time for a variety of fish species, alongside the typical vessels or fishing gear used. This information was collated from fishermen in the decade before the atlas was published.

    The atlas isn’t a book made for travel. Luckily, it can be readily admired online. But leafing through its carefully curated pages, which contain the collective knowledge of so many people who have long since passed away, feels special, and is why I chose it to show to the programme producers today.

    I’ve always loved old books, but I never imagined they would become such an integral part of my work. My interest in marine historical ecology – the use of historical archives to make sense of how our ocean ecosystems are changing – started 18 years ago when I read The Unnatural History of the Sea by Professor Callum Roberts. The book details how historical perspectives provide critical insights into the deteriorating health of our seas.



    Local science, global stories.

    This article is part of a series, Secrets of the Sea, exploring how marine scientists are developing climate solutions.

    In collaboration with the BBC, Anna Turns travels around the West Country coastline to meet ocean experts making exciting discoveries beneath the waves.


    In recent decades, fishery declines, degradation of coastal habitats and the loss of large predators show that exploitation, coastal development, pollution and climate change are exacting their toll on marine ecosystems.

    Yet information extracted from old books, reports and even newspaper articles shows us that many of these issues started long ago. We have exploited the seas for thousands of years, but in Britain, the 19th-century introduction of steam power was a watershed moment: a point in time when our ability to exploit the seas abruptly and dramatically increased. My research aims to uncover how our use of this technological advance – and those that followed – has affected the functioning of marine ecosystems and their continued ability to support our needs.

    Transformation of the seas

    These negative effects are profound. Towards the end of the Piscatorial Atlas is a page dedicated to the native oyster (Ostrea edulis). It is my favourite of the charts. A gradation of colour indicates where oysters were found in abundance at that time. Colour surrounds the coastal seas of Britain and further afield. Strikingly, there is an enormous area of oyster ground delineated in the southern North Sea.

    Today, the native oyster ecosystem is defined as collapsed. The decline of nearshore oyster reefs was well underway by the time the Piscatorial Atlas was published, and the loss of the large North Sea oyster ground – so clear on Olsen’s chart – swiftly followed. As those with knowledge of these once prolific grounds passed away, the memory of the vast oyster habitats was lost. This problem was further compounded by science. In the late 19th century, studies of oyster grounds were rare, and scientific surveys almost always occurred after the habitat had been destroyed. Low densities of oysters became the scientific norm.

    Recent research I was involved in with a team of experts used historical sources from across Europe to show just how much change has occurred. We showed that reported native oyster habitat once covered tens of thousands of square kilometres and was a dominant feature of some coastal ecosystems. Multiple layers of old oyster shell, consolidated by a layer of living oysters, provided raised reefs that supported a diverse range of species.

    The economic and cultural significance of oysters created a more visible historical record than many other species. Yet, the history of marine declines is not limited to oysters. Historical sources quote fishermen concerned about the expansion of trawling and fishing effort. They described the efficiency with which sail trawlers and early steam-powered vessels extracted fish and non-target species from the seafloor.

    The impact of land-based activities, such as sediment and pollutant run-off and coastal development, also increased as societies industrialised. These placed marine ecosystems under further pressure, yet regulations governing sustainable management of our seas failed to keep up. These influences, coupled with a collective societal amnesia regarding what we have lost, facilitated the hidden transformation of marine ecosystems.

    Using old books and other deep-time approaches, researchers are increasingly making these transformations visible. Reading the words of people from centuries ago, we learn that their experiences of marine ecosystems were often fundamentally different from our own. Understanding the scale of this difference, where species and habitats existed, and in what abundances, can help make the case for their conservation and restoration.

    People have always made use of the seas. For me, looking to the past isn’t just about understanding what we have lost, it is also about taking positive lessons from the past, such as the myriad ways in which societies benefited from the presence of healthy marine ecosystems. Heeding these lessons from history helps us visualise the full range of possible futures available to us, including the many benefits that more ambitious conservation and restoration of our ocean ecosystems could bring, should we choose this path.

    Ruth H. Thurstan works for The University of Exeter. She receives funding from the Convex Seascape Survey and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 856488).

    ref. What a 19th-century atlas teaches me about marine ecosystems – https://theconversation.com/what-a-19th-century-atlas-teaches-me-about-marine-ecosystems-251184

    MIL OSI Analysis

  • MIL-OSI Analysis: Palestine Action: what it means to proscribe a group, and what the effects could be

    Source: The Conversation – UK – By Brian J. Phillips, Reader (Associate Professor) in International Relations, University of Essex

    The UK’s home secretary, Yvette Cooper, plans to proscribe the protest group Palestine Action under anti-terror law. This move, if approved by parliament, would criminalise the group’s existence, making it a crime to be a member of the group or to support it in any way.

    Palestine Action emerged in 2020, first drawing attention when its members broke into and spray painted red the UK headquarters of Elbit Systems, an Israeli defence contractor. In the years since, the group has sprayed paint, blockaded or otherwise vandalised a number of institutions it sees as complicit in Israeli military actions, such as a Lockheed Martin facility and two Barclays branches.

    The group’s website describes it as a “direct action movement committed to ending global participation in Israel’s genocidal and apartheid regime”.

    The term “direct action” has historically been used for tactics ranging from legal protest to traffic obstruction and property damage, such as animal rights activists smashing laboratory equipment used for experiments on animals. Or, more recently, the roadblocks carried out by Extinction Rebellion.

    Palestine Action’s campaign has caused substantial property damage. Five activists were jailed after a 2022 protest at a Glasgow weapons equipment factory that caused more than an estimated £1 million in damage due to pyrotechnics thrown inside the building.

    Activists are also accused of causing £1 million in damages to Elbit property near Bristol in 2024. Eighteen face charges of aggravated burglary and criminal damage, 16 of whom also face a charge of violent disorder. Nine have pleaded not guilty, while others have not yet entered a plea. During the Bristol attack, one person was accused of assaulting police officers with a sledgehammer, and has pleaded not guilty to causing grievous bodily harm with intent.

    The group’s recent spray-painting of two military jets at RAF Brize Norton – reportedly causing millions of pounds in damage, combined with the military nature of the target – seems to have been the breaking point for the home secretary.

    The question is whether all this makes the group a terrorist organisation.

    The terrorist list criteria

    The UK’s list of proscribed groups currently contains 81 organisations, from radical Islamists such as al-Qaida to neo-Nazis such as the Base.

    The legislation behind the list, the Terrorism Act 2000, imposes serious punishments for proscribed organisations’ members or supporters, from a fine to a maximum sentence of 14 years in prison. Even wearing clothing or publishing an image supporting a proscribed group can be punished by up to six months in prison or a fine of up to £5,000.

    For a group to be proscribed, it needs to be determined by the secretary of state to be “concerned in terrorism”, basically meaning committing or planning terrorist acts. The definition of terrorism is long and legalistic, but is, essentially, the politically-motivated use or threat of actions to intimidate the government or public through violence or destruction, including “serious damage to property”.

    This latter justification, serious property damage, has been invoked by the home secretary in discussing Palestine Action’s planned proscription. So, technically, Palestine Action appears to meet the criteria.

    But there are a variety of groups carrying out serious property damage that have not (yet) been proscribed under anti-terrorism law. Following the same logic, the government could theoretically proscribe Extinction Rebellion and other groups that might not be widely thought of as terrorist organisations.

    Whether it makes sense to proscribe the group, however, is a matter of debate. Proscribing Palestine Action on the basis of its alleged property damage would set a precedent in legally declaring that this type of direct action – vandalism – is considered significant enough to invoke the Terrorism Act in this way.

    Palestine Action is different in an important way from currently proscribed terrorist organisations.

    In Palestine Action’s five years of attacks, it has never killed anyone, or apparently attempted to do so. There have, though, been several injuries allegedly associated with the group. Two people were charged with assaulting an emergency worker at a protest – after the intention to proscribe the group was announced. At some of the group’s actions, members have been charged with assaulting security guards.

    In her statement to parliament, Cooper cited the group’s “impact on innocent members of the public fleeing for safety and subjected to violence”. But the primary focus of the government’s intention to proscribe the group seems to be around serious damage to property, particularly related to national security.

    Many currently proscribed groups have killed thousands of people, from al-Qaida on September 11 or 7/7 to groups like Hamas or Hezbollah attacking Israelis or Boko Haram’s killing sprees in Nigeria.

    There are some less violent proscribed groups. For example, UK-based Islamist group al-Ghurabaa (and the related Saved Sect, also known as al-Muhajiroun) have not been clearly linked to actual violence, although the group is accused of glorifying violence, for example celebrating the 9/11 attacks. It has also apparently inspired terrorist attacks.

    The government’s choice to start using serious property damage as sufficient criteria for terrorist designation would be a substantial change in how anti-terrorism law is applied.

    What happens next?

    If Palestine Action were to be proscribed, the consequences could be substantial.

    Since any support of the group would be a crime, a protest in support of the group – like the one that took place on June 23 – could lead to thousands of arrests. If supporters failed to turn out, and the members stopped participating out of fear, it could lead to the end of the group.

    Or the group might shift to strictly legal or less damaging direct actions, like permitted marches or blockades. This would be a clear victory for the government.

    An ultimate goal of proscription is to keep dissident groups protesting legally. It sometimes works. Al-Muhajiroun and other local groups seemingly often tried to walk the fine line of being as extreme as possible, while staying “just within the law”.

    It is also possible that current Palestine Action members form renamed groups and carry on with criminal direct actions. Fragmenting and renaming groups is a common response to proscription, as we have seen with al-Ghurabaa, and with armed groups abroad like Lashkar-e-Taiba, as my own research with my colleague Muhammad Feyyaz has shown.

    This results in counter-terrorism officials playing Whac-A-Mole, frequently updating legislation with aliases and chasing many smaller groups or a broader movement instead of one organisation.

    Overall, the government might be legally justified to proscribe Palestine Action. What parliament must decide, however, is if the group poses enough of a threat to warrant this change to precedent. And officials should think about whether the action is likely to bring about the desired consequences, or if it could radicalise supporters into more violent action.

    Brian J. Phillips works on a research project that receives funding from the Economic and Social Research Council.

    ref. Palestine Action: what it means to proscribe a group, and what the effects could be – https://theconversation.com/palestine-action-what-it-means-to-proscribe-a-group-and-what-the-effects-could-be-259619


  • MIL-OSI Analysis: Labour’s disability cuts rebellion: a former government whip asks, how did Keir Starmer not see this coming?

    Source: The Conversation – UK – By Tony McNulty, Lecturer/Teaching Fellow, British Politics and Public Policy, Queen Mary University of London

    Under pressure. Flickr/UK Parliament, CC BY-NC-ND

    The government has promised to make major concessions to its universal credit and personal independence payment bill after a large-scale and very public rebellion by Labour MPs threatened to derail a vote due on July 1.

    The Commons order paper published on June 26 revealed that 126 Labour MPs had signed an amendment opposing a second reading for the bill, which proposes restricting disability benefits to levels they find unacceptable. Cleverly, the amendment stated that they accept “the need for the reform of the social security system”, but then listed a plethora of reasons why they declined to give the bill a second reading.

    Many of these reasons related to the government’s own assessment of the impact of the bill. It openly admits, for example, that an estimated 250,000 people, including 50,000 children, would be pushed into poverty by the changes being made to the social security system.


    Want more politics coverage from academic experts? Every week, we bring you informed analysis of developments in government and fact check the claims being made.

    Sign up for our weekly politics newsletter, delivered every Friday.


    Faced with the possibility of losing a vote to his own MPs in the week marking the first anniversary of his arrival in Downing Street, prime minister Keir Starmer is promising to make concessions. These reportedly include exempting people currently receiving disability benefits from the changes.

    But whether or not this is enough to stop the rebellion, significant damage has been done. Securing the second reading on half-promised and lukewarm concessions that cannot be sustained simply stores up future strife.

    Collision course

    How did the government reach a position where it was at risk of losing a vote on one of its key bills in the week in which it celebrates a year in office? Why has it been pushing a bill so obviously lacking in support among its own MPs? Why has no-one rolled with the political pitch and controlled the narrative?

    This is not a muscle flexing exercise of the kind seen in December 1997, when Labour sought to show how tough it could be by cutting benefits for lone parents. It is not a macho attempt to see off a resurgent left flank, because effectively there isn’t one. The troublesome hard left is now tiny. Nor is it a putative rebellion that can be dismissed as dominated by the usual suspects. It is a rebellion of the mainstream core of the backbench parliamentary Labour party (PLP). Among the 126 MPs openly speaking out against the bill, 11 are Labour select committee chairs and 62 of them were only elected last year. In short, these are not the usual suspects. Their complaints cannot be readily dismissed.

    There were allegedly noises off from some whips suggesting this might be a confidence issue – implying that the government could be in trouble, so pressure is being piled on rebels to withdraw or risk bringing down the government. I was a government whip from 1999 to 2002, and I can attest that no whip should be running around declaring this a potential “confidence vote”. And no MP should believe that it is. It is not. Were there any truth in these rumours, it would indicate a whips’ office either vastly inexperienced, overconfident and arrogant, or simply grossly incompetent and panicked. Both the chief whip and the No.10 political operation will come under intense scrutiny whatever happens now. How did they not see this coming?

    The truth is that the only serious option at this point should be to bury the bill. It should be pulled before the vote and resurrected in the context of developing an anti-poverty strategy, including a child poverty alleviation plan. It might be that a sufficient number of “rebel signatories” are persuaded to let the second reading happen with a promise of further changes building on the concessions already announced, but this does not mean a safe passage later in the process. Many of the signatories will have already been disheartened and worried by the scrapping of the winter fuel allowance and the continuation of the two-child benefit limit. They may have acquiesced on the latter and pocketed the change in policy on the former, but their disquiet and anger has not gone away.

    The government should never have been in a position of seriously considering pushing the bill through hoping it will secure Conservative support for its second reading. To do so would seriously threaten if not Starmer’s position, then certainly the position of the work and pensions secretary Liz Kendall – and even perhaps that of the chancellor, Rachel Reeves. All three will still emerge from this week damaged in some fashion.

    Rebellions such as this can take on a dynamic and life of their own and are likely to grow rather than diminish. Some 106 Labour MPs signed the amendment initially – only to be joined by more in short order. Backbenchers will have been worried about being asked “what did you do in the war?” by their grassroots members had they not enlisted their support.

    There is also a danger that once blooded by rebellion, some of the 120 plus MPs will get a taste for it – and that spells a real danger for the government, even one with a majority of 165.

    Either way, the government, which was relying on the bill to make £5bn worth of savings that would supposedly obviate the need for tax rises in the autumn, is going to have to somehow salvage both its economic and its political strategy in the wake of this crisis – and start to take its backbenchers more seriously.

    It’s not how anyone would have wanted to mark a year in office. Happy birthday, one and all.

    This article includes links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

    Tony McNulty is member of the Labour Party

    ref. Labour’s disability cuts rebellion: a former government whip asks, how did Keir Starmer not see this coming? – https://theconversation.com/labours-disability-cuts-rebellion-a-former-government-whip-asks-how-did-keir-starmer-not-see-this-coming-259856

    MIL OSI Analysis

  • MIL-OSI Analysis: Labour’s disability cuts rebellion: a former government whip asks, how did Keir Starmer not see this coming?

    Source: The Conversation – UK – By Tony McNulty, Lecturer/Teaching Fellow, British Politics and Public Policy, Queen Mary University of London

    Under pressure. Flickr/UK Parliament, CC BY-NC-ND

    The government has promised to make major concessions to its universal credit and personal independence payment bill after a large-scale and very public rebellion by Labour MPs threatened to derail a vote due on July 1.

    The Commons order paper published on June 26 revealed that 126 Labour MPs had signed an amendment opposing a second reading for the bill, which proposes restricting disability benefits to levels they find unacceptable. Cleverly, the amendment stated that they accept “the need for the reform of the social security system” but they then listed a plethora of reasons as to why they declined to give the bill a second reading when it is due for a vote on July 1.

    Many of these reasons related to the government’s own assessment of the impact of the bill. It openly admits, for example, that an estimated 250,000 people, including 50,000 children, would be pushed into poverty by the changes being made to the social security system.



    Faced with the possibility of losing a vote to his own MPs in the week marking the first anniversary of his arrival in Downing Street, prime minister Keir Starmer is promising to make concessions. These reportedly include exempting people currently receiving disability benefits from the changes.

    But whether or not this is enough to stop the rebellion, significant damage has been done. Securing the second reading on half-promised and lukewarm concessions that cannot be sustained simply stores up future strife.

    Collision course

    How did the government reach a position where it was at risk of losing a vote on one of its key bills in the week in which it celebrates a year in office? Why has it been pushing a bill so obviously lacking in support among its own MPs? Why has no-one rolled with the political pitch and controlled the narrative?

    This is not a muscle flexing exercise of the kind seen in December 1997, when Labour sought to show how tough it could be by cutting benefits for lone parents. It is not a macho attempt to see off a resurgent left flank, because effectively there isn’t one. The troublesome hard left is now tiny. Nor is it a putative rebellion that can be dismissed as dominated by the usual suspects. It is a rebellion of the mainstream core of the backbench parliamentary Labour party (PLP). Among the 126 MPs openly speaking out against the bill, 11 are Labour select committee chairs and 62 of them were only elected last year. In short, these are not the usual suspects. Their complaints cannot be readily dismissed.

    There were allegedly noises off from some whips suggesting this might be a confidence issue – implying that the government could be in trouble so pressure is being piled on rebels to withdraw or risk bringing down the government. I was a government whip from 1999 to 2002, and I can attest that no whip should be running around declaring this a potential “confidence vote”. And no MP should believe that it is. It is not. Were there to be any truth in these rumours then it indicates a whips’ office either vastly inexperienced, overconfident and arrogant, or simply grossly incompetent and panicked. Both the chief whip and the No.10 political operation will come under intense scrutiny whatever happens now. How did they not see this coming?

    The truth is that the only serious option at this point should be to bury the bill. It should be pulled before the vote and resurrected in the context of developing an anti-poverty strategy, including a child poverty alleviation plan. It might be that a sufficient number of “rebel signatories” are persuaded to let the second reading happen with a promise of further changes building on the concessions already announced, but this does not mean a safe passage later in the process. Many of the signatories will have already been disheartened and worried by the scrapping of the winter fuel allowance and the continuation of the two-child benefit limit. They may have acquiesced on the latter and pocketed the change in policy on the former, but their disquiet and anger have not gone away.

    The government should never have been in a position of seriously considering pushing the bill through hoping it will secure Conservative support for its second reading. To do so would seriously threaten if not Starmer’s position, then certainly the position of the work and pensions secretary Liz Kendall – and even perhaps that of the chancellor, Rachel Reeves. All three will still emerge from this week damaged in some fashion.

    Rebellions such as this can take on a dynamic and life of their own and are likely to grow rather than diminish. Some 106 Labour MPs signed the amendment initially – only to be joined by more in short order. Backbenchers will have worried about being asked “what did you do in the war?” by their grassroots members had they not added their names.

    There is also a danger that once blooded by rebellion, some of the 120-plus MPs will get a taste for it – and that spells a real danger for the government, even one with a majority of 165.

    Either way, the government, which was relying on the bill to make £5bn worth of savings that would supposedly obviate the need for tax rises in the autumn, is going to have to somehow salvage both its economic and its political strategy in the wake of this crisis – and start to take its backbenchers more seriously.

    It’s not how anyone would have wanted to mark a year in office. Happy birthday, one and all.

    This article includes links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

    Tony McNulty is a member of the Labour Party

    ref. Labour’s disability cuts rebellion: a former government whip asks, how did Keir Starmer not see this coming? – https://theconversation.com/labours-disability-cuts-rebellion-a-former-government-whip-asks-how-did-keir-starmer-not-see-this-coming-259856

    MIL OSI Analysis

  • MIL-OSI Analysis: Could the first images from the Vera Rubin telescope change how we view space for good?

    Source: The Conversation – UK – By Professor Manda Banerji, Professor of Astrophysics, School of Physics & Astronomy, University of Southampton

    We are entering a new era of cosmic exploration. The new Vera C Rubin Observatory in Chile will transform astronomy with its extraordinary ability to map the universe in breathtaking detail. It is set to reveal secrets previously beyond our grasp. Here, we delve into the first images taken by Rubin’s telescope and what they are already showing us.

    These images vividly showcase the unprecedented power that Rubin will use to
    revolutionise astronomy and our understanding of the universe. Rubin is truly transformative, thanks to its unique combination of sensitivity, vast sky area coverage and exceptional image quality.

    These pictures powerfully demonstrate those attributes. They reveal not only bright objects in exquisite detail but also faint structures, both near and far, across a large area of sky.

    Cosmic nurseries – nebulae in detail

    The stunning pink and blue clouds in this image are the Lagoon (lower left) and Trifid (upper right) nebulae. The word nebula comes from the Latin for cloud, and these giant clouds are truly enormous – so vast it takes light decades to travel across them. They are stellar nurseries, the very birth sites for the next generation of stars and planets in our Milky Way galaxy.

    The intense radiation from hot, young stars energises the gas particles, causing
    them to glow pink. Further from these nascent stars, colder regions consist of
    microscopic dust grains. These reflect starlight (a process known in astronomy as
    “scattering”), much like our atmosphere, creating the beautiful blue hues. Darker filaments within are much denser regions of dust, obscuring all but the brightest background stars.

    To detect these colours, astronomers use filters over their instruments, allowing only certain wavelengths of light onto the detectors. Rubin has six such filters, spanning from short ultraviolet (UV) wavelengths through the visible spectrum to longer near-infrared light. Combining information from these different filters enables detailed measurements of the properties of stars and gas, such as their temperature and size.
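    Colour-to-temperature estimates like these can be sketched with a simple formula. The snippet below is a toy illustration only, not Rubin’s actual pipeline (which uses its own ugrizy filter set and far more sophisticated modelling): it applies the Ballesteros (2012) black-body approximation to a B−V colour index, the classic two-filter colour, to estimate a star’s effective temperature. The example colour values are assumptions chosen for illustration.

```python
# Toy illustration: estimating a star's surface temperature from a colour
# index measured through two filters. Uses the Ballesteros (2012)
# black-body approximation for the B-V colour index.

def temperature_from_bv(b_minus_v: float) -> float:
    """Approximate effective temperature (K) from a B-V colour index."""
    return 4600.0 * (1.0 / (0.92 * b_minus_v + 1.7)
                     + 1.0 / (0.92 * b_minus_v + 0.62))

if __name__ == "__main__":
    # The Sun has B-V of roughly 0.65, corresponding to ~5,800 K.
    print(f"Sun-like star: {temperature_from_bv(0.65):.0f} K")
    # A hot, young blue star (B-V around -0.2) comes out far hotter.
    print(f"Hot blue star: {temperature_from_bv(-0.2):.0f} K")
```

    The same idea, generalised across Rubin’s six filters, is what lets astronomers infer properties such as temperature from imaging alone.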

    Rubin’s speed – its ability to take an image with one filter and then quickly move to the next – combined with the sheer area of sky it can see at any one time, is what makes it so unique and so exciting. The level of detail, revealing the finest and faintest structures, will enable it to map the substructure and satellite galaxies of the Milky Way like never before.

    Mapping galaxies across billions of light years

    This image captures a small section of NSF–DOE Vera C. Rubin Observatory’s view of the Virgo Cluster, offering a vivid glimpse of the variety in the cosmos.
    Credit: NSF–DOE Vera C. Rubin Observatory

    The images of galaxies powerfully demonstrate the scale at which the Rubin
    observatory will map the universe beyond our own Milky Way. The large galaxies
    visible here (such as the two bright spiral shaped galaxies visible in the lower right quarter of the picture) belong to the Virgo cluster, a giant structure containing more than 1,000 galaxies, each holding billions to trillions of stars.

    This image beautifully showcases the huge diversity of shapes, sizes and colours of galaxies in our universe revealed by Rubin in their full technicolour glory. Inside these galaxies, bright dots are visible – these are star-forming regions, just like the Lagoon and Trifid nebulae, but remarkably, these are millions of light years away from us.

    The still image captures just 2% of the area of a full Rubin image, revealing a universe that is teeming with celestial bodies. The full image, which contains around ten million galaxies, would need several hundred ultra high-definition TV screens to display in all its detail. By the end of its ten-year survey, Rubin will catalogue the properties of some 20 billion galaxies, with their colours and locations on the sky containing information about even more mysterious components of our universe, such as dark matter and dark energy. Dark matter makes up most of the matter in the cosmos, but does not reflect or emit light. Dark energy seems to be responsible for the accelerating expansion of the universe.
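    The “several hundred TV screens” figure is easy to sanity-check. Rubin’s LSST camera has a 3.2-gigapixel sensor, while a standard 4K UHD panel shows about 8.3 megapixels; a minimal back-of-envelope sketch under those assumptions:

```python
# Back-of-envelope check of the "several hundred TV screens" figure.
# Assumes Rubin's published 3.2-gigapixel camera and standard 4K UHD panels.

camera_pixels = 3.2e9        # LSST camera: ~3.2 gigapixels per image
uhd_pixels = 3840 * 2160     # one 4K UHD screen: ~8.3 megapixels

screens_needed = camera_pixels / uhd_pixels
print(f"4K screens to show one full image at native resolution: {screens_needed:.0f}")
```

    The result comes out in the high three hundreds, consistent with the article’s “several hundred” claim.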

    The UK’s role

    These unfathomable numbers demand data processing on a whole new scale.
    Uncovering new discoveries from this data requires a giant collaborative effort, in which UK astronomy is playing a major role. The UK will process around 1.5 million Rubin images and hosts one of three international data access centres for the project, providing scientists across the globe with access to the vast Rubin data. Here at the University of Southampton, we are leading two critical software
    development contributions to Rubin.

    First of these is the capability to combine the Rubin images with those at longer infrared wavelengths. This extends the colours that Rubin sees, providing key diagnostic information about the properties of stars and galaxies. Second is the software that will link Rubin observations to another new instrument called 4MOST, soon to be installed at the Vista telescope in Chile.

    Part of 4MOST’s job will be to snap up and classify rapidly changing “sources”, or objects, in the sky that have been discovered by Rubin. One such type of rapidly changing source is a stellar explosion known as a supernova. We expect to catalogue more supernova explosions within just two years than have ever been recorded before. Our contributions to the Rubin project will therefore lead to a totally new understanding of how the stars and galaxies in our universe live and die, offering an unprecedented glimpse into the grand cosmic cycle.

    The Rubin observatory isn’t just a new telescope – it’s a new pair of eyes on the
    universe, revealing the cosmos in unprecedented detail. A treasure trove of
discoveries awaits, but the most interesting among them will be the hidden secrets of the universe that we are yet to contemplate. The first images from Rubin have been a spectacular demonstration of the vastness of the universe. What might we find in
    this gargantuan dataset of the cosmos as the ultimate timelapse movie of our
    universe unfolds?

    Professor Manda Banerji receives funding from the Royal Society and the Science and Technology Facilities Council.

    Dr Philip Wiseman receives funding from the Science and Technology Facilities Council

    ref. Could the first images from the Vera Rubin telescope change how we view space for good? – https://theconversation.com/could-the-first-images-from-the-vera-rubin-telescope-change-how-we-view-space-for-good-259857

    MIL OSI Analysis

  • MIL-OSI Analysis: Chaotic new aid system means getting food in Gaza has become a matter of life – and often death

    Source: The Conversation – UK – By Leonie Fleischmann, Senior Lecturer in International Politics, City St George’s, University of London

    With all eyes on the ceasefire between Israel and Iran, which came into effect 12 days after Israel launched a major attack on Iran’s nuclear and military structure, attention towards Gaza has waned. This is at a time when attempting to gain access to food under a new model of aid distribution has been described by the United Nations as a “death trap”.

    According to the UN World Food Programme, more than 470,000 people are facing “catastrophic” hunger and the entire population is experiencing “acute” food insecurity. This was exacerbated when Israel imposed a blockade on the Strip in mid-March 2025, preventing the entry of food, medication and other aid for a period of 70 days.

    Following international pressure, Israel’s prime minister, Benjamin Netanyahu, ordered the resumption of humanitarian aid through a new model of distribution, which bypasses the existing UN and NGO channels. It was devised by Israel and handed to a United States-backed organisation, the Gaza Humanitarian Foundation (GHF) to operate.

    According to Netanyahu, taking control of aid delivery would prevent Hamas from seizing and selling supplies. Two of his cabinet ministers, far-right politicians Bezalel Smotrich and Itamar Ben Gvir, objected to any aid entering Gaza, due to the risk of it serving to bolster Hamas.

    A video was circulated on social media on June 26 allegedly showing armed men from Hamas commandeering aid trucks in northern Gaza. Smotrich threatened to leave the coalition if supplies continued to reach the hands of Hamas. In response, Netanyahu has since halted the entry of humanitarian aid into the north of Gaza.

    GHF was ostensibly established to improve the distribution of aid in Gaza. But the UN swiftly condemned its new distribution model as “inadequate, dangerous and a violation of impartiality rules”.

    Reports from one distribution site on its first day of operation on May 27 showed scenes of chaos and confusion. The site outside Rafah was described as overwhelmed, with hundreds of people rushing towards the aid boxes. The New York Times reported that Israel Defense Force (IDF) personnel fired several warning shots, which sent the crowd running away in panic.

    In the past two months, there have been continued reports of violence and chaos at the distribution sites, with deadly incidents a near daily occurrence. On the day the ceasefire between Iran and Israel was confirmed (June 24) at least 46 Palestinians waiting for aid in Gaza were shot by Israeli forces in two separate incidents, according to Gaza’s civil defence agency. Over 400 Palestinians have been killed around the four aid distribution centres since they began operating.

    Inbuilt chaos and lethal violence

    Arguably, this chaos and violence is inbuilt in the new aid delivery system. Even before it began operations, the GHF received widespread criticism.




    Read more: Lethal humanitarianism: why violence at Gaza aid centres should not come as a surprise


    A letter signed by leading aid and human rights organisations criticised the GHF for not meeting the four universally recognised principles for humanitarian action: humanity, neutrality, impartiality and independence.

    Critics say that the GHF system effectively militarises aid distribution. GHF’s leadership is made up of retired military officers and private security contractors, with some humanitarian aid officials. It coordinates with a private US security company on the ground in Gaza. Meanwhile the IDF patrols the perimeters at what it calls “secure distribution sites”.

    Critics argued that the proposed model would be insufficient. The plan called for only four aid distribution centres to be established in the southern part of the Gaza Strip, compared with about 400 UN-led sites in operation across Gaza prior to October 7 2023.

    The reduced number and location of the aid sites can be understood as a mechanism of forced displacement. It appears to be consistent with Netanyahu’s plan to relocate Palestinians to a “sterile zone” in Gaza’s far south. UN officials argued that the requirement for civilians to travel long distances and to cross Israeli military lines and combat zones to collect aid from the sites would “put civilian lives in danger and cause mass displacement while using aid as ‘bait’”. Forced displacement is illegal under international law.

    Countering the criticisms

    The GHF rejected claims that the IDF have attacked Palestinians at the aid sites. Reports from Israeli news outlets have also countered the widespread media claims.

    Israel Hayom, a free Israeli Hebrew-language daily newspaper, criticised “inflammatory” reports that the IDF had opened fire on Palestinians lining up for food. The right-leaning news outlet argued that it was Hamas which had shot at Gazan civilians.

    The broadcaster Arutz Sheva (Israel National News) reported that Hamas killed eight aid workers from the GHF in early June. A more positive spin from the same news outlet highlighted improvements that have been made to security at the centres, and that enough supplies for 1.4 million meals had been distributed in a single day on June 5.

    Despite these claims from within Israel, evidence presented by the UN has suggested that the aid mechanisms are not only failing to meet the humanitarian needs in Gaza, but are making “a desperate situation worse”.

    Following two months in operation, 15 human rights and legal organisations have called for the GHF to be suspended. They argue that “this new model of privatised, militarised aid distribution constitutes a radical and dangerous shift away from established international humanitarian relief operations”.

    As a consequence of both the controversial establishment of the GHF and its failures on the ground, they believe that its operations may amount to grave violations of international humanitarian, human rights and criminal law.

    Leonie Fleischmann does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Chaotic new aid system means getting food in Gaza has become a matter of life – and often death – https://theconversation.com/chaotic-new-aid-system-means-getting-food-in-gaza-has-become-a-matter-of-life-and-often-death-259815

    MIL OSI Analysis

  • MIL-OSI Analysis: The UK’s plan to genetically test all newborns sounds smart — until it creates patients who aren’t sick

    Source: The Conversation – UK – By Luca Stroppa, Postdoctoral Fellow on the project “Early Diagnosis – Handling Knowing”, University of St Andrews

    The current heel-prick test checks for nine rare genetic conditions, antibydni/Shutterstock

    By 2030, every baby born in the UK could have their entire genome sequenced under a new NHS initiative to “predict and prevent illness”. This would dramatically expand the current heel-prick test, which checks for nine rare genetic conditions, into a far more extensive screen of hundreds of potential risks.

    On the surface, the idea sounds like an obvious win for public health: spot problems early, intervene sooner and save lives. But genetic testing on this scale carries real risks, especially if the results are misunderstood or poorly communicated.

    The new plan builds on a recent NHS pilot study that sequenced the genomes of 100,000 newborns in England to identify more than 200 genetic conditions. However, these tests don’t provide clear-cut answers. They don’t offer a diagnosis or certainty, just an estimate of risk.

    A genetic result might suggest a child has a higher (or lower) probability of developing a certain disease later in life. But risk is not prediction. If parents, or even clinicians, misinterpret that nuance, the consequences could be serious.

    Some families may come to see a child flagged as “at risk” as a patient-in-waiting. In extreme cases, they may treat a probability as a certainty; assuming, for instance, that a child “has the gene” and will inevitably become ill. That assumption could reshape how children are raised, how they’re treated and how they see themselves.

    Alarming language

    This isn’t speculation. Research shows that while some people understand risk scores accurately, many struggle with statistical information. Words like “high risk” or “likely” are interpreted differently by different people and often more seriously than intended. Even trained doctors can overestimate what a positive test result means. When it comes to genomics, the line between “you might get sick” and “you will get sick” can blur quickly.

    UK policymakers haven’t helped this confusion. Government messaging refers to “diagnosis before symptoms even occur” and “leapfrogging disease.” But this language overpromises what genomic data can do and downplays its uncertainty.

    When testing is indiscriminate and communication unclear, the fallout can be wide ranging. Children identified as “high risk” may undergo years of monitoring, unnecessary medical appointments, or even treatment for diseases they never develop. In some cases, this leads to physical harms, from unnecessary medications to procedures with side effects. In others, the damage is psychological: shaping a child’s identity around an anticipated future of illness. These psychological effects can be lasting. Being told you’re likely to develop a condition like dementia may influence how a person plans their life, even if that illness never materialises.

    False positives

    There are also broader issues with applying this kind of screening to everyone. Risk-based testing works best when it is targeted: for example, among those with symptoms or a strong family history. But in the general population, where most people are healthy, false positives can far outnumber true positives. Even well-designed tests can produce misleading outcomes when applied at scale.

    This is a well-known statistical effect, discussed during the COVID pandemic. In populations where a disease is rare, even highly accurate tests produce more false positives than true ones. If DNA screening is rolled out universally, many families will be told their child is at risk when they are not. These false positives can lead to a cascade of further tests, stress and unnecessary clinical interventions; all of which consume time and resources and may cause real harm.

    This issue already affects adult testing. For example, Alzheimer’s tests that measure early changes in the brain work well in memory clinics, where patients already show symptoms. But when these same tests are used on the general population, where most people are healthy, they produce false positives in up to two-thirds of cases. If genetic screening in newborns is rolled out in the same way, it could lead to similar problems: mislabelling healthy children as sick, and causing unnecessary worry and follow-up tests.
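    The base-rate effect described above follows directly from Bayes’ theorem. The sketch below uses hypothetical numbers, chosen only for illustration: a condition affecting 1 in 1,000 newborns, and a test with 99% sensitivity and 95% specificity. Even with a test that good, most positive results turn out to be false alarms:

```python
# Why screening a whole population inflates false positives: even a good
# test yields a low positive predictive value when the condition is rare.
# All numbers below are hypothetical, for illustration only.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive result is a true positive (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    ppv = positive_predictive_value(prevalence=0.001,   # 1 in 1,000 newborns
                                    sensitivity=0.99,   # 99% of cases detected
                                    specificity=0.95)   # 5% false-positive rate
    print(f"Chance a flagged child actually has the condition: {ppv:.1%}")
    # With these numbers, roughly 98% of positive results are false alarms.
```

    Raising the prevalence in the tested group – for instance, by testing only children with symptoms or a family history – raises the positive predictive value sharply, which is the statistical case for targeted rather than universal screening.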

    So what’s the solution? It’s not to abandon genetic testing altogether – far from it. When used carefully, genomic data can offer real benefits, particularly for patients with symptoms or in research settings. But if we’re going to roll this out to every newborn, the surrounding infrastructure needs to be robust.

    That includes:

    • Clear, consistent communication: Risk scores must be explained in ways that emphasise uncertainty, not oversold as definitive predictions.

    • Support for parents: For consent to be truly informed, parents need help understanding that a genetic flag is not a diagnosis – and that many people with elevated risk never go on to develop the condition.

    • Training for clinicians: Many doctors still lack the tools to interpret and explain genetic information accurately and responsibly.

    • A national network of genetic counsellors: Genetic counsellors are essential for supporting families through testing and interpretation. But current numbers in the UK fall far short of what universal newborn screening would require.

    Genomic data holds great promise. But using it as a blanket tool for all newborns demands caution, clarity, and investment in communication and care. Without these safeguards, we risk turning healthy babies into patients-in-waiting.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. The UK’s plan to genetically test all newborns sounds smart — until it creates patients who aren’t sick – https://theconversation.com/the-uks-plan-to-genetically-test-all-newborns-sounds-smart-until-it-creates-patients-who-arent-sick-259816

    MIL OSI Analysis

  • MIL-OSI Analysis: Survey shows support for electoral reform now at 60% – so could it happen?

    Source: The Conversation – UK – By Alan Renwick, Professor of Democratic Politics, UCL

    Public support for reforming the UK’s first past the post electoral system has risen markedly of late. So is there any serious chance that such reform could actually happen?

    The annual British Social Attitudes survey (BSA) has been tracking public attitudes to electoral reform (and other issues) since 1983. It found consistent majorities for the status quo up to 2017, but charts a dramatic shift since then. In the latest BSA, support for reform has risen to 60%, with just 36% backing the current arrangements.

    It’s true that these views are unlikely to be deeply held: most people rarely think about electoral systems. But they do reflect a profound disillusionment with the way the political system is working.

    Significant electoral reforms are very rare outside times of regime change. When I wrote a book on the subject in 2010, there had been just six major reforms (from one system type to another) in national parliaments in established democracies since the second world war. That number has increased a little since then, but only because Italy has got into a pattern of endless tinkering. The basic pattern is one of stability.

    The main reason for that is obvious: those who gain power through the existing system rarely want to change it.

    Yet the cases where reform has happened reveal two basic routes through which such change can take place.

    First, those in power can conclude that a different system would better serve their interests. In 1985, for example, France’s president François Mitterrand replaced the system for electing the National Assembly because he feared heavy losses for his Socialist party in the looming elections.

    Second, leaders can cave in to public demands for reform because they fear that failing to do so will add to their unpopularity. This requires a scandal that affects people in their daily lives, and campaigners who successfully pin blame for that scandal on the voting system. It typically also needs at least a few reform advocates within government.



    These conditions characterised three major reforms in the 1990s, in Italy, Japan, and New Zealand. In the first two cases, rampant corruption fed economic woes and was attributed to the voting system. In New Zealand, first past the post enabled extreme concentration of power, which allowed successive governments to unleash radical, and widely disliked, economic restructuring.

    Prospects for reform in the UK

    If Labour continues to lag in the polls and votes remain fragmented across multiple parties, we might imagine reform by the first route in the UK. Ministers could calculate that a more proportional system would cut Labour’s losses, clip Nigel Farage’s wings, and reduce uncertainty.

    Yet majority parties facing heavy defeat almost never change the system in this way. Mitterrand’s reform of 1985 was a rare exception. Such parties always hope things will turn around. They don’t want to look like they have given up. And they are used to playing a game of alternation in power: they want to hold all the levers some of the time, and will tolerate years in the wilderness to get that.

    Reform by the second route is equally improbable. Notwithstanding great public dissatisfaction with the state of politics in the UK, there is little narrative that the electoral system is the source of the problem.

    But, depending on the results, the chances of reform could grow after the next general election.

    Change by the first route is most likely if no party comes close to a majority and a coalition is formed from multiple fragments. Those parties might all see reform as in their interests. Perhaps more likely, the smaller parties in such a coalition might push their larger partner into conceding a referendum – much as the Liberal Democrats did with the Conservatives in 2010. If support for the two big parties is disintegrating, referendum voters might opt for change – though that is not guaranteed.

    As for the second route, a majority victory for Reform UK that was generated by first past the post from a small vote share could – given the party’s marmite quality – trigger widespread public rejection of the voting system. A clear path to change might open up if Reform then lost a subsequent election, particularly if it lost to a coalition of parties, some of which backed reform already.

    In short, the shifting sands of politics are making electoral reform more likely. But almost certainly not before the 2030s. And much will depend on how the party system evolves in the years to come.

    This article includes links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

    Alan Renwick does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Survey shows support for electoral reform now at 60% – so could it happen? – https://theconversation.com/survey-shows-support-for-electoral-reform-now-at-60-so-could-it-happen-259851

    MIL OSI Analysis

  • MIL-OSI Analysis: Do the US public support Trump bombing Iran? Here’s what the data shows

    Source: The Conversation – UK – By Paul Whiteley, Professor, Department of Government, University of Essex

    Political scientists first identified a phenomenon known as the “rally round the flag” effect in the 1970s. This refers to the tendency for the US public to increase their support for a president when the country becomes involved in conflicts abroad. After the massive air strikes on Iran’s nuclear sites, the question is whether the US bombing missions will boost support for Donald Trump.

    An Economist/YouGov poll conducted between June 19 and June 23 suggests that it is unlikely that the Trump administration will experience a “rally round the flag” event after the US air strikes on June 22.

    The survey asked: “Do you think the U.S. military should or should not bomb Iranian nuclear facilities?” Some of those surveyed would have answered before the raids took place, while others were responding afterwards.

    Donald Trump makes a public announcement of the US air strikes on Iran.

    Altogether around 29% supported the bombing, with 46% opposed and 25% not sure. The chart identifies big differences between groups in their opinions about the raid, though. There’s a considerable gender divide, with 38% of men supporting the action (44% opposed), but only 21% of women in favour (48% opposed).

    In relation to ethnicity, 34% of white people supported the raid and 42% opposed it. In contrast, black people were much more likely to oppose (66%), with just 7% supportive. Among Hispanics, 26% supported and 43% opposed the bombing.

    There was also a wide divide in opinions among age groups, with only 15% of those aged 18 to 29 supporting the air strikes and 59% opposing them. This was the highest level of opposition from any age group. This chimes with a general lack of support for Trump from this generation, with a massive 70% saying, in the same poll, that the country was heading in the wrong direction.

    In contrast, those over the age of 65 were more in favour, with 42% supporting the military action and 37% opposing. This was the only age group in which supporters outnumbered opponents.




    Read more:
    Will Trump’s high-risk Iran strategy pay dividends at home if the peace deal holds?


    The group most opposed to the bombings were those with annual incomes over US$100,000 (£72,813), with 53% opposing and only 25% supporting. The lowest income group (those earning less than US$50,000) and middle income group (earning more than US$50,000 and less than US$100,000) had very similar views, with 30% and 31% supporting the attack respectively, and 45% and 46% opposing it.

    Should the US military bomb Iranian nuclear facilities?


    Author’s graph based on Economist/YouGov data, CC BY-ND

    Perhaps the most interesting statistic is what those who voted for Trump in the presidential election last year thought about the president’s decision to attack Iran. Around half, 51%, of them supported the bombing, with 24% opposed. In the case of Harris voters only 10% supported the action while 70% opposed it.

    We can get some idea of what prompts these responses by probing into the overall confidence the American people currently have in the Trump administration. There has been a gradual decline in the president’s job approval ratings, currently about 40% approve and 54% disapprove of his performance in the job. This compares with 43% approving and 51% disapproving in the Economist/YouGov survey published a month ago on May 19. Back on March 20, 48% of Americans approved of his job performance, while 49% disapproved.

    When asked if they have a favourable or unfavourable view of Trump, 41% say the former and 54% the latter. This has also become slightly more negative since the Economist’s survey in May, when 44% felt favourably and 53% unfavourably.

    Worries about a world war

    It appears that many Americans are becoming fearful about their country being drawn into a wider war. Respondents were asked if they thought there was a greater or lesser chance of a world war compared with five years ago. Around 58% thought the chances were greater, compared with only 11% who thought they were lower.

    A similar question asked if they thought the chances of a nuclear war were greater or lesser than five years ago. This produced a rather similar set of responses. No less than 52% thought there was a greater chance with only 12% thinking that the chances were lower.

    The final and in many ways the most striking responses of all related to the question: Do you think that things in this country today are under control or out of control? A surprising 65% thought they were out of control and only 21% thought the opposite. This suggests that Trump’s erratic behaviour has started to spook Americans on a large scale, since, like national leaders around the world, they do not know what he will do next.

    Paul Whiteley has received funding from the British Academy and the ESRC

    ref. Do the US public support Trump bombing Iran? Here’s what the data shows – https://theconversation.com/do-the-us-public-support-trump-bombing-iran-heres-what-the-data-shows-259841


  • MIL-OSI Analysis: Thimerosal discouraged in US flu vaccines, breaking with WHO guidance

    Source: The Conversation – UK – By Edward Beamer, Lecturer, Pharmacology, Sheffield Hallam University

    A federal vaccine panel, recently reshaped by US health secretary Robert F. Kennedy Jr., has voted to discourage the use of flu vaccines containing thimerosal, a mercury-based preservative. The decision marks a dramatic shift in vaccine policy, as thimerosal has long been considered safe by health agencies worldwide, with its use already limited to a few multi-dose flu shots.

    RFK Jr. has long linked thimerosal to autism – a connection that extensive scientific research has thoroughly debunked.

    Thimerosal is an organic chemical containing mercury, used as a preservative in vaccines since the 1930s. Its effect comes from the mercury, which disrupts the function of enzymes in microbes such as bacteria and fungi. This prevents contamination of vaccines while they are stored in vials. Mercury, however, is also well known as a potent toxin acting on cells in the brain.

    Much of mercury’s toxicity to brain cells stems from the same attributes that make thimerosal such a useful preservative. It disrupts the basic biological function of cells by changing the structure of proteins and enzymes.

    In the brain, this can cause neurons to become excessively active, impair the way they use energy, increase inflammation and lead to the death of neurons. While mercury poisoning can damage brain function in adults, babies are even more vulnerable.

    People have long understood that mercury is toxic. But in the latter half of the 20th century, scientists discovered that industrial mercury entered rivers and seas, accumulating in the tissues of fish and shellfish. The neurological consequences of consuming too much contaminated seafood could be severe. This led environmental scientists to determine safe levels of mercury exposure.

    Anxiety about mercury in vaccines intensified when it was noticed that some children receiving multiple vaccines could exceed established safety limits for mercury exposure. These limits were based on environmental toxicity studies. How mercury affects the brain, though, depends very much on the chemical form in which it is ingested.

    In the 20th century, scientists discovered that mercury accumulates in the fish that we eat.
    J nel/Shutterstock.com

    Methylmercury v ethylmercury

    The form of mercury that contaminates the environment as a consequence of industrial processes is methylmercury. The form that is part of thimerosal is ethylmercury.

    The structure of these molecules differs in subtle but important ways. Ethylmercury has one more carbon atom and two more hydrogen atoms than methylmercury. These small differences significantly affect how each compound behaves in the body, particularly in how easily it dissolves in fats.

    Fat solubility is a key consideration in pharmacokinetics – the science of how drugs and other molecules travel through the body. Briefly, because cell membranes are made of fatty substances, a molecule’s ability to dissolve in fats strongly influences how it crosses these membranes and moves through the body.

    It affects how a molecule is absorbed into the blood, how it is distributed to different tissues, how it is broken down by the body into other chemicals and how it is excreted.

    Methylmercury from environmental contamination is more fat-soluble than ethylmercury from thimerosal. This means that it accumulates more easily in tissues, and is excreted from the body more slowly.

    It also means that it can more easily cross into the brain and accumulate at greater concentrations for longer. For this reason, the safety guidelines that were established for methylmercury were unlikely to accurately predict the safety of ethylmercury.

    Global policy shift amid public fear

    Nevertheless, concerns about vaccine hesitancy, rising autism diagnoses and fears of a potential link to childhood vaccines led to thimerosal being almost entirely removed from childhood vaccines in the US by 2001 and in the UK between 2003 and 2005.

    Beyond biological considerations, policymakers were also responding to concerns about how vaccine fears could undermine immunisation efforts and fuel the spread of infectious diseases.

    Denmark, which removed thimerosal from childhood vaccines in 1992, provided an early opportunity to study the issue. Researchers compared the rates of autism before and after thimerosal’s removal as well as compared with similar countries still using it. Several large studies demonstrated conclusively that thimerosal was not causing autism or neurodevelopmental harm.

    Despite the overwhelming evidence that thimerosal is safe, it is no longer widely used in childhood vaccines in high-income countries, replaced by preservative-free vaccines, which must be stored as a single dose per vial.

    Storing multiple doses of a vaccine in the same vial, however, is still an extremely useful approach in resource-limited settings, in pandemics and where diseases require rapid, large-scale vaccination campaigns – common with influenza.

    International health bodies, including the World Health Organization, continue to support thimerosal’s use. They emphasise that the benefits of immunisation far outweigh the theoretical risks from low-dose ethylmercury exposure.

    Edward Beamer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Thimerosal discouraged in US flu vaccines, breaking with WHO guidance – https://theconversation.com/thimerosal-discouraged-in-us-flu-vaccines-breaking-with-who-guidance-259609


  • MIL-OSI Analysis: How strawberries and cream were a rare and exciting treat for Victorians – and then became a Wimbledon icon

    Source: The Conversation – UK – By Rebecca Earle, Professor of History, University of Warwick

    Strawberries and Cream by Raphaelle Peale (1816). National Gallery of Art

    Wimbledon is all about strawberries and cream (and of course tennis). The club itself describes strawberries and cream as “a true icon of The Championships”.

    While a meal at one of the club’s restaurants can set you back £130 or more, a bowl of the iconic dish is a modest £2.70 (up from £2.50 in 2024 – the first price rise in 15 years). In 2024 visitors munched their way through nearly 2 million berries.

    Strawberries and cream has a long association with Wimbledon. Even before lawn tennis was added to its activities, the All England Croquet Club (now the All England Lawn Tennis & Croquet Club) was serving strawberries and cream to visitors. They would have expected no less. Across Victorian Britain, strawberries and cream was a staple of garden parties of all sorts. Private affairs, political fundraisers and county cricket matches all typically served the dish.

    Alongside string bands and games of lawn tennis, strawberries and cream were among the pleasures that Victorians expected to encounter at a fête or garden party. As a result, one statistician wrote in the Dundee Evening Telegraph in 1889, Londoners alone consumed 12 million berries a day over the summer. At that rate, he explained, if strawberries were available year-round, Britons would spend 24 times more on strawberries than on missionary work, and twice as much as on education.


    Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


    But of course strawberries and cream were not available year-round. They were a delightful treat of the summer and the delicate berries did not last. Victorian newspapers, such as the Illustrated London News, complained that even the fruits on sale in London were a sad, squashed travesty of those eaten in the countryside, to say nothing of London’s cream, which might have been watered down.

    Wimbledon’s lawn tennis championships were held in late June or early July – in the midst, in other words, of strawberry season.

    Eating strawberries and cream had long been a distinctly seasonal pleasure. Seventeenth-century menu plans for elegant banquets offered strawberries, either with cream or steeped (rather deliciously, and I recommend you try this) in rose water, white wine, and sugar – as a suitable dish for the month of June.

    Strawberries and Cream by Robert Gemmell Hutchison (1855–1936).
    National Galleries of Scotland, CC BY-NC

    They were, in the view of the 17th-century gardener John Parkinson, “a cooling and pleasant dish in the hot summer season”. They were, in short, a summer food. That was still the case in the 1870s, when the Wimbledon tennis championship was established.

    This changed dramatically with the invention of mechanical refrigeration. From the late 19th century, new technologies enabled the global movement of chilled and frozen foods across vast oceans and spaces.

    Domestic ice-boxes and refrigerators followed. These modern devices were hailed as freeing us from the tyranny of seasons. As the Ladies Home Journal magazine proclaimed triumphantly in 1929: “Refrigeration wipes out seasons and distances … We grow perishable products in the regions best suited to them instead of being forced to stick close to the large markets.” Eating seasonally, or locally, was a tiresome constraint and it was liberating to be able to enjoy foods at whatever time of year we desired.

    As a result, points out historian Susan Friedberg, our concept of “freshness” was transformed. Consumers “stopped expecting fresh food to be just-picked or just-caught or just-killed. Instead, they expected to find and keep it in the refrigerator.”

    Strawberries and cream being enjoyed at Wimbledon.
    bonchan/Shutterstock

    Today, when we can buy strawberries year round, we have largely lost the excitement that used to accompany the advent of the strawberry season. Colour supplements and supermarket magazines do their best to drum up some enthusiasm for British strawberries, but we are far from the days when poets could rhapsodise about dairy maids “dreaming of their strawberries and cream” in the month of May.

    Strawberries and cream, once a “rare service” enjoyed in the short months from late April to early July, are now a season-less staple, available virtually year round from the global networks of commercial growers who supply Britain’s food. The special buzz about Wimbledon’s iconic dish of strawberries and cream is a glimpse into an earlier time, and reminds us that it was not always so.

    Rebecca Earle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How strawberries and cream were a rare and exciting treat for Victorians – and then became a Wimbledon icon – https://theconversation.com/how-strawberries-and-cream-were-a-rare-and-exciting-treat-for-victorians-and-then-became-a-wimbledon-icon-258629


  • MIL-OSI Analysis: Sixteenth-century tennis was a dangerous sport played with balls covered in wool

    Source: The Conversation – UK – By Penny Roberts, Professor of Early Modern European History, University of Warwick

    Portrait of a young boy with a paletta and a ball, late 16th century, artist unknown. Wiki Commons/Canva

    In 1570, a Frenchman was arrested for smuggling clandestine correspondence between France and England. A passing comment in his interrogation document reveals that he also happened to be carrying a leather bag “in which there were three or four dozen balls of wool for playing tennis”.

    The French term used was jeu de paume. This sport was played with the hand (palm), often gloved, rather than a racquet. This developed into the game that in English we usually refer to as “real tennis” (a different beast to the lawn tennis played at Wimbledon).

    The interrogator believed that this cheap merchandise was simply a ruse for the man’s true purpose of communicating with Huguenot exiles. I have written a book, Huguenot Networks, based on this interrogation document, which will be published by Cambridge University Press later this year. But, as a historian, I was intrigued by both the number and makeup of the goods he was transporting. The wool, if wrapped tightly, could certainly have made these balls bouncy.




    By chance, I encountered similar objects in a small display in the Palazzo Te in Mantua in Italy. These balls had apparently been retrieved from the palace roof and several others had come from a nearby church. They were variously made of leather, cloth and string rather than wool, probably stuffed with earth or animal hair. Just like the handmade “real tennis” balls of today, they were harder and more variable in size than regular tennis balls, and usually not so colourful, although sometimes having a simple painted design on the outside.

    Today, “real tennis” is known as the “sport of kings”, praised for testing agility and athletic prowess. The most famous court in England is at Hampton Court, but many others survive in the UK. For instance, there is one down the road from where I work at the University of Warwick, at Moreton Morrell in Warwickshire.

    Louis X of France popularised the sport.
    Gallica

    In the 16th century, real tennis attracted gamblers, making it a later target for Puritans. Anne Boleyn is said to have placed a wager on a match she was watching on the day of her arrest. And Henry VIII, fittingly, supposedly played a match on the day Boleyn was executed.

    And if there is any doubt about how dangerous tennis could be, several royal deaths in France are attributed to it. King Louis X of France was a keen player of jeu de paume, and the first ruler to order enclosed indoor courts to be constructed – a style that later became popular across Europe.

    In June 1316, after a particularly exhausting game, Louis X is said to have drunk a large quantity of chilled wine and soon afterwards died – probably of pleurisy, although there was some suspicion of poisoning.

    Likewise, in August 1536, the death of the 18-year-old dauphin, eldest son of Francis I, was blamed on his Italian secretary, the Count of Montecuccoli, who had brought him a glass of cold water after a match. The count was subsequently executed despite a post-mortem suggesting that the prince had died of natural causes.

    By the 16th century, there were two courts at the Louvre and many more around the city of Paris as well as at other royal residences. Ambassadors’ accounts describe frequent games between high-ranking courtiers and the king which could sometimes result in injury, especially if struck by one of the hard balls.

    Our man carrying many tennis balls in 1570 had probably spotted a lucrative opportunity in response to rising demand. The French game had become increasingly popular in England under the Tudors.

    By the Tudor period, no self-respecting European court was without its own purpose-built tennis courts where monarchs and their entourages tested their prowess and skill. They often did so before ambassadors, who could report back to their own rulers, making it a truly competitive international sport.

    Thankfully, today’s game has far fewer dangers – there’s no risk of being hit by a ball full of earth or the fear of mortal retribution after beating an exhausted high-ranking opponent.

    This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

    Penny Roberts does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Sixteenth-century tennis was a dangerous sport played with balls covered in wool – https://theconversation.com/sixteenth-century-tennis-was-a-dangerous-sport-played-with-balls-covered-in-wool-255643


  • MIL-OSI Analysis: Hurricane Helene set up future disasters, from landslides to flooding – cascading hazards like these are upending risk models

    Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science. Professor of Surficial and Sedimentary Geology, Indiana University

    The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed

    Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.

    Mudslides buried roads and reshaped river channels. Uprooted trees left soil on hillslopes exposed to the elements. Sediment that washed into rivers changed how water flows through the landscape, leaving some areas more prone to flooding and erosion.

    Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.

    These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.

    Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered landscape near Pensacola, N.C., in the Blue Ridge Mountains.
    Google Earth, CC BY

    I study these disasters as a geomorphologist. In a new paper in the journal Science, I and a team of scientists from 18 universities and the U.S. Geological Survey explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.

    The science behind cascading hazards

    Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, biosphere and the earth are constantly reshaping the conditions that cause natural disasters.

    For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.

    That’s exactly what happened after the 2008 earthquake in Sichuan Province, China, which led to a surge in debris flows long after the initial seismic event.

    A strong aftershock after a 7.8 magnitude earthquake in Sichuan province, China, in May 2008 triggered more landslides in central China.
    AP Photo/Andy Wong

    Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.

    The 1950 Assam earthquake in India is a striking example: It triggered thousands of landslides. The sediment from these landslides gradually moved through the river system, eventually causing flooding and changing river channels in Bangladesh some 20 years later.

    An intensifying threat in a changing world

    These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.

    Cascading hazards are not new, but their impact is intensifying.

    Climate change is increasing the frequency and severity of wildfires, storms and extreme rainfall. At the same time, urban development continues to expand into steep, hazard-prone terrain, exposing more people and infrastructure to evolving risks.

    The rising risk of interconnected climate disasters like these is overwhelming systems built for isolated events.

    Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.

    Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.

    Rethinking risk and building resilience

    Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.

    But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.

    A March 2024 landslide in the Oregon Coast Range wiped out trees in its path.
    Brian Yanites, June 2025
    A drone image of the same March 2024 landslide in the Oregon Coast Range shows where it temporarily dammed the river below.
    Brian Yanites, June 2025

    Thanks to advances in Earth observation technology – such as satellite imagery, drones and lidar, which is similar to radar but uses light – scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.

    Researchers are already coupling weather forecasts with post-wildfire debris flow models. Other models simulate how sediment pulses travel through river networks.

    Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.

    Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.

    Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.

    Brian J. Yanites receives funding from the National Science Foundation.

    ref. Hurricane Helene set up future disasters, from landslides to flooding – cascading hazards like these are upending risk models – https://theconversation.com/hurricane-helene-set-up-future-disasters-from-landslides-to-flooding-cascading-hazards-like-these-are-upending-risk-models-259502


  • MIL-OSI Analysis: Checking in on New England’s fishing industry 25 years after ‘The Perfect Storm’ hit movie theaters

    Source: The Conversation – USA – By Stephanie Otts, Director of National Sea Grant Law Center, University of Mississippi

    Filming ‘The Perfect Storm’ in Gloucester Harbor, Mass.
    The Salem News Historic Photograph Collection, Salem State University Archives and Special Collections, CC BY

    Twenty-five years ago, “The Perfect Storm” roared into movie theaters. The disaster flick, starring George Clooney and Mark Wahlberg, was a riveting, fictionalized account of commercial swordfishing in New England and a crew who went down in a violent storm.

    The anniversary of the film’s release, on June 30, 2000, provides an opportunity to reflect on the real-life changes to New England’s commercial fishing industry.

    Fishing was once more open to all

    In the true story behind the movie, six men lost their lives in late October 1991 when the commercial swordfishing vessel Andrea Gail disappeared in a fierce storm in the North Atlantic as it was headed home to Gloucester, Massachusetts.

    At the time, and until very recently, almost all commercial fisheries were open access, meaning there were no restrictions on who could fish.

    There were permit requirements and regulations about where, when and how you could fish, but anyone with the means to purchase a boat and the associated permits, gear, bait and fuel could enter the fishery. Before the start of each fishing season, eight regional councils – established under a 1976 federal law to manage fisheries around the U.S. – determined how many fish could be harvested.

    Fishing has been an integral part of coastal New England culture since its towns were established. In this 1899 photo, a New England community weighs and packs mackerel.
    Charles Stevenson/Freshwater and Marine Image Bank

    Fishing started when the season opened and continued until the catch limit was reached. In some fisheries, this resulted in a “race to the fish,” or “derby,” in which vessels competed aggressively to harvest the available catch as quickly as possible. The limit could be reached in a single day, as happened in the Pacific halibut fishery in the late 1980s.
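
    The derby dynamic can be sketched in a few lines of code. The toy model below is illustrative only – real seasons closed on reported landings, not a fixed daily catch rate – but it shows why adding boats under open access shortened the season rather than raising total catch: the fleet simply hits the shared quota sooner.

```python
def derby_season(total_quota, n_boats, catch_per_boat_per_day):
    """Toy 'derby' model: the whole fleet fishes until the shared quota is hit.

    Returns (days_fished, fleet_catch). Illustrative only -- real seasons
    closed on reported landings, not on a fixed daily catch rate.
    """
    daily_fleet_catch = n_boats * catch_per_boat_per_day
    days, caught = 0, 0.0
    while caught < total_quota:
        days += 1
        caught = min(caught + daily_fleet_catch, total_quota)
    return days, caught

# Ten times the boats, one-tenth the season -- and the same total catch.
print(derby_season(1000.0, 10, 2.0))   # (50, 1000.0)
print(derby_season(1000.0, 100, 2.0))  # (5, 1000.0)
```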

    By the 1990s, however, open-access systems were coming under increasing criticism from economists as concerns about overfishing rose.

    New England’s fish catch peaked in 1987 and remained far above what fish populations could sustain for two more decades. Years of overfishing led to the collapse of fish stocks, including North Atlantic cod in 1992 and Pacific sardine in 2015.

    As populations declined, managers responded by cutting catch limits to allow more fish to survive and reproduce. Fishing seasons were shortened, as it took less time for the fleets to harvest the allowed catch. It became increasingly hard for fishermen to catch enough fish to earn a living.

    Saving fisheries changed the industry

    In the early 2000s, as these economic and environmental challenges grew, fisheries managers started limiting access. Instead of allowing anyone to fish, only vessels or individuals meeting certain eligibility requirements would have the right to fish.

    The most common method of limiting access in the U.S. is through limited entry permits, initially awarded to individuals or vessels based on previous participation or success in the fishery. Another approach is to assign individual harvest quotas or “catch shares” to permit holders, limiting how much each boat can bring in.
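
    The catch-share arithmetic is simple: each permit holder’s annual quota is their fixed share of the total allowable catch set for the season. A minimal sketch (the boat names, shares and totals below are invented for illustration):

```python
# Toy catch-share allocation. Managers set a total allowable catch (TAC)
# each season; each permit holder owns a fixed percentage of it.
shares = {"FV Alice": 0.40, "FV Bravo": 0.35, "FV Carol": 0.25}  # invented
tac_lbs = 1_000_000  # illustrative TAC in pounds

quotas = {boat: share * tac_lbs for boat, share in shares.items()}
print(quotas)  # {'FV Alice': 400000.0, 'FV Bravo': 350000.0, 'FV Carol': 250000.0}
```

    Because each boat’s quota is guaranteed, there is no race: a holder can fish whenever conditions are favorable, an argument often made for catch shares over derby-style seasons.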

    In 2007, Congress amended the 1976 Magnuson-Stevens Fishery Conservation and Management Act to promote the use of limited access programs in U.S. fisheries.

    Ships in the fleet out of New Bedford, Mass.
    Henry Zbyszynski/Flickr, CC BY

    Today, limited access is common, and there are positive signs that the management change is helping achieve the law’s environmental goal of preventing overfishing. Since 2000, the populations of 50 major fishing stocks have been rebuilt, meaning they have recovered to a level that can once again support fishing.

    I’ve been following the changes as a lawyer focused on ocean and coastal issues, and I see much work still to be done.

    Forty fish stocks are currently being managed under rebuilding plans that limit catch to allow the stock to grow, including Atlantic cod, which has struggled to recover due to a complex combination of factors, including climatic changes.

    The lingering effect on communities today

    While many fish stocks have recovered, the effort came at an economic cost to many individual fishermen. The limited-access Northeast groundfish fishery, which includes Atlantic cod, haddock and flounder, shed nearly 800 crew positions between 2007 and 2015.

    The loss of jobs and revenue from fishing impacts individual family income and relationships, strains other businesses in fishing communities, and affects those communities’ overall identity and resilience, as illustrated by a recent economic snapshot of the Alaska seafood industry.

    When original limited-access permit holders leave the business – for economic, personal or other reasons – their permits are either terminated or sold to other eligible permit holders, leading to fewer active vessels in the fleet. As a result, the number of vessels fishing for groundfish has declined from 719 in 2007 to 194 in 2023, meaning fewer jobs.
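
    Those fleet numbers imply a steep, compounding decline. A quick back-of-envelope calculation, using only the two figures quoted above:

```python
# Annualized rate of decline for the Northeast groundfish fleet:
# 719 vessels in 2007 down to 194 in 2023.
start, end, years = 719, 194, 2023 - 2007
annual_rate = (end / start) ** (1 / years) - 1
print(f"{annual_rate:.1%} per year")  # roughly -7.9% per year
```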

    A fisherman unloads a portion of his catch for the day of 300 pounds of groundfish, including flounder, in January 2006 in Gloucester, Mass.
    AP Photo/Lisa Poole

    Because of their scarcity, limited-access permits can cost upward of US$500,000, often beyond the financial means of a small business or a young person seeking to enter the industry. The high prices may also lead retiring fishermen to sell their permits rather than passing them along, with their vessels, to the next generation.

    These economic forces have significantly altered the fishing industry, leading to more corporate and investor ownership, rather than the family-owned operations that were more common in the Andrea Gail’s time.

    Similar to the experience of small family farms, fishing captains and crews are being pushed into corporate arrangements that reduce their autonomy and revenues.

    Consolidation can threaten the future of entire fleets, as New Bedford, Massachusetts, saw when Blue Harvest Fisheries, backed by a private equity firm, bought up vessels and other assets and then declared bankruptcy a few years later, leaving a smaller fleet and some local businesses and fishermen unpaid for their work. A company with local connections bought eight vessels from Blue Harvest along with 48 state and federal permits the company held.

    New challenges and unchanging risks

    While there are signs of recovery for New England’s fisheries, challenges continue.

    Warming water temperatures have shifted the distribution of some species, affecting where and when fish are harvested. For example, lobsters have moved north toward Canada. When vessels need to travel farther to find fish, that increases fuel and supply costs and time away from home.

    Fisheries managers will need to continue to adapt to keep New England’s fisheries healthy and productive.

    One thing that, unfortunately, hasn’t changed is the dangerous nature of the occupation. Between 2000 and 2019, 414 fishermen died in 245 disasters.

    Stephanie Otts receives funding from the NOAA National Sea Grant College Program through the U.S. Department of Commerce. Previous support for fisheries management legal research provided by The Nature Conservancy.

    ref. Checking in on New England’s fishing industry 25 years after ‘The Perfect Storm’ hit movie theaters – https://theconversation.com/checking-in-on-new-englands-fishing-industry-25-years-after-the-perfect-storm-hit-movie-theaters-255076

    MIL OSI Analysis

  • MIL-OSI Analysis: Hurricane Helene set up future disasters, from landslides to flooding – cascading hazards like these are now upending risk models

    Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science; Professor of Surficial and Sedimentary Geology, Indiana University

    The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed

    Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.

    Mudslides buried roads and reshaped river channels. Uprooted trees left soil on hillslopes exposed to the elements. Sediment that washed into rivers changed how water flows through the landscape, leaving some areas more prone to flooding and erosion.

    Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.

    These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.

    Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered landscape near Pensacola, N.C., in the Blue Ridge Mountains.
    Google Earth, CC BY

    I study these disasters as a geomorphologist. In a new paper in the journal Science, colleagues from 18 universities and the U.S. Geological Survey and I explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.

    The science behind cascading hazards

    Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, the biosphere and the solid earth are constantly reshaping the conditions that cause natural disasters.

    For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.

    That’s exactly what happened after the 2008 earthquake in Sichuan Province, China, which led to a surge in debris flows long after the initial seismic event.

    A strong aftershock after the 7.8 magnitude earthquake in Sichuan Province, China, in May 2008 triggered more landslides in central China.
    AP Photo/Andy Wong

    Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.

    The 1950 Assam earthquake in India is a striking example: It triggered thousands of landslides. The sediment from these landslides gradually moved through the river system, eventually causing flooding and changing river channels in Bangladesh some 20 years later.

    An intensifying threat in a changing world

    These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.

    Cascading hazards are not new, but their impact is intensifying.

    Climate change is increasing the frequency and severity of wildfires, storms and extreme rainfall. At the same time, urban development continues to expand into steep, hazard-prone terrain, exposing more people and infrastructure to evolving risks.

    The rising risk of interconnected climate disasters like these is overwhelming systems built for isolated events.

    Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.

    Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.

    Rethinking risk and building resilience

    Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.

    But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.
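
    As a minimal sketch of the kind of physics such models embed – the parameter values here are illustrative, not taken from the paper – the classic infinite-slope stability calculation shows how rainfall-driven pore pressure lowers a hillslope’s factor of safety toward failure:

```python
import math

def factor_of_safety(slope_deg, soil_depth_m, sat_frac,
                     cohesion_pa=5_000.0, friction_deg=30.0,
                     unit_weight=18_000.0, water_unit_weight=9_810.0):
    """Infinite-slope factor of safety; FS < 1 implies slope failure.

    sat_frac is the saturated fraction of the soil column: rainfall raises
    it, which raises pore pressure and cuts the frictional resistance.
    All parameter defaults are illustrative, in SI units (Pa, N/m^3).
    """
    theta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    normal = unit_weight * soil_depth_m * math.cos(theta) ** 2
    pore = water_unit_weight * sat_frac * soil_depth_m * math.cos(theta) ** 2
    driving = unit_weight * soil_depth_m * math.sin(theta) * math.cos(theta)
    return (cohesion_pa + (normal - pore) * math.tan(phi)) / driving

# The same 35-degree slope, nearly dry vs. fully saturated by a storm:
print(factor_of_safety(35, 2.0, sat_frac=0.1))  # above 1: marginally stable
print(factor_of_safety(35, 2.0, sat_frac=1.0))  # below 1: primed to fail
```

    Driving `sat_frac` with rainfall forecasts, rather than with historical averages, is the kind of real-time updating the article describes.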

    A March 2024 landslide in the Oregon Coast Range wiped out trees in its path.
    Brian Yanites, June 2025
    A drone image of the same March 2024 landslide in the Oregon Coast Range shows where it temporarily dammed the river below.
    Brian Yanites, June 2025

    Thanks to advances in Earth observation technology such as satellite imagery, drones and lidar (which is similar to radar but uses light), scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.

    Researchers are already coupling weather forecasts with post-wildfire debris flow models. Other models simulate how sediment pulses travel through river networks.
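
    A toy version of the second idea – a landslide-derived sediment pulse translating and spreading down a river – can be written as a one-dimensional routing loop. This illustrates the general concept only; it is not one of the models the article refers to, and the cell counts and rates are invented.

```python
def route_pulse(sed, diffuse=0.25, steps=30):
    """Route a sediment pulse down a 1-D reach, one cell per step.

    Each step: a fraction `diffuse` of every cell's load spreads to each
    neighbor (the pulse widens), then the whole profile shifts one cell
    downstream (the pulse translates). Sediment exits at the last cell.
    """
    sed = list(sed)
    for _ in range(steps):
        new = [0.0] * len(sed)
        for i, load in enumerate(sed):
            left = load * diffuse if i > 0 else 0.0
            right = load * diffuse if i < len(sed) - 1 else 0.0
            new[i] += load - left - right
            if i > 0:
                new[i - 1] += left
            if i < len(sed) - 1:
                new[i + 1] += right
        sed = [0.0] + new[:-1]  # advect one cell downstream
    return sed

# A landslide dumps 100 units of sediment near the head of a 60-cell reach.
reach = [0.0] * 60
reach[2] = 100.0
out = route_pulse(reach)
peak = max(range(len(out)), key=out.__getitem__)
print(peak, round(out[peak], 1))  # the pulse is ~30 cells downstream, lower and wider
```

    The decades-long lag in the Assam example above is this same behavior played out over a continental river system.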

    Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.

    Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.

    Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.

    Brian J. Yanites receives funding from the National Science Foundation.

    ref. Hurricane Helene set up future disasters, from landslides to flooding – cascading hazards like these are now upending risk models – https://theconversation.com/hurricane-helene-set-up-future-disasters-from-landslides-to-flooding-cascading-hazards-like-these-are-now-upending-risk-models-259502

    MIL OSI Analysis

  • MIL-OSI Analysis: Natural disasters don’t disappear when the storm ends or the earthquake stops – they evolve

    Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science. Professor of Surficial and Sedimentary Geology, Indiana University

    The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed

    Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.

    Mudslides buried roads and reshaped river channels. Uprooted trees left soil on hillslopes exposed to the elements. Sediment that washed into rivers changed how water flows through the landscape, leaving some areas more prone to flooding and erosion.

    Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.

    These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.

    Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered landscape near Pensacola, N.C., in the Blue Ridge Mountains.
    Google Earth, CC BY

    I study these disasters as a geomorphologist. In a new paper in the journal Science, I and a team of scientists from 18 universities and the U.S. Geological Survey explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.

    The science behind cascading hazards

    Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, biosphere and the earth are constantly reshaping the conditions that cause natural disasters.

    For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.

    That’s exactly what happened after the 2008 earthquake in Sichuan Province, China, which led to a surge in debris flows long after the initial seismic event.

    A strong aftershock after a 7.8 magnitude earthquake in Sichuan province, China, in May 2008 triggered more landslides in central China.
    AP Photo/Andy Wong

    Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.

    The 1950 Assam earthquake in India is a striking example: It triggered thousands of landslides. The sediment from these landslides gradually moved through the river system, eventually causing flooding and changing river channels in Bangladesh some 20 years later.

    An intensifying threat in a changing world

    These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.

    Cascading hazards are not new, but their impact is intensifying.

    Climate change is increasing the frequency and severity of wildfires, storms and extreme rainfall. At the same time, urban development continues to expand into steep, hazard-prone terrain, exposing more people and infrastructure to evolving risks.

    The rising risk of interconnected climate disasters like these is overwhelming systems built for isolated events.

    Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.

    Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.

    Rethinking risk and building resilience

    Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.

    But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.

    A March 2024 landslide in the Oregon Coast Range wiped out trees in its path.
    Brian Yanites, June 2025
    A drone image of the same March 2024 landslide in the Oregon Coast Range shows where it temporarily dammed the river below.
    Brian Yanites, June 2025

    Thanks to advances in Earth observation technology, such as satellite imagery, drone and lidar, which is similar to radar but uses light, scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.

    Researchers are already coupling weather forecasts with post-wildfire debris flow models. Other models simulate how sediment pulses travel through river networks.

    Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.

    Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.

    Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.

    Brian J. Yanites receives funding from the National Science Foundation.

    ref. Natural disasters don’t disappear when the storm ends or the earthquake stops – they evolve – https://theconversation.com/natural-disasters-dont-disappear-when-the-storm-ends-or-the-earthquake-stops-they-evolve-259502

    MIL OSI Analysis

  • MIL-OSI Analysis: Natural disasters don’t disappear when the storm ends or the earthquake stops – they evolve

    Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science. Professor of Surficial and Sedimentary Geology, Indiana University

    The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed

    Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.

    Mudslides buried roads and reshaped river channels. Uprooted trees left soil on hillslopes exposed to the elements. Sediment that washed into rivers changed how water flows through the landscape, leaving some areas more prone to flooding and erosion.

    Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.

    These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.

    Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered landscape near Pensacola, N.C., in the Blue Ridge Mountains.
    Google Earth, CC BY

    I study these disasters as a geomorphologist. In a new paper in the journal Science, I and a team of scientists from 18 universities and the U.S. Geological Survey explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.

    The science behind cascading hazards

    Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, biosphere and the earth are constantly reshaping the conditions that cause natural disasters.

    For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.

    That’s exactly what happened after the 2008 earthquake in Sichuan Province, China, which led to a surge in debris flows long after the initial seismic event.

    A strong aftershock after a 7.8 magnitude earthquake in Sichuan province, China, in May 2008 triggered more landslides in central China.
    AP Photo/Andy Wong

    Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.

    The 1950 Assam earthquake in India is a striking example: It triggered thousands of landslides. The sediment from these landslides gradually moved through the river system, eventually causing flooding and changing river channels in Bangladesh some 20 years later.

    An intensifying threat in a changing world

    These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.

    Cascading hazards are not new, but their impact is intensifying.

    Climate change is increasing the frequency and severity of wildfires, storms and extreme rainfall. At the same time, urban development continues to expand into steep, hazard-prone terrain, exposing more people and infrastructure to evolving risks.

    The rising risk of interconnected climate disasters like these is overwhelming systems built for isolated events.

    Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.

    Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.

    Rethinking risk and building resilience

    Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.

    But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.

    A March 2024 landslide in the Oregon Coast Range wiped out trees in its path.
    Brian Yanites, June 2025
    A drone image of the same March 2024 landslide in the Oregon Coast Range shows where it temporarily dammed the river below.
    Brian Yanites, June 2025

    Thanks to advances in Earth observation technology, such as satellite imagery, drone and lidar, which is similar to radar but uses light, scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.

    Researchers are already coupling weather forecasts with post-wildfire debris flow models. Other models simulate how sediment pulses travel through river networks.

    Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.

    Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.

    Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.

    Brian J. Yanites receives funding from the National Science Foundation.

    ref. Natural disasters don’t disappear when the storm ends or the earthquake stops – they evolve – https://theconversation.com/natural-disasters-dont-disappear-when-the-storm-ends-or-the-earthquake-stops-they-evolve-259502

    MIL OSI Analysis

  • MIL-OSI Analysis: Natural disasters don’t disappear when the storm ends or the earthquake stops – they evolve

    Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science. Professor of Surficial and Sedimentary Geology, Indiana University

    The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed

    Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.

    Mudslides buried roads and reshaped river channels. Uprooted trees left soil on hillslopes exposed to the elements. Sediment that washed into rivers changed how water flows through the landscape, leaving some areas more prone to flooding and erosion.

    Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.

    These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.

    Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered landscape near Pensacola, N.C., in the Blue Ridge Mountains.
    Google Earth, CC BY

    I study these disasters as a geomorphologist. In a new paper in the journal Science, I and a team of scientists from 18 universities and the U.S. Geological Survey explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.

    The science behind cascading hazards

    Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, biosphere and the earth are constantly reshaping the conditions that cause natural disasters.

    For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.

    That’s exactly what happened after the 2008 earthquake in Sichuan Province, China, which led to a surge in debris flows long after the initial seismic event.

    A strong aftershock after a 7.8 magnitude earthquake in Sichuan province, China, in May 2008 triggered more landslides in central China.
    AP Photo/Andy Wong

    Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.

    The 1950 Assam earthquake in India is a striking example: It triggered thousands of landslides. The sediment from these landslides gradually moved through the river system, eventually causing flooding and changing river channels in Bangladesh some 20 years later.

    An intensifying threat in a changing world

    These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.

    Cascading hazards are not new, but their impact is intensifying.

    Climate change is increasing the frequency and severity of wildfires, storms and extreme rainfall. At the same time, urban development continues to expand into steep, hazard-prone terrain, exposing more people and infrastructure to evolving risks.

    The rising risk of interconnected climate disasters like these is overwhelming systems built for isolated events.

    Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.

    Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.

    Rethinking risk and building resilience

    Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.

    But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.

    A March 2024 landslide in the Oregon Coast Range wiped out trees in its path.
    Brian Yanites, June 2025
    A drone image of the same March 2024 landslide in the Oregon Coast Range shows where it temporarily dammed the river below.
    Brian Yanites, June 2025

    Thanks to advances in Earth observation technology, such as satellite imagery, drones and lidar, which is similar to radar but uses light, scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.

    Researchers are already coupling weather forecasts with post-wildfire debris flow models. Other models simulate how sediment pulses travel through river networks.

    Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.

    Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.

    Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.

    Brian J. Yanites receives funding from the National Science Foundation.

    ref. Natural disasters don’t disappear when the storm ends or the earthquake stops – they evolve – https://theconversation.com/natural-disasters-dont-disappear-when-the-storm-ends-or-the-earthquake-stops-they-evolve-259502

    MIL OSI Analysis

  • MIL-OSI Analysis: Michelin Guide scrutiny could boost Philly tourism, but will it stifle chefs’ freedom to experiment and innovate?

    Source: The Conversation – USA – By Jonathan Deutsch, Professor of Food and Hospitality Management, Drexel University

    Chef Phila Lorn prepares a bowl of noodle soup at Mawn restaurant in Philadelphia. AP Photo/Matt Rourke

    The Philadelphia restaurant scene is abuzz with the news that the famed Michelin Guide is coming to town.

    As a research chef and educator at Drexel University in Philadelphia, I am following the Michelin developments closely.

    Having eaten in Michelin restaurants in other cities, I am confident that Philly has at least a few star-worthy restaurants. Our innovative dining scene was named one of the top 10 in the U.S. by Food & Wine in 2025.

    Researchers have convincingly shown that Michelin ratings can boost tourism, so Philly gaining some starred restaurants could bring more revenue for the city.

    But as the lead author of the textbook “Culinary Improvisation,” which teaches creativity, I also worry the Michelin scrutiny could make chefs more focused on delivering a consistent experience than continuing along the innovative trajectory that attracts Michelin in the first place.

    Ingredients for culinary innovation

    In “Culinary Improvisation” we discuss three elements needed to foster innovation in the kitchen.

    The first is mastery of culinary technique, both classical and modern. Simply stated, this refers to good cooking.

    The second is access to a diverse range of ingredients and flavors. The more colors the artist has on their palette, the more directions the creation can take.

    And the third, which is key to my concerns, is a collaborative and supportive environment where chefs can take risks and make mistakes. Research shows a close link between risk-taking workplaces and innovation.

    According to the Michelin Guide, stars are awarded to outstanding restaurants based on: “quality of ingredients, mastery of cooking techniques and flavors, the personality of the chef as expressed in the cuisine, value for money, and consistency of the dining experience both across the menu and over time.”

    The criteria do not mention innovation.

    It’s possible the high-stakes lure of a Michelin star, which awards consistent excellence, could lead Philly’s most vibrant and creative chefs and restaurateurs to pull back on the risks that led to the city’s culinary excellence in the first place.

    Local food writers believe Vernick Fish is a top contender for a Michelin star.
    Photo courtesy of Vernick Fish

    The obvious contenders

    Philadelphia’s preeminent restaurant critic Craig LaBan and journalist and former restaurateur Kiki Aranita discussed local contenders for Michelin stars in a recent article in the Philadelphia Inquirer.

    The 19 restaurants LaBan and Aranita discuss as possible star contenders average just over a one-mile walk from the Pennsylvania Convention Center.

    Together they have received 78 James Beard nominations or awards, which are considered the “Oscars” of the food industry. That’s an average of over four per restaurant.

    And when I tried to book a table for two before 9 p.m. on a Wednesday and a Saturday two weeks out, about half of the restaurants were already fully booked for dinner – in July, the slow season for dining in Philadelphia.

    If LaBan’s and Aranita’s predictions are right, Michelin will be an added recognition for restaurants that are already successful and centrally located.

    Black Dragon Takeout fuses Black American cuisine with the aesthetics of classic Chinese American takeout.
    Jeff Fusco/The Conversation, CC BY-SA

    Off the beaten path

    When the Michelin Guide started in France at the turn of the 20th century, it encouraged diners to take the road less traveled to their next gastronomic experience.

    It has since evolved into recommendations for a road well traveled: safe, lauded and already hard-to-get-into restaurants. In Philly these could be restaurants such as Vetri Cucina, Zahav, Vernick Fish, Provenance, Royal Sushi and Izakaya, Ogawa and Friday Saturday Sunday, to name a few on LaBan and Aranita’s list.

    And yet Philadelphia has over 6,000 restaurants spread across 135 square miles of the city. Philadelphia is known as a city of neighborhoods, and these neighborhoods are rich with food diversity and innovation.

    Consider Jacob Trinh’s Vietnamese-tinged seafood tasting menu at Little Fish in Queen Village; Kurt Evans’ gumbo lo mein at Black Dragon Takeout in West Philly; the beef cheek confit with avocado mousse at Temir Satybaldiev’s Ginger in the Northeast; and the West African XO sauce at Honeysuckle, owned by Omar Tate and Cybille St.Aude-Tate, on North Broad Street.

    I hope the Michelin inspectors will venture far beyond the obvious candidates to experience more of what Philadelphia has to offer.

    The Michelin Guide announced it will include Philadelphia and Boston in its next Northeast Cities edition.
    Matthieu Delaty/Hans Lucas/AFP via Getty Images

    Raising the bar

    In the frenzy surrounding the Michelin scrutiny, chef friends have invited me to dine at their restaurants and share my feedback as they refine their menus in anticipation of visits from anonymous Michelin inspectors.

    Restaurateurs have been asking my colleagues and me for talent suggestions to replace well-liked and capable cooks, servers and managers whom owners perceive to be just not Michelin-star level.

    And managers are texting us names of suspected reviewers, triggered by some tell-tale signs – a solo diner with a weeknight tasting menu reservation, no dietary restrictions or special requests, and a conspicuously light internet presence.

    In all, I am excited about Philadelphians being excited about Michelin. Any opportunity to spotlight the city’s restaurant community and tighten its food and service quality raises the bar among local chefs and restaurateurs and makes the experience better for diners. And the prospect of business travelers and culinary tourists enjoying lunches and early-week dinners can help restaurants, their workers and the city earn more revenue.

    But in the din of the press events and hype, let’s not forget that Philadelphians don’t need an outside arbiter to tell us what we already know: Philly is a great place to eat and drink.

    Read more of our stories about Philadelphia.

    Jonathan Deutsch does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Michelin Guide scrutiny could boost Philly tourism, but will it stifle chefs’ freedom to experiment and innovate? – https://theconversation.com/michelin-guide-scrutiny-could-boost-philly-tourism-but-will-it-stifle-chefs-freedom-to-experiment-and-innovate-256752


  • MIL-OSI Analysis: Scandinavia has its own dark history of assimilating Indigenous people, and churches played a role – but are apologizing

    Source: The Conversation – USA – By Thomas A. DuBois, Professor of Scandinavian Studies, Folklore, and Religious Studies, University of Wisconsin-Madison

    A church in Kiruna, Sweden, designed by architect Gustaf Wickman to resemble a Sámi hut. Apolline Guillerot-Malick/SOPA Images/LightRocket via Getty Images

    In May 2025, Tapio Luoma, archbishop of the Evangelical Lutheran Church of Finland, delivered an apology to the Sámi, the only recognized Indigenous people in the European Union.

    Speaking on behalf of the church to which more than 6 in 10 Finns belong, including most Sámi, Luoma acknowledged its role in past activities that stigmatized Sámi language and culture.

    The church “has not respected the rights to self-determination of the Sámi people,” his address began. “Before God and all of you here assembled, we express our regret and ask forgiveness of the Sámi people.”

    Luoma’s words were the latest in a series of apologies through which the former state churches in Scandinavia have sought to reset their relations with the Indigenous population of Sápmi, the natural and cultural area of Sámi people. Today, the region is divided between Finland, Norway, Sweden and Russia.

    As a scholar of Sámi culture, and as a researcher of Nordic folklore and religion, I have studied the difficult, often painful, relations between Sámi and the various Nordic state churches.

    Church’s power

    For thousands of years, the Sámi population lived by hunting, fishing and reindeer husbandry along the northern edges of Scandinavia. The Sámi possessed their own languages and maintained distinctive spiritual traditions and healing practices, drawing on traditional ecological knowledge that they had accrued over countless generations. In times of crisis or uncertainty, for example, communities used ceremonial drums to communicate with the spirit world and divine the future.

    Conflicts emerged by the 13th century, however, as Christian realms expanded north. Christian clerics condemned Sámi spiritual traditions as “heathen devilry.”

    An 18th-century carving of a Sámi shaman with his drum.
    Beskrivelse over Finnmarkens Lapper, deres Tungemaal, Levemaade og forrige Afgudsdyrkelse/O. H. von Lode/Wikimedia Commons

    During the 16th-century Protestant Reformation, Scandinavian rulers shifted from Catholicism to Lutheranism. In addition to tending to the souls of their flocks, ministers were tasked with keeping track of the comings and goings of congregation members, collecting taxes, and administering justice for lesser crimes.

    They aimed to stamp out the spiritual practices that many Sámi continued to practice alongside Christianity. Church authorities arrested, fined and sometimes even executed practitioners, while confiscating sacred drums to be destroyed or sent to distant museums.

    The church’s ritual of confirmation, which marks the passage from adolescence into adulthood, also acquired legal status. Being confirmed required the ability to read and interpret the Bible and Martin Luther’s Catechism, a summary of the Lutheran Church’s beliefs. As the church became part of the state, people who had not received confirmation could not represent themselves in court, own land or even marry.

    Lake Pielpajarvi Wilderness Church, the oldest Sámi church still in use, in Inari Municipality, Lapland, Finland.
    VW PICS/Universal Images Group via Getty Images

    And where Luther had called for religious instruction to occur in one’s native language, most Nordic clergy provided catechesis only in the majority language, considering Sámi language and traditions impediments to true conversion.

    Assimilation efforts

    During the late 19th and early 20th centuries, the new “nation states” of Finland, Norway and Sweden emerged on the world stage. In each country, political leaders conflated what the ancient Greeks called the “demos” – members of a political nation – with an “ethnos,” a cultural group. In order to belong to the Finnish, Norwegian and Swedish political nations, political and cultural leaders of these new states asserted that it was necessary to belong to the majority linguistic and cultural community.

    Finland’s 1919 constitution made provision for Swedish, which is still used by about 5% of the population, as a national language alongside Finnish. However, the government accorded no such status to Sámi.

    Both state-run residential boarding schools and schools run by churches included Lutheranism as a subject and strove relentlessly to assimilate Sámi into the majority culture, language and worldview, teaching children to see their culture as backward and shameful. Some church and school authorities cooperated with pseudoscientific racial researchers measuring students’ heads and excavating Sámi graves.

    A ‘nomad school’ for Sámi children in Jukkasjarvi, Sweden, 250 miles north of the Arctic Circle, in 1956.
    John Firth/BIPs/Getty Images

    As a result, many students ceased to identify as Sámi and adopted the majority language as their primary mode of communication. Today, only about half the people who identify as Sámi have any facility in Sámi languages, which are considered endangered.

    After World War II, church attendance in all the Nordic countries began to plummet. Where 98% of the Finnish population belonged to the state church in 1900, by 2024 that percentage had dropped to 62%. The bulk of defections consisted of people who registered as having no religious affiliation. Membership in the national church shifted from compulsory to voluntary.

    Yet as anthropologist David Koester shows, some elements of Lutheran tradition remain extremely popular in all the Nordic countries, particularly confirmation. The ritual remains a key rite of passage for most Sámi today, yet many of them wrestle with whether they should remain faithful to a church that had worked to suppress their community’s language and culture.

    Reconciliation today

    Searching for a path forward, contemporary Sámi artist and Lutheran catechist Lars Levi Sunna began to produce church art that incorporated and celebrated pre-Christian Sámi symbols – some of the very traditions that had been demonized by clergy of the past.

    For example, in a church in the northern Swedish town of Jukkasjärvi, an image of the sun as it appeared on Sámi ceremonial drums now faces the altar, providing a vivid reminder of the spiritual history and past worldview of the church’s Sámi congregation. The symbol now encloses an image of a communion wafer carved of reindeer antler.

    In 2005, Sunna created a traveling art exhibit that portrayed Sámi Christianization as an act of cultural violence. The exhibit, designed for temporary installation in church sanctuaries, aimed to provoke discussion and encourage open dialogue about the past.

    Similarly, in 2008, Norwegian Sámi filmmaker Nils Gaup produced “Kautokeino Rebellion,” a film recounting clergy’s role in suppressing religious activism among followers of a Swedish Sámi minister, Lars Levi Laestadius. The so-called uprising in 1852 led to the imprisonment of several dozen Sámi and the execution of two men – whose skulls were deposited in a research institute and did not receive proper burial until 1997.

    Descended from one of the punished families, Gaup reminded his audience of past injustice shrouded in shame and silence.

    Since church attendance is infrequent in Nordic countries, art and film serve as important vehicles for raising awareness of the church’s past. In November 2021, the archbishop of Sweden, Antje Jackelén, issued a formal apology to the Sámi. Sámi artist and activist Anders Sunna was invited to temporarily redecorate the sanctuary of the Cathedral of Uppsala for the occasion. His decorations included reminders of past Sámi sacrificial traditions that took place both outdoors and around hearth fires. In place of a grand altar, Sunna erected a simple table, surrounded by an octagon of benches where the bishop and members of the Sámi community would sit face to face with a sense of equality and respect.

    As Sámi theologian Tore Johnsen notes, formal apologies are necessary first steps in a process of reconciliation. But only once they are followed by concrete acts of “restoration” can real reconciliation occur.

    When the Finnish archbishop apologized in May 2025, Sámi in attendance at the Turku Cathedral were appreciative, but they were eager to see what actions might follow, according to reporters at the ceremony. The same wait-and-see attitude characterizes Sámi responses to state-run Truth and Reconciliation processes, which occurred in Norway in 2023 and are currently ongoing in Sweden and Finland.

    The process of healing a society injured by colonialism is difficult and slow, requiring extensive discussion – much of it uncomfortable. With Luoma’s words of apology and the arrival of Sámi to listen and witness, an important step in that process occurred.

    Thomas A. DuBois does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Scandinavia has its own dark history of assimilating Indigenous people, and churches played a role – but are apologizing – https://theconversation.com/scandinavia-has-its-own-dark-history-of-assimilating-indigenous-people-and-churches-played-a-role-but-are-apologizing-255827


  • MIL-OSI Analysis: Jews were barred from Spain’s New World colonies − but that didn’t stop Jewish and converso writers from describing the Americas

    Source: The Conversation – USA – By Flora Cassen, Senior Faculty, Hartman Institute and Associate Professor of History and Jewish Studies, Washington University in St. Louis

    An auto-da-fé − a public punishment for heretics − in San Bartolome Otzolotepec, in present-day Mexico. Museo Nacional de Arte via Wikimedia Commons

    Every few years, a story about Columbus resurfaces: Was the Genoese navigator who claimed the Americas for Spain secretly Jewish, from a Spanish family fleeing the Inquisition?

    This tale became widespread around the late 19th century, when large numbers of Jews came from Russia and Eastern Europe to the United States. For these immigrants, 1492 held double significance: the year of Jews’ expulsion from Spain, as well as Columbus’ voyage of discovery. At a time when many Americans viewed the explorer as a hero, the idea that he might have been one of their own offered Jewish immigrants a link to the beginnings of their new country and the American story of freedom from Old World tyranny.

    The problem with the Columbus-was-a-Jew theory isn’t just that it’s based on flimsy evidence. It also distracts from the far more complex and true story of Spanish Jews in the Americas.

    In the 15th century, the kingdom’s Jews faced a wrenching choice: convert to Christianity or leave the land their families had called home for generations. Portugal’s Jews faced similar persecution. Whether they sought a new place to settle or stayed and hoped to be accepted as members of Christian society, both groups were searching for belonging.

    Jewish religious items at the Museo Metropolitano in Monterrey, Mexico.
    Thelmadatter/Wikimedia Commons, CC BY-SA

    We are scholars of Jewish history and have been working on the first English translations of two texts from the 16th century. “The Book of New India,” by Joseph Ha-Kohen, and the spiritual writings of Luis de Carvajal are two of the earliest Jewish texts about the Americas.

    The story of the New World is not complete without the voices of Jewish communities that engaged with it from the very beginning.

    Double consciousness

    The first Jews in the Americas were, in fact, not Jews but “conversos,” meaning “converts,” and their descendants.

    After a millennium of relatively peaceful and prosperous life on Iberian soil, the Jews of Spain were attacked by a wave of mob violence in the summer of 1391. Afterward, thousands of Jews were forcibly converted.

    Synagogue of El Tránsito, a 14th-century Jewish congregation in Toledo, Spain.
    Selbymay/Wikimedia Commons, CC BY-SA

    While conversos were officially members of the Catholic Church, neighbors looked at them with suspicion. Some of these converts were “crypto-Jews,” who secretly held on to their ancestral faith. Spanish authorities formed the Inquisition to root out anyone the church considered heretics, especially people who had converted from Judaism and Islam.

    In 1492, after conquering the last Muslim stronghold in Spain, monarchs Ferdinand and Isabella gave the remaining Spanish Jews the choice of conversion or exile. Eventually, people who converted from Islam would be expelled as well.

    Among Jews who converted, some sought new lives within the rapidly expanding Spanish empire. As the historian Jonathan Israel wrote, Jews and conversos were both “agents and victims of empire.” Their familiarity with Iberian language and culture, combined with the dispersion of their community, positioned them to participate in the new global economy: trade in sugar, textiles, spices – and the trade in human lives, Atlantic slavery.

    Yet conversos were also far more vulnerable than their compatriots: They could lose it all, even end up burned alive at the stake, because of their beliefs. This double consciousness – being part of the culture, yet apart from it – is what makes conversos vital to understanding the complexities of colonial Latin America.

    By the 17th century, once the Dutch and the English conquered parts of the Americas, Jews would be able to live there. Often, these were families whose ancestors had been expelled from the Iberian peninsula. In the first Spanish and Portuguese colonies, however, Jews were not allowed to openly practice their faith.

    Secret spirituality

    One of these conversos was Luis de Carvajal. His uncle, the similarly named Luis de Carvajal y de la Cueva, was a merchant, slave trader and conquistador. As a reward for his exploits he was named governor of the New Kingdom of León, in the northeast of modern-day Mexico. In 1579 he brought over a large group of relatives to help him settle and administer the rugged territory, which was made up of swamps, deserts and silver mines.

    A statue in Monterrey, Mexico, of Luis Carvajal y de la Cueva.
    Ricardo DelaG/Wikimedia Commons, CC BY-SA

    The uncle was a devout Catholic who attempted to shed his converso past, integrating himself into the landed gentry of Spain’s New World empire. Luis the younger, however, his potential heir, was a passionate crypto-Jew who spent his free time composing prayers to the God of Israel and secretly following the commandments of the Torah.

    When Luis and his family were arrested by the Inquisition in 1595, his book of spiritual writings was discovered and used as evidence of his secret Jewish life. Luis, his mother and sister were burned at the stake, but the small, leather-bound diary survived.

    A 19th-century depiction of the execution of Luis de Carvajal the Younger’s sister.
    ‘El Libro Rojo, 1520-1867’ via Wikimedia Commons

    Luis’ religious thought drew on a wide range of early modern Spanish culture. He used a Latin Bible and drew inspiration from the inwardly focused spirituality of Catholic thinkers such as Fray Luis de Granada, a Dominican theologian. He met with the hermit and mystic Gregorio López. He discovered passages from Maimonides and other rabbis quoted in the works of Catholic theologians whom he read at the famed monastery of Santiago de Tlatelolco, in Mexico City, where he worked as an assistant to the rector.

    His spiritual writings are deeply American: The wide deserts and furious hurricanes of Mexico were the setting of his spiritual awakenings, and his encounters with the people and cultures of the emerging Atlantic world shaped his religious vision. This little book is a unique example of the brilliant, creative culture that developed in the crossing from Old World to New, born out of the exchange and conflict between diverse cultures, languages and faiths.

    A glimpse of Luis de Carvajal’s spiritual writings, photographed in New York City.
    Ronnie Perelis

    More than translation

    Spanish Jews who refused to convert in 1492, meanwhile, had been forced into exile and barred from the kingdom’s colonies.

    The journey of Joseph Ha-Kohen’s family illustrates the hardships. After the expulsion, his parents moved to Avignon, the papal city in southern France, where Joseph was born in 1496. From there, they made their way to Genoa, the Italian merchant city, hoping to establish themselves. But it was not to be. The family was repeatedly expelled, permitted to return, and then expelled again.

    Despite these upheavals, Ha-Kohen became a doctor and a merchant, a leader in the Jewish community – earning the respect of the Christian community, too. Toward the end of his life, he settled in a small mountain town beyond the city’s borders and turned to writing.

    After a book on wars between Christianity and Islam, and another one on the history of the Jews, he began a new project. Ha-Kohen adapted “Historia General de las Indias,” an account of the Americas’ colonization by Spanish historian Francisco López de Gómara, reshaping the text for a Jewish audience.

    A 1733 edition of ‘Divrei Ha-Yamim,’ Ha-Kohen’s book about wars between Christian and Muslim cultures.
    John Carter Brown Library via Wikimedia Commons

    Ha-Kohen’s work was the first Hebrew-language book about the Americas. The text was hundreds of pages long – and he copied his entire manuscript nine times by hand. He had never seen the Americas, but his own life of repeated uprooting may have led him to wonder whether Jews would one day seek refuge there.

    Ha-Kohen wanted his readers to have access to the text’s geographical, botanical and anthropological information, but not to Spain’s triumphalist narrative. So he created an adapted, hybrid translation. The differences between versions reveal the complexities of being a European Jew in the age of exploration.

    Ha-Kohen omitted references to the Americas as Spanish territory and criticized the conquistadors for their brutality toward Indigenous peoples. At times, he compared Native Americans with the ancient Israelites of the Bible, feeling a kinship with them as fellow victims of oppression. Yet at other moments he expressed estrangement and even revulsion at Indigenous customs and described their religious practices as “darkness.”

    Translating these men’s writing is not just a matter of bringing a text from one language into another. It is also a deep reflection on the complex position of Jews and conversos in those years. Their unique vantage point offers a window into the intertwined histories of Europe, the Americas and the in-betweenness that marked the Jewish experience in the early modern world.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Jews were barred from Spain’s New World colonies − but that didn’t stop Jewish and converso writers from describing the Americas – https://theconversation.com/jews-were-barred-from-spains-new-world-colonies-but-that-didnt-stop-jewish-and-converso-writers-from-describing-the-americas-258278

    MIL OSI Analysis

  • MIL-OSI Analysis: Jews were barred from Spain’s New World colonies − but that didn’t stop Jewish and converso writers from describing the Americas

    Source: The Conversation – USA – By Flora Cassen, Senior Faculty, Hartman Institute and Associate Professor of History and Jewish Studies, Washington University in St. Louis

    An auto-da-fé − a public punishment for heretics − in San Bartolome Otzolotepec, in present-day Mexico. Museo Nacional de Arte via Wikimedia Commons

    Every few years, a story about Columbus resurfaces: Was the Genoese navigator who claimed the Americas for Spain secretly Jewish, from a Spanish family fleeing the Inquisition?

    This tale became widespread around the late 19th century, when large numbers of Jews came from Russia and Eastern Europe to the United States. For these immigrants, 1492 held double significance: the year of Jews’ expulsion from Spain, as well as Columbus’ voyage of discovery. At a time when many Americans viewed the explorer as a hero, the idea that he might have been one of their own offered Jewish immigrants a link to the beginnings of their new country and the American story of freedom from Old World tyranny.

    The problem with the Columbus-was-a-Jew theory isn’t just that it’s based on flimsy evidence. It also distracts from the far more complex and true story of Spanish Jews in the Americas.

    In the 15th century, the kingdom’s Jews faced a wrenching choice: convert to Christianity or leave the land their families had called home for generations. Portugal’s Jews faced similar persecution. Whether they sought a new place to settle or stayed and hoped to be accepted as members of Christian society, both groups were searching for belonging.

    Jewish religious items at the Museo Metropolitano in Monterrey, Mexico.
    Thelmadatter/Wikimedia Commons, CC BY-SA

    We are scholars of Jewish history and have been working on the first English translations of two texts from the 16th century. “The Book of New India,” by Joseph Ha-Kohen, and the spiritual writings of Luis de Carvajal are two of the earliest Jewish texts about the Americas.

    The story of the New World is not complete without the voices of Jewish communities that engaged with it from the very beginning.

    Double consciousness

    The first Jews in the Americas were, in fact, not Jews but “conversos,” meaning “converts,” and their descendants.

    After a millennium of relatively peaceful and prosperous life on Iberian soil, the Jews of Spain were attacked by a wave of mob violence in the summer of 1391. Afterward, thousands of Jews were forcibly converted.

    Synagogue of El Tránsito, a 14th-century Jewish congregation in Toledo, Spain.
    Selbymay/Wikimedia Commons, CC BY-SA

    While conversos were officially members of the Catholic Church, neighbors looked at them with suspicion. Some of these converts were “crypto-Jews,” who secretly held on to their ancestral faith. Spanish authorities formed the Inquisition to root out anyone the church considered heretics, especially people who had converted from Judaism and Islam.

    In 1492, after conquering the last Muslim stronghold in Spain, monarchs Ferdinand and Isabella gave the remaining Spanish Jews the choice of conversion or exile. Eventually, people who converted from Islam would be expelled as well.

    Among Jews who converted, some sought new lives within the rapidly expanding Spanish empire. As the historian Jonathan Israel wrote, Jews and conversos were both “agents and victims of empire.” Their familiarity with Iberian language and culture, combined with the dispersion of their community, positioned them to participate in the new global economy: trade in sugar, textiles, spices – and the trade in human lives, Atlantic slavery.

    Yet conversos were also far more vulnerable than their compatriots: They could lose it all, even end up burned alive at the stake, because of their beliefs. This double consciousness – being part of the culture, yet apart from it – is what makes conversos vital to understanding the complexities of colonial Latin America.

    By the 17th century, after the Dutch and the English had conquered parts of the Americas, Jews were able to live there. Often, these were families whose ancestors had been expelled from the Iberian peninsula. In the first Spanish and Portuguese colonies, however, Jews were not allowed to practice their faith openly.

    Secret spirituality

    One of these conversos was Luis de Carvajal. His uncle, the similarly named Luis de Carvajal y de la Cueva, was a merchant, slave trader and conquistador. As a reward for his exploits he was named governor of the New Kingdom of León, in the northeast of modern-day Mexico. In 1579 he brought over a large group of relatives to help him settle and administer the rugged territory, which was made up of swamps, deserts and silver mines.

    A statue in Monterrey, Mexico, of Luis Carvajal y de la Cueva.
    Ricardo DelaG/Wikimedia Commons, CC BY-SA

    The uncle was a devout Catholic who attempted to shed his converso past, integrating himself into the landed gentry of Spain’s New World empire. His nephew and potential heir, the younger Luis, however, was a passionate crypto-Jew who spent his free time composing prayers to the God of Israel and secretly following the commandments of the Torah.

    When Luis and his family were arrested by the Inquisition in 1595, his book of spiritual writings was discovered and used as evidence of his secret Jewish life. Luis, his mother and sister were burned at the stake, but the small, leather-bound diary survived.

    A 19th-century depiction of the execution of Luis de Carvajal the Younger’s sister.
    ‘El Libro Rojo, 1520-1867’ via Wikimedia Commons

    Luis’ religious thought drew on a wide range of early modern Spanish culture. He used a Latin Bible and drew inspiration from the inwardly focused spirituality of Catholic thinkers such as Fray Luis de Granada, a Dominican theologian. He met with the hermit and mystic Gregorio López. He discovered passages from Maimonides and other rabbis quoted in the works of Catholic theologians whom he read at the famed monastery of Santiago de Tlatelolco, in Mexico City, where he worked as an assistant to the rector.

    His spiritual writings are deeply American: The wide deserts and furious hurricanes of Mexico were the setting of his spiritual awakenings, and his encounters with the people and cultures of the emerging Atlantic world shaped his religious vision. This little book is a unique example of the brilliant, creative culture that developed in the crossing from Old World to New, born out of the exchange and conflict between diverse cultures, languages and faiths.

    A glimpse of Luis de Carvajal’s spiritual writings, photographed in New York City.
    Ronnie Perelis

    More than translation

    Spanish Jews who refused to convert in 1492, meanwhile, had been forced into exile and barred from the kingdom’s colonies.

    The journey of Joseph Ha-Kohen’s family illustrates the hardships. After the expulsion, his parents moved to Avignon, the papal city in southern France, where Joseph was born in 1496. From there, they made their way to Genoa, the Italian merchant city, hoping to establish themselves. But it was not to be. The family was repeatedly expelled, permitted to return, and then expelled again.

    Despite these upheavals, Ha-Kohen became a doctor and a merchant, a leader in the Jewish community – earning the respect of the Christian community, too. Toward the end of his life, he settled in a small mountain town beyond the city’s borders and turned to writing.

    After a book on wars between Christianity and Islam, and another one on the history of the Jews, he began a new project. Ha-Kohen adapted “Historia General de las Indias,” an account of the Americas’ colonization by Spanish historian Francisco López de Gómara, reshaping the text for a Jewish audience.

    A 1733 edition of ‘Divrei Ha-Yamim,’ Ha-Kohen’s book about wars between Christian and Muslim cultures.
    John Carter Brown Library via Wikimedia Commons

    Ha-Kohen’s work was the first Hebrew-language book about the Americas. The text was hundreds of pages long – and he copied his entire manuscript nine times by hand. He had never seen the Americas, but his own life of repeated uprooting may have led him to wonder whether Jews would one day seek refuge there.

    Ha-Kohen wanted his readers to have access to the text’s geographical, botanical and anthropological information, but not to Spain’s triumphalist narrative. So he created an adapted, hybrid translation. The differences between versions reveal the complexities of being a European Jew in the age of exploration.

    Ha-Kohen omitted references to the Americas as Spanish territory and criticized the conquistadors for their brutality toward Indigenous peoples. At times, he compared Native Americans with the ancient Israelites of the Bible, feeling a kinship with them as fellow victims of oppression. Yet at other moments he expressed estrangement and even revulsion at Indigenous customs and described their religious practices as “darkness.”

    Translating these men’s writing is not just a matter of bringing a text from one language into another. It is also a deep reflection on the complex position of Jews and conversos in those years. Their unique vantage point offers a window into the intertwined histories of Europe, the Americas and the in-betweenness that marked the Jewish experience in the early modern world.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Jews were barred from Spain’s New World colonies − but that didn’t stop Jewish and converso writers from describing the Americas – https://theconversation.com/jews-were-barred-from-spains-new-world-colonies-but-that-didnt-stop-jewish-and-converso-writers-from-describing-the-americas-258278

    MIL OSI Analysis

  • MIL-OSI Analysis: Why energy markets fluctuate during an international crisis

    Source: The Conversation – USA – By Skip York, Nonresident Fellow in Energy and Global Oil, Baker Institute for Public Policy, Rice University

    Stock and commodities traders found themselves dealing with various price swings as energy markets responded to Israeli and U.S. attacks on Iran. Timothy A. Clary/AFP via Getty Images

    Global energy markets, such as those for oil, gas and coal, tend to be sensitive to a wide range of world events – especially when there is some sort of crisis. Having worked in the energy industry for over 30 years, I’ve seen how war, political instability, pandemics and economic sanctions can significantly disrupt energy markets and impede them from functioning efficiently.

    A look at the basics

    First, consider the economic fundamentals of supply and demand. The risk most people imagine in the current crisis between Israel, the U.S. and Iran is that Iran, which is itself a major oil-producing country, might suddenly expand the conflict by threatening the ability of neighboring countries to supply oil to the world.

    Oil wells, refineries, pipelines and shipping lanes are the backbone of energy markets. They can be vulnerable during a crisis: Whether there is deliberate sabotage or collateral damage from military action, energy infrastructure often takes a hit.

    For instance, after Saddam Hussein invaded Kuwait in August 1990, Iraqi forces placed explosive charges on Kuwaiti oil wells and began detonating them in January 1991. It took months for all the resulting fires to be put out, and millions of barrels of oil and hundreds of millions of cubic meters of natural gas were released into the environment – rather than being sold and used productively somewhere around the world.

    Scenes of Kuwaiti life during and after the Gulf War of 1990 and 1991 include images of oil wells burning as a result of Iraqi sabotage.

    Logistics can mess markets up too. For instance, closing critical maritime routes like the Strait of Hormuz or the Suez Canal can cause transportation delays.

    Whether supply is lost from decreased production or blocked transportation routes, the effect is less oil available to the market, which not only causes prices to rise in general but also makes them more volatile – tending to change more frequently and by larger amounts.

    On the flip side, demand can also shift radically. During the 1990-1991 Gulf War, demand rose: U.S. forces alone used more than 2 billion gallons of fuel, according to an Army analysis. By contrast, during the COVID-19 pandemic, industries shut down, travel came to a halt and energy demand plummeted.

    When crisis looms, countries and companies often start stockpiling oil and other raw materials rather than buying only what they need right now. That creates even more imbalance, resulting in price volatility that leaves everyone, both consumers and producers, with a headache.

    Regional considerations

    In addition to uncertainties around market fundamentals, it’s important to note that many of the world’s energy reserves are located in regions that have not been models of stability. In the Middle East, wars, revolutions and diplomatic disputes can raise concerns about supply, demand or both.

    Those worries send shock waves through the world’s energy markets. It’s like walking on a tightrope: One wrong move – or even the perception of a misstep – can make the market wobble.

    Governments’ economic sanctions, such as those restricting trade with Iran, Russia or Venezuela, can distort production and investment decisions and disrupt trade flows. Sometimes markets react even before sanctions are officially in place: Just the rumor of a possible embargo can cause prices to spike as buyers scramble to secure resources.

    In 2008, for example, India and Vietnam imposed rice export bans, and rumors of additional restrictions fueled panic buying and nearly doubled prices within months.

    In those scrambles, investor speculation enters the picture. Energy commodities, such as oil and gas, aren’t just physical resources; they’re also traded as financial assets like stocks and bonds. During uncertain times, traders don’t wait around for actual changes in supply and demand. They react to news and forecasts, sometimes in large groups, and those fear- or hope-driven actions alone can move the market.

    The events on June 22, 2025, are a good example of how this dynamic works. The Iranian parliament passed a resolution authorizing the country’s Supreme Council to close the Strait of Hormuz. Immediately, oil prices started rising, even though the strait was still open, with oil tankers steaming through unimpeded.

    The next day, Iran launched a missile strike on Qatar, but coordinated in advance with Qatari officials to minimize damage and casualties. Traders and analysts perceived the action as a de-escalatory signal and anticipated that the Supreme Council was not going to close the strait. So prices started to fall.

    It was a price roller coaster, fueled by speculation rather than reality. And computer algorithms and artificial intelligence, which assist in making automated trades, only add to the chaos of price changes.

    Shipping activity in the Persian Gulf and the Strait of Hormuz decreased after Israel’s attacks on Iranian nuclear facilities.

    A broader look

    International crises can also cause wider changes in countries’ economies – or the global economy as a whole – which in turn affect the energy market.

    If a crisis sparks a recession, rising inflation or high unemployment, those tend to cause people and businesses to use less energy. When the underlying situation stabilizes, recovery efforts can mean energy consumption resumes. But it’s like a pendulum swinging back and forth, with energy markets caught in the middle.

    Renewable energy is not immune to international crisis and chaos. The supply is less affected by market forces: The amount of available sunlight and wind isn’t tied to geopolitical relations. But overall economic conditions still affect demand, and a crisis can disrupt the supply chains for the equipment needed to harness renewable energy, like solar panels and wind turbines.

    It’s no wonder energy markets are so jittery during international crises. A mix of imbalances between supply and demand, vulnerable infrastructure, political tensions, corporate worries and speculative trading all weave together into a complex web of volatility.

    For policymakers, investors and consumers, understanding these dynamics is key to navigating the ups and downs of energy markets in a crisis-prone world. The solutions aren’t simple, but being informed is the first step toward stability.

    Skip York is a nonresident fellow for Global Oil and Energy with the Center for Energy Studies at Rice University’s Baker Institute for Public Policy. He also is the Chief Energy Strategist at Turner Mason & Company, an energy consulting firm.

    ref. Why energy markets fluctuate during an international crisis – https://theconversation.com/why-energy-markets-fluctuate-during-an-international-crisis-259839


  • MIL-OSI Analysis: What Trump’s budget proposal says about his environmental values

    Source: The Conversation – USA – By Stan Meiburg, Executive Director, Sabin Center for Environment and Sustainability, Wake Forest University

    The president’s spending proposal doesn’t leave much behind. Alexey Kravchuk/iStock / Getty Images Plus

    To understand the federal government’s true priorities, follow the money.

    After months of saying his administration is committed to clean air and water for Americans, President Donald Trump has proposed a detailed budget for the U.S. Environmental Protection Agency for fiscal year 2026. The proposal is more consistent with his administration’s numerous recent actions and announcements that reduce protection for public health and the environment.

    To us, former EPA leaders – one a longtime career employee and the other a political appointee – the budget proposal reveals a lot about what Trump and EPA Administrator Lee Zeldin want to accomplish.

    According to the administration’s Budget in Brief document, total EPA funding for the fiscal year beginning October 2025 would drop from US$9.14 billion to $4.16 billion – a 54% decrease from the budget enacted by Congress for fiscal 2025 and less than half of EPA’s budget in any year of the first Trump administration.

    Without taking inflation into account, this would be the smallest EPA budget since 1986. Adjusted for inflation, it would be the smallest budget since the Ford administration, even though Congress has for decades given EPA more responsibility to clean up and protect the nation’s air and water; handle hazardous chemicals and waste; protect drinking water; clean up environmental contamination; and evaluate the safety of a wide range of chemicals used in commerce and industry. These expansions reflected a bipartisan consensus that protecting public health and the environment is a national priority.

    The budget process in brief

    Federal budgeting is complicated, and EPA’s budget is particularly so. Here are some basics:

    Each year, the president and Congress determine how much money will be spent on what things, and by which agencies. The familiar aphorism that “the president proposes, Congress disposes” captures the Constitution’s process for the federal budget, with Congress firmly holding the “power of the purse.”

    EPA’s budget can be difficult to understand because individual programs may be funded from different sources. It is useful to consider it as a pie sliced into five main pieces:

    • Environmental programs and management: the day-to-day work of protecting air, water and land.
    • Science and technology: research on pollution, health effects and new environmental tools.
    • Superfund and trust funds: cleaning up contaminated sites and responding to emergency releases of pollution.
    • State and Tribal operating grants: supporting local implementation of environmental laws.
    • State capitalization grants: revolving loans for water infrastructure.

    The Trump administration’s budget proposals for EPA represent a striking retreat from the national goals of clean air and clean water enacted in federal laws over the past 55 years. In the budget document, the administration argues that the federal government has done enough and that the protection of gains already achieved, as well as any further progress, should not be paid for with federal money.

    This budget would reduce EPA’s ability to protect public health and the environment to a bare minimum at best. Most dramatic and, in our view, most significant are the elimination of operating grants to state governments, drastic reductions in funding for science of all kinds, and elimination of EPA programs relating to climate change and environmental justice, which addresses situations of disproportionate environmental harm to vulnerable populations. It would cut regulatory and enforcement activities that the administration sees as inconsistent with fossil energy development. Other proposed changes, notably for Superfund and capitalization grants, are more nuanced.

    These changes to EPA’s regular budget allocation are separate from changes to supplementary EPA funding that have also been in the news, including for projects specified in the Inflation Reduction Act and other specific laws.

    Environmental programs and management

    Funding for basic work to protect the environment and prevent pollution would be cut by 22%. The reductions are not spread equally, however. All activities related to climate change would be eliminated, including the Energy Star program and greenhouse gas reporting and tracking. Funding for civil and criminal enforcement of environmental laws and regulations would be cut by 69% and 50%, respectively.

    The popular Brownfields program would be cut by 50%. Since 1995, $2.9 billion in federal funds have produced public and private investments totaling $42 billion for cleaning and redeveloping contaminated sites, and created more than 200,000 jobs.

    A program to set standards and conduct training for safe removal of lead paint and other lead-containing materials from homes and businesses would be eliminated.

    The administration has been clear that EPA will no longer do environmental justice work, such as funding to monitor toxic air emissions in low-income neighborhoods adjacent to industrial areas. This budget is consistent with that.

    Science and technology

    Scientific support functions would be cut by 34%. The Office of Research and Development would shrink from about 1,500 staff to about 500, and its remaining staff would be redistributed throughout the agency. This would diminish science that supports not just EPA’s work but also that of the organizations, industries, health care professionals and public and private researchers who benefit from EPA’s research.

    A former uranium mill in Colorado is just one of the nation’s extremely contaminated Superfund sites awaiting federal money for cleanup.
    RJ Sangosti/MediaNews Group/The Denver Post via Getty Images

    Superfund and other trust funds

    Superfund is by far the largest of EPA’s cleanup trust funds. It forces the parties responsible for contamination either to perform cleanups themselves or to reimburse the government for EPA-led cleanup work. When there is no viable responsible party, it gives EPA the funds and authority to clean up contaminated sites directly.

    Prior to 2021, Superfund was funded through EPA’s annual budget. In 2021 and 2022, Congress restored taxes on selected chemicals and petroleum products to help pay for Superfund. During the Biden administration, EPA reduced the Superfund’s line in the general budget, with the expectation that the Superfund tax revenues would more than make up for the reduction. Administrator Zeldin, who has said that site cleanup is a priority, is proposing to shift virtually all funding for cleanups to these new tax revenues.

    There is risk in this approach, however. The Superfund tax expires in 2031 and has raised less than Treasury Department predictions in both 2023 and 2024. In fiscal year 2024, available tax receipts were predicted to be $2.5 billion, but only $1.4 billion was collected. Future funding is uncertain because it depends on the amounts of various chemicals that companies actually use. Experts disagree on whether this is significant for the Superfund program. The petrochemical industry, on whom this tax largely falls, is lobbying for its repeal.

    Funds to address leaks at gas station tanks would be cut nearly in half. Funds to clean up oil and petroleum spills would be cut by 24%.

    State operating grants

    The budget proposal seeks to reset the EPA’s relationship with state agencies, which implement the vast majority of environmental regulations.

    EPA has long delegated some of its powers to state environmental agencies, including permitting, inspections and enforcement of regulations that govern air, water and soil pollution. Since the 1970s, EPA has helped fund those activities through basic operating grants that require minimum state contributions and reward larger state investments with additional federal dollars.

    The proposed budget would eliminate all of those grants to states – totaling $1 billion. The document itself explains that federal funding over decades has totaled “hundreds of billions of dollars” and has resulted in programs that “are mature or have accomplished their purpose.”

    States disagree. They note that EPA has delegated 90% of the nation’s environmental protection work to state authorities, and states have accepted that workload based on the expectation of federal funding. The states say reduced funding would greatly diminish the actual work of environmental protection – site inspections, air and water monitoring, and enforcement – across the country.

    State capitalization grants

    Since 1987, EPA has given states money for revolving loan programs that provide low-interest loans to state and local governments to clean up waterways and provide safe drinking water. The proposed budget would cut that funding by 89%, from $2.8 billion to $305 million.

    These capitalization grants were originally envisioned as seed money, with future loans available as the initial and subsequent loans were repaid. But the need for water infrastructure continues to grow, and Congress has for many years allocated additional money to the program.

    In protecting the environment, you get what you pay for. In past years, Congress has refused to accept proposed drastic cuts to EPA’s budget. It remains to be seen whether this Congress will go along with these proposed rollbacks.

    Stan Meiburg is a volunteer with the Environmental Protection Network. He was an employee of the Environmental Protection Agency from 1977 to 2017.

    I have worked at the U.S. EPA twice. During the Obama Administration, I was first principal deputy to the Assistant Administrator of the Office of Air and Radiation and then Acting Assistant Administrator. During the Biden Administration, I was Deputy Administrator. I am also a volunteer with the Environmental Protection Network.

    ref. What Trump’s budget proposal says about his environmental values – https://theconversation.com/what-trumps-budget-proposal-says-about-his-environmental-values-258962


  • MIL-OSI Analysis: Cyberattacks shake voters’ trust in elections, regardless of party

    Source: The Conversation – USA – By Ryan Shandler, Professor of Cybersecurity and International Relations, Georgia Institute of Technology

    An election worker installs a touchscreen voting machine. Ethan Miller/Getty Images

    American democracy runs on trust, and that trust is cracking.

    Nearly half of Americans, both Democrats and Republicans, question whether elections are conducted fairly. Some voters accept election results only when their side wins. The problem isn’t just political polarization – it’s a creeping erosion of trust in the machinery of democracy itself.

    Commentators blame ideological tribalism, misinformation campaigns and partisan echo chambers for this crisis of trust. But these explanations miss a critical piece of the puzzle: a growing unease with the digital infrastructure that now underpins nearly every aspect of how Americans vote.

    The digital transformation of American elections has been swift and sweeping. Just two decades ago, most people voted using mechanical levers or punch cards. Today, over 95% of ballots are counted electronically. Digital systems have replaced poll books, taken over voter identity verification and become integrated into registration, counting, auditing and voting systems.

    This technological leap has made voting more accessible and efficient, and sometimes more secure. But these new systems are also more complex. And that complexity plays into the hands of those looking to undermine democracy.

    In recent years, authoritarian regimes have refined a chillingly effective strategy to chip away at Americans’ faith in democracy by relentlessly sowing doubt about the tools U.S. states use to conduct elections. It’s a sustained campaign to fracture civic faith and make Americans believe that democracy is rigged, especially when their side loses.

    This is not cyberwar in the traditional sense. There’s no evidence that anyone has managed to break into voting machines and alter votes. But cyberattacks on election systems don’t need to succeed to have an effect. Even a single failed intrusion, magnified by sensational headlines and political echo chambers, is enough to shake public trust. By feeding into existing anxiety about the complexity and opacity of digital systems, adversaries create fertile ground for disinformation and conspiracy theories.

    Just before the 2024 presidential election, Director of the Cybersecurity and Infrastructure Security Agency Jen Easterly explains how foreign influence campaigns erode trust in U.S. elections.

    Testing cyber fears

    To test this dynamic, we launched a study to uncover precisely how cyberattacks corroded trust in the vote during the 2024 U.S. presidential race. We surveyed more than 3,000 voters before and after election day, exposing them to a series of fictional but highly realistic breaking news reports depicting cyberattacks against critical infrastructure. We randomly assigned participants to watch different types of news reports: some depicting cyberattacks on election systems, others on unrelated infrastructure such as the power grid, and a third, neutral control group.

    The results, which are under peer review, were both striking and sobering. Mere exposure to reports of cyberattacks undermined trust in the electoral process – regardless of partisanship. Voters who supported the losing candidate experienced the greatest drop in trust, with two-thirds of Democratic voters showing heightened skepticism toward the election results.

    But winners too showed diminished confidence. Even though most Republican voters, buoyed by their victory, accepted the overall security of the election, the majority of those who viewed news reports about cyberattacks remained suspicious.

    The attacks didn’t even have to be related to the election. Even cyberattacks against critical infrastructure such as utilities had spillover effects. Voters seemed to extrapolate: “If the power grid can be hacked, why should I believe that voting machines are secure?”

    Strikingly, voters who used digital machines to cast their ballots were the most rattled. For this group, belief in the accuracy of the vote count fell by nearly twice as much as it did among voters who cast their ballots by mail without using any technology. Their firsthand experience with the sorts of systems being portrayed as vulnerable personalized the threat.

    It’s not hard to see why. When you’ve just used a touchscreen to vote, and then you see a news report about a digital system being breached, the leap in logic isn’t far.

    Our data suggests that in a digital society, perceptions of trust – and distrust – are fluid, contagious and easily activated. The cyber domain isn’t just about networks and code. It’s also about emotions: fear, vulnerability and uncertainty.

    Firewall of trust

    Does this mean we should scrap electronic voting machines? Not necessarily.

    Every election system, digital or analog, has flaws. And in many respects, today’s high-tech systems, paired with voter-verifiable paper ballots, have solved the problems of the past. Modern voting machines reduce human error, increase accessibility and speed up the vote count. No one misses the hanging chads of 2000.

    But technology, no matter how advanced, cannot confer legitimacy on its own. It must be paired with something harder to code: public trust. In an environment where foreign adversaries amplify every flaw, cyberattacks can trigger spirals of suspicion. It is no longer enough for elections to be secure – voters must also perceive them to be secure.

    That’s why public education surrounding elections is now as vital to election security as firewalls and encrypted networks. It’s vital that voters understand how elections are run, how they’re protected and how failures are caught and corrected. Election officials, civil society groups and researchers can teach how audits work, host open-source verification demonstrations and ensure that high-tech electoral processes are comprehensible to voters.

    We believe this is an essential investment in democratic resilience. But it needs to be proactive, not reactive. By the time the doubt takes hold, it’s already too late.

    Just as crucially, we are convinced that it’s time to rethink the very nature of cyber threats. People often imagine them in military terms. But that framework misses the true power of these threats. The danger of cyberattacks is not only that they can destroy infrastructure or steal classified secrets, but that they chip away at societal cohesion, sow anxiety and fray citizens’ confidence in democratic institutions. These attacks erode the very idea of truth itself by making people doubt that anything can be trusted.

    If trust is the target, then we believe that elected officials should start to treat trust as a national asset: something to be built, renewed and defended. Because in the end, elections aren’t just about votes being counted – they’re about people believing that those votes count.

    And in that belief lies the true firewall of democracy.

    Anthony DeMattee receives funding from the National Science Foundation and various academic institutions. He is the Data Scientist in the Democracy Program at The Carter Center.

    Bruce Schneier and Ryan Shandler do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.

    ref. Cyberattacks shake voters’ trust in elections, regardless of party – https://theconversation.com/cyberattacks-shake-voters-trust-in-elections-regardless-of-party-259368


  • MIL-OSI Analysis: How Zohran Mamdani’s win in the New York City mayoral primary could ripple across the country

    Source: The Conversation – USA – By Lincoln Mitchell, Lecturer, School of International and Public Affairs, Columbia University

    New York mayoral candidate Zohran Mamdani speaks to supporters in Brooklyn on May 4, 2025. Madison Swart/Hans Lucas/AFP via Getty Images

    Top Republicans and Democrats alike are talking about the sudden rise of 33-year-old Zohran Mamdani, a state representative who won the Democratic mayoral primary in New York on June 24, 2025, in a surprising victory over more established politicians.

    While President Donald Trump quickly came out swinging with personal attacks against Mamdani, some establishment Democratic politicians say they are concerned about how the democratic socialist’s progressive politics could harm the broader Democratic Party and cause it to lose more centrist voters.

    New York is a unique American city, with a diverse population and historically liberal politics. So, does a primary mayoral election in New York serve as any kind of harbinger of what could come in the rest of the country?

    Amy Lieberman, a politics and society editor at The Conversation U.S., spoke with Lincoln Mitchell, a political strategy and campaign specialist who lectures at Columbia University, to understand what Mamdani’s primary win might indicate about the direction of national politics.

    New York mayoral candidate Zohran Mamdani, center, greets voters with New York Comptroller Brad Lander, right, on the Upper West Side on June 24, 2025.
    Michael M. Santiago/Getty Images

    Does Mamdani’s primary win offer any indication of how the Democratic Party might be transforming on a national level?

    Mamdani’s win is clearly a rebuke of the more corporate wing of the Democratic Party. I know there are people who say that New York is different from the rest of the country. But from a political perspective, Democrats in New York are less different from Democrats in the rest of the country than they used to be.

    That’s because the rest of America is so much more diverse than it used to be. But if you look at progressive politicians now in the House of Representatives and state legislatures, they are being elected from all over – not just in big cities like New York anymore.

    Andrew Cuomo, the former governor of New York, ran an absolutely terrible mayoral campaign. He tried to build a political coalition that is no longer a winning one, which was made up of majorities of African Americans, outer-borough white New Yorkers and Orthodox and Conservative Jews. Thirty or 40 years ago, that was a powerful coalition. Today, it could not make up a majority.

    Mamdani visualized and created what a 2025 progressive coalition looks like in New York and recognized that it is going to look different than the past. Mamdani’s coalition was based around young, white people – many of them with college degrees who are worried about affordability – ideological lefties and immigrants from parts of the Global South, including the Caribbean and parts of Africa, South Asia and South America.

    When you say a new kind of political coalition, what policy priorities bring Mamdani’s supporters together?

    Mamdani reframed what I would call redistributive economic policies that have long been central to the progressive agenda. A pillar of his campaign is affordability – a brilliant piece of political marketing, because who is against affordability? He came up with some affordability-related policies that got enough buzz, like promising free buses. Free buses are great, but they won’t help most working and poor New Yorkers get to work – they take the subway.

    He has been very critical of Israel and has weathered charges of antisemitism.

    In the older New York, progressive politicians such as the late Congressman Charlie Rangel were very hawkish on Israel.

    What Mamdani understood is that in today’s America, the progressive wing of the Democratic Party does not care if somebody is, sounds like or comes close to being antisemitic. For those people, calling someone antisemitic sounds Trumpy, and they understand it as a right-wing hit rather than the legitimate expression of concerns from Jewish people. Some liberals think that claims of antisemitism are simply a tactic used by those on the right to damage or discredit progressive politicians, but antisemitism is real.

    Therefore, Mamdani’s record on the Jewish issue did not hurt him in the campaign, but he needs to build bridges to Jewish voters, or he will not be able to govern New York City.

    How else did Mamdani appeal to a base of supporters?

    He got the support of “limousine liberals” – including rich, high-profile, progressive people. His supporters include Ella Emhoff, a model and the stepdaughter of Kamala Harris, and the actress Cynthia Nixon, but there were many others. Supporting Mamdani became stylish – almost de rigueur – among certain segments of affluent New York.

    Mamdani is also a true New Yorker and the voice of a new kind of immigrant. His parents are from Uganda and India. But he is also the child of extreme privilege – his mother, Mira Nair, is a well-known filmmaker, and his father is an accomplished professor. Mamdani went to top schools in New York and knows how to play in elite circles, and with white people. He is a Muslim man whose roots are in the Global South, yet he is not threatening to those circles because he knows how to speak their language.

    But to people of color and immigrants, Mamdani is also one of them. Because of Mamdani’s interesting background, he brought the limousine liberals together with the aunties from Bangladesh.

    Finally, on the charisma scale, Mamdani was so far ahead of other Democratic candidates. Who is going to make better TikTok videos – the good-looking, young man whose mother is a world-famous movie producer, or the older guy who is a loving father and husband but gives off dependable dad, rather than hip young guy, vibes?

    People arrive to vote in the New York mayoral primary in Brooklyn on June 24, 2025.
    Spencer Platt/Getty Images

    Is New York City so distinct that you cannot compare politics there to what happens nationwide?

    I think that nationwide, or at the state level, there is a potential for something similar to a Mamdani coalition, but not a Mamdani coalition exactly. Even in a place like Oklahoma, there are people who are in bad economic shape and who will also respond positively to an affordability-focused, Democratic political campaign. Mamdani remade a progressive New York coalition for this moment. Other progressive politicians should copy the spirit of that and reimagine a winning coalition in their city, state or district.

    When Trump was campaigning, he focused, at least in part, on making groceries cheaper. Mamdani is one of the few Democrats who took the affordability issue back from Trump and addressed it head on and in a much more honest and relevant way. Trump has the phrase, “Make America Great Again!” That’s a popular slogan on baseball caps for Trump supporters.

    If Mamdani wanted to make a baseball cap, he could just print “Affordability” on it. Boom.

    Other Democratic politicians can take that approach of affordability and reframe it in a way that works in Kansas City or elsewhere.

    Lincoln Mitchell supported Brad Lander in the primary election.

    ref. How Zohran Mamdani’s win in the New York City mayoral primary could ripple across the country – https://theconversation.com/how-zohran-mamdanis-win-in-the-new-york-city-mayoral-primary-could-ripple-across-the-country-259951


  • MIL-OSI Analysis: Cascading disasters like those created by Hurricane Helene show why hazard models can’t rely on the past

    Source: The Conversation – USA – By Brian J. Yanites, Associate Professor of Earth and Atmospheric Science and Professor of Surficial and Sedimentary Geology, Indiana University

    The Carter Lodge hangs precariously over the flood-scoured bank of the Broad River in Chimney Rock Village, N.C., on May 13, 2025, eight months after Hurricane Helene. AP Photo/Allen G. Breed

    Hurricane Helene lasted only a few days in September 2024, but it altered the landscape of the Southeastern U.S. in profound ways that will affect the hazards local residents face far into the future.

    Mudslides buried roads and reshaped river channels. Uprooted trees left soil on hillslopes exposed to the elements. Sediment that washed into rivers changed how water flows through the landscape, leaving some areas more prone to flooding and erosion.

    Helene was a powerful reminder that natural hazards don’t disappear when the skies clear – they evolve.

    These transformations are part of what scientists call cascading hazards. They occur when one natural event alters the landscape in ways that lead to future hazards. A landslide triggered by a storm might clog a river, leading to downstream flooding months or years later. A wildfire can alter the soil and vegetation, setting the stage for debris flows with the next rainstorm.

    Satellite images before (top) and after Hurricane Helene (bottom) show how the storm altered the landscape near Pensacola, N.C., in the Blue Ridge Mountains.
    Google Earth, CC BY

    I study these disasters as a geomorphologist. In a new paper in the journal Science, I and a team of scientists from 18 universities and the U.S. Geological Survey explain why hazard models – used to help communities prepare for disasters – can’t just rely on the past. Instead, they need to be nimble enough to forecast how hazards evolve in real time.

    The science behind cascading hazards

    Cascading hazards aren’t random. They emerge from physical processes that operate continuously across the landscape – sediment movement, weathering, erosion. Together, the atmosphere, biosphere and the earth are constantly reshaping the conditions that cause natural disasters.

    For instance, earthquakes fracture rock and shake loose soil. Even if landslides don’t occur during the quake itself, the ground may be weakened, leaving it primed for failure during later rainstorms.

    That’s exactly what happened after the 2008 earthquake in Sichuan Province, China, which led to a surge in debris flows long after the initial seismic event.

    A strong aftershock after a 7.8 magnitude earthquake in Sichuan province, China, in May 2008 triggered more landslides in central China.
    AP Photo/Andy Wong

    Earth’s surface retains a “memory” of these events. Sediment disturbed in an earthquake, wildfire or severe storm will move downslope over years or even decades, reshaping the landscape as it goes.

    The 1950 Assam earthquake in India is a striking example: It triggered thousands of landslides. The sediment from these landslides gradually moved through the river system, eventually causing flooding and changing river channels in Bangladesh some 20 years later.

    An intensifying threat in a changing world

    These risks present challenges for everything from emergency planning to home insurance. After repeated wildfire-mudslide combinations in California, some insurers pulled out of the state entirely, citing mounting risks and rising costs among the reasons.

    Cascading hazards are not new, but their impact is intensifying.

    Climate change is increasing the frequency and severity of wildfires, storms and extreme rainfall. At the same time, urban development continues to expand into steep, hazard-prone terrain, exposing more people and infrastructure to evolving risks.

    The rising risk of interconnected climate disasters like these is overwhelming systems built for isolated events.

    Yet climate change is only part of the equation. Earth processes – such as earthquakes and volcanic eruptions – also trigger cascading hazards, often with long-lasting effects.

    Mount St. Helens is a powerful example: More than four decades after its eruption in 1980, the U.S. Army Corps of Engineers continues to manage ash and sediment from the eruption to keep it from filling river channels in ways that could increase the flood risk in downstream communities.

    Rethinking risk and building resilience

    Traditionally, insurance companies and disaster managers have estimated hazard risk by looking at past events.

    But when the landscape has changed, the past may no longer be a reliable guide to the future. To address this, computer models based on the physics of how these events work are needed to help forecast hazard evolution in real time, much like weather models update with new atmospheric data.
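The kind of physics-based forecasting described here often treats hillslope sediment transport as a diffusion process, in which flux is proportional to local slope. Below is a minimal, purely illustrative sketch of that idea – an assumption-laden toy, not the authors’ actual model, which couples many more processes:

```python
# Minimal 1-D hillslope diffusion sketch: sediment flux scales with local
# slope, so elevation evolves as dz/dt = D * d2z/dx2 (explicit scheme).
# Illustrative only -- real hazard models couple rainfall, vegetation,
# river routing and much more.

def diffuse_hillslope(z, D=0.01, dx=1.0, dt=1.0, steps=100):
    """Advance a 1-D elevation profile `z` (list of floats) by explicit
    finite differences. Endpoints are held fixed (base-level boundaries)."""
    z = list(z)
    for _ in range(steps):
        # Discrete curvature at each interior node
        curv = [z[i - 1] - 2 * z[i] + z[i + 1] for i in range(1, len(z) - 1)]
        for i, c in enumerate(curv, start=1):
            z[i] += D * dt / dx**2 * c
    return z

# A spike of loose sediment (say, a fresh landslide deposit) smooths out
# and spreads downslope over many time steps:
profile = [0.0] * 10 + [5.0] + [0.0] * 10
relaxed = diffuse_hillslope(profile, steps=500)
```

Feeding such a model with post-event terrain data (from lidar or drone surveys) is, in spirit, how forecasts of evolving hazards can be updated as the landscape changes.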

    A March 2024 landslide in the Oregon Coast Range wiped out trees in its path.
    Brian Yanites, June 2025
    A drone image of the same March 2024 landslide in the Oregon Coast Range shows where it temporarily dammed the river below.
    Brian Yanites, June 2025

    Thanks to advances in Earth observation technology, such as satellite imagery, drones and lidar, which is similar to radar but uses light, scientists can now track how hillslopes, rivers and vegetation change after disasters. These observations can feed into geomorphic models that simulate how loosened sediment moves and where hazards are likely to emerge next.

    Researchers are already coupling weather forecasts with post-wildfire debris flow models. Other models simulate how sediment pulses travel through river networks.

    Cascading hazards reveal that Earth’s surface is not a passive backdrop, but an active, evolving system. Each event reshapes the stage for the next.

    Understanding these connections is critical for building resilience so communities can withstand future storms, earthquakes and the problems created by debris flows. Better forecasts can inform building codes, guide infrastructure design and improve how risk is priced and managed. They can help communities anticipate long-term threats and adapt before the next disaster strikes.

    Most importantly, they challenge everyone to think beyond the immediate aftermath of a disaster – and to recognize the slow, quiet transformations that build toward the next.

    Brian J. Yanites receives funding from the National Science Foundation.

    ref. Cascading disasters like those created by Hurricane Helene show why hazard models can’t rely on the past – https://theconversation.com/cascading-disasters-like-those-created-by-hurricane-helene-show-why-hazard-models-cant-rely-on-the-past-259502


  • MIL-OSI Analysis: Toxic algae blooms are lasting longer in Lake Erie − why that’s a worry for people and pets

    Source: The Conversation – USA – By Gregory J. Dick, Professor of Biology, University of Michigan

    A satellite image from Aug. 13, 2024, shows an algal bloom covering approximately 320 square miles (830 square km) of Lake Erie. By Aug. 22, it had nearly doubled in size. NASA Earth Observatory

    Federal scientists released their annual forecast for Lake Erie’s harmful algal blooms on June 26, 2025, and they expect a mild to moderate season. However, anyone who comes in contact with the blooms can face health risks, and it’s worth remembering that 2014, when toxins from algae blooms contaminated the water supply in Toledo, Ohio, was considered a moderate year, too.

    We asked Gregory J. Dick, who leads the Cooperative Institute for Great Lakes Research, a federally funded center at the University of Michigan that studies harmful algal blooms among other Great Lakes issues, why they’re such a concern.

    The National Oceanic and Atmospheric Administration’s prediction for harmful algal bloom severity in Lake Erie compared with past years.
    NOAA

    1. What causes harmful algal blooms?

    Harmful algal blooms are dense patches of excessive algae growth that can occur in any type of water body, including ponds, reservoirs, rivers, lakes and oceans. When you see them in freshwater, you’re typically seeing cyanobacteria, also known as blue-green algae.

    These photosynthetic bacteria have inhabited our planet for billions of years. In fact, they were responsible for oxygenating Earth’s atmosphere, which enabled plant and animal life as we know it.

    The leading source of harmful algal blooms today is nutrient runoff from fertilized farm fields.
    Michigan Sea Grant

    Algae are natural components of ecosystems, but they cause trouble when they proliferate to high densities, creating what we call blooms.

    Harmful algal blooms form scums at the water surface and produce toxins that can harm ecosystems, water quality and human health. They have been reported in all 50 U.S. states, all five Great Lakes and nearly every country around the world. Blue-green algae blooms are becoming more common in inland waters.

    The main sources of harmful algal blooms are excess nutrients in the water, typically phosphorus and nitrogen.

    Historically, these excess nutrients mainly came from sewage and phosphorus-based detergents used in laundry machines and dishwashers that ended up in waterways. U.S. environmental laws in the early 1970s addressed this by requiring sewage treatment and banning phosphorus detergents, with spectacular success.

    How pollution affected Lake Erie in the 1960s, before clean water regulations.

    Today, agriculture is the main source of excess nutrients from chemical fertilizer or manure applied to farm fields to grow crops. Rainstorms wash these nutrients into streams and rivers that deliver them to lakes and coastal areas, where they fertilize algal blooms. In the U.S., most of these nutrients come from industrial-scale corn production, which is largely used as animal feed or to produce ethanol for gasoline.

    Climate change also exacerbates the problem in two ways. First, cyanobacteria grow faster at higher temperatures. Second, climate-driven increases in precipitation, especially large storms, cause more nutrient runoff that has led to record-setting blooms.

    2. What does your team’s DNA testing tell us about Lake Erie’s harmful algal blooms?

    Harmful algal blooms contain a mixture of cyanobacterial species that can produce an array of different toxins, many of which are still being discovered.

    When my colleagues and I recently sequenced DNA from Lake Erie water, we found new types of microcystins, the notorious toxins that were responsible for contaminating Toledo’s drinking water supply in 2014.

    These novel molecules cannot be detected with traditional methods and show some signs of causing toxicity, though further studies are needed to confirm their human health effects.

    Blue-green algae blooms in freshwater, like this one near Toledo in 2014, can be harmful to humans, causing gastrointestinal symptoms, headache, fever and skin irritation. They can be lethal for pets.
    Ty Wright for The Washington Post via Getty Images

    We also found organisms responsible for producing saxitoxin, a potent neurotoxin that is well known for causing paralytic shellfish poisoning on the Pacific Coast of North America and elsewhere.

    Saxitoxins have been detected at low concentrations in the Great Lakes for some time, but the recent discovery of hot spots of genes that make the toxin makes them an emerging concern.

    Our research suggests warmer water temperatures could boost its production, which raises concerns that saxitoxin will become more prevalent with climate change. However, the controls on toxin production are complex, and more research is needed to test this hypothesis. Federal monitoring programs are essential for tracking and understanding emerging threats.

    3. Should people worry about these blooms?

    Harmful algal blooms are unsightly and smelly, making them a concern for recreation, property values and businesses. They can disrupt food webs and harm aquatic life, though a recent study suggested that their effects on the Lake Erie food web so far are not substantial.

    But the biggest impact is from the toxins these algae produce that are harmful to humans and lethal to pets.

    The toxins can cause acute health problems such as gastrointestinal symptoms, headache, fever and skin irritation. Dogs can die from ingesting lake water with harmful algal blooms. Emerging science suggests that long-term exposure to harmful algal blooms, for example over months or years, can cause or exacerbate chronic respiratory, cardiovascular and gastrointestinal problems and may be linked to liver cancers, kidney disease and neurological issues.

    The water intake system for the city of Toledo, Ohio, is surrounded by an algae bloom in 2014. Toxic algae got into the water system, resulting in residents being warned not to touch or drink their tap water for three days.
    AP Photo/Haraz N. Ghanbari

    In addition to exposure through direct ingestion or skin contact, recent research also indicates that inhaling toxins that get into the air may harm health, raising concerns for coastal residents and boaters, but more research is needed to understand the risks.

    The Toledo drinking water crisis of 2014 illustrated the vast potential for algal blooms to cause harm in the Great Lakes. Toxins infiltrated the drinking water system and were detected in processed municipal water, resulting in a three-day “do not drink” advisory. The episode affected residents, hospitals and businesses, and it ultimately cost the city an estimated US$65 million.

    4. Blooms seem to be starting earlier in the year and lasting longer – why is that happening?

    Warmer waters are extending the duration of the blooms.

    In 2025, NOAA detected these toxins in Lake Erie on April 28, earlier than ever before. The 2022 bloom in Lake Erie persisted into November, which is rare if not unprecedented.

    Scientific studies of western Lake Erie show that the potential cyanobacterial growth rate has increased by up to 30% and the length of the bloom season has expanded by up to a month from 1995 to 2022, especially in warmer, shallow waters. These results are consistent with our understanding of cyanobacterial physiology: Blooms like it hot – cyanobacteria grow faster at higher temperatures.
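The temperature sensitivity behind “blooms like it hot” is often summarized with a Q10 coefficient: the factor by which growth rate increases for every 10 °C of warming. As a back-of-the-envelope sketch – the Q10 value and temperatures below are illustrative assumptions, not figures from the studies cited here:

```python
# Back-of-the-envelope Q10 temperature scaling for microbial growth.
# A Q10 of 2 means the growth rate doubles with every 10 C of warming.
# The Q10 value and temperature changes are illustrative assumptions,
# not measurements from the Lake Erie studies.

def growth_multiplier(delta_t_celsius, q10=2.0):
    """Relative change in growth rate for a given temperature change."""
    return q10 ** (delta_t_celsius / 10.0)

# A modest 2 C of warming already boosts growth by roughly 15%:
boost_2c = growth_multiplier(2.0)
# A full 10 C rise doubles it:
boost_10c = growth_multiplier(10.0)
```

Small, sustained temperature increases compound in this way, which is consistent with the reported double-digit percentage rise in potential growth rates over a few decades.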

    5. What can be done to reduce the likelihood of algal blooms in the future?

    The best and perhaps only hope of reducing the size and occurrence of harmful algal blooms is to reduce the amount of nutrients reaching the Great Lakes.

    In Lake Erie, where nutrients come primarily from agriculture, that means improving agricultural practices and restoring wetlands to reduce the amount of nutrients flowing off of farm fields and into the lake. Early indications suggest that Ohio’s H2Ohio program, which works with farmers to reduce runoff, is making some gains in this regard, but future funding for H2Ohio is uncertain.

    In places like Lake Superior, where harmful algal blooms appear to be driven by climate change, the solution likely requires halting and reversing the rapid human-driven increase in greenhouse gases in the atmosphere.

    Gregory J. Dick receives funding for harmful algal bloom research from the National Oceanic and Atmospheric Administration, the National Science Foundation, the United States Geological Survey, and the National Institutes of Health. He serves on the Science Advisory Council for the Environmental Law and Policy Center.

    ref. Toxic algae blooms are lasting longer in Lake Erie − why that’s a worry for people and pets – https://theconversation.com/toxic-algae-blooms-are-lasting-longer-in-lake-erie-why-thats-a-worry-for-people-and-pets-259954
