Category: Analysis

  • MIL-OSI Global: Suffocating seas: low oxygen levels emerging as third major threat to tropical coral reefs

    Source: The Conversation – UK – By Jennifer Mallon, Postdoctoral research fellow, Nova Southeastern University

    Corals in low-oxygen seawater may not show visible signs of stress. Mike Workman/Shutterstock

    Coral reef research has focused on the twin evils birthed by record-high greenhouse gas emissions: warming oceans and increasingly acidic seawater. These global threats are caused by seawater absorbing the excess heat and carbon dioxide that fossil fuel burning has added to the atmosphere. But there is another consequence that is seldom discussed.

    Globally, oceanic oxygen is being depleted because seawater holds less oxygen as it heats up. In the warm coastal waters where tropical coral reefs grow, the immediate effects of low oxygen concentrations can be catastrophic. Short-term hypoxia events, in which dissolved oxygen levels suddenly plummet, are increasingly being reported. Often triggered or exacerbated by chemical pollution running off the land, such as nutrient-rich fertilisers, these events can kill entire coral communities and decimate reefs within days.

    Corals are animals, and like other aquatic animals, they breathe in oxygen from the water to fuel their metabolism. Thanks to a symbiotic relationship with microscopic algae, corals also turn the Sun’s energy into food – oxygen is the byproduct.

    Oxygen levels on coral reefs naturally fluctuate in a daily cycle, with dissolved oxygen peaking around noon and gradually falling as the light fades. At night when photosynthesis stops, corals continue to respire (consume oxygen), and seawater oxygen is depleted.

    This cyclic rise and fall in oxygen means that some corals have already evolved strategies to withstand changes in dissolved oxygen. But when the amount of oxygen available falls below this natural range, corals can become stressed and their normal biological processes disrupted, in many cases leading to death.

    Just like us, corals need oxygen to survive. But I (Jennifer Mallon) discovered that the effects of low oxygen on corals are not always obvious to the naked eye, and that juvenile corals may be especially vulnerable.

    Hard to spot signs

    To understand the effects of low oxygen levels on corals I travelled to the Smithsonian Marine Station in Florida, as part of a research project led by the University of Florida’s Andrew Altieri and the Smithsonian’s Maggie Johnson and Valerie Paul.

    At the Smithsonian, 24 climate-controlled seawater tanks simulate varying levels of deoxygenation already present on coral reefs around the world, ranging from severe deoxygenation, which our research observed on the Caribbean coast of Panama, to normal conditions, such as those replicated in aquariums around the world.

    Researchers recreated environmental conditions for corals in the lab.
    Jennifer Mallon

    While some corals, like the Caribbean staghorn coral (Acropora cervicornis), died within a few days of severe deoxygenation, other important reef-building species such as the mountainous star coral (Orbicella faveolata) survived, demonstrating that tolerance of low oxygen differs between species.

    When we studied the corals that survived deoxygenation, we discovered that hypoxic stress may not always be visible. Even when exposed to deoxygenation for two weeks, some corals showed no signs of bleaching, which is when the colourful algae depart and corals turn a ghostly white. More detailed measurements revealed something worrying: despite outward appearances, low oxygen exposure had impaired coral metabolism, potentially stunting their growth and reef-building abilities.

    Existing methods for measuring coral health in the field are mainly visual, and include assessments by trained divers who search for signs of paling or bleaching corals. The hypoxic stress responses we saw in our experiment could be going under the radar.

    Baby corals at risk

    We also wanted to know how deoxygenation affects a coral’s ability to breed.

    Coral sexual reproduction is already a tricky business. Spawning events, when corals release egg bundles into the water, occur just a few nights a year, and the resulting larvae are highly vulnerable. Few survive the multi-day swim to the reef where they settle and metamorphose into juvenile corals.

    On modern Caribbean reefs, wild juvenile corals are rare. People involved in restoring reefs help corals to sexually reproduce in the lab and rear the juveniles in order to later transplant them onto the reef.

    Juvenile corals often settle in reef crevices where they are exposed to lower oxygen levels for longer than in open water, because less water flows over them. When we incubated coral larvae in deoxygenated water throughout the settlement process, we found that initial rates of larval survival and settlement were not significantly affected.

    Things changed once the larvae had settled and begun to form juvenile corals. Early-stage juvenile corals, known as primary polyps, lack symbiotic algae to help them meet their nutritional needs via photosynthesis and so rely on respiration for energy. Without enough oxygen, they cannot respire properly and begin to die off.

    A coral spawning event off the coast of Queensland, Australia.
    Coral Brunner/Shutterstock

    Coral conservation in breathless waters

    Our research can help those involved in restoring reefs understand the oxygen needs of corals, as well as highlight a previously overlooked threat.

    Even corals that survive deoxygenation show signs of a weakened metabolism, which will make it harder to conserve healthy reefs: restoration relies on healthy coral growth to regenerate what is damaged.

    As a next step, field measurements of coral metabolism will be carried out on Florida’s barrier reef tract when oxygen levels are predicted to drop during the warm summer months, to capture the real impact of deoxygenation on coral health.

    Dissolved oxygen data has not always been collected as part of reef monitoring, even during warm water bleaching events when oxygen is low. As the climate crisis worsens, it will be imperative to do more of this monitoring in tropical coastal waters. Further research into how distinct coral species respond to hypoxia is also essential for targeted conservation strategies.

    By confronting the silent threat of deoxygenation head on, we can safeguard the future of coral reefs and the countless marine species that depend on them.

    Jennifer Mallon receives funding from US-UK Fulbright Commission, Smithsonian Institution Fellowship Program, University of Glasgow Early Career Mobility Award and the Link Foundation.

    Adrian Michael Bass receives funding from the Natural Environmental Research Council.

    Maggie D. Johnson has received funding from NOAA’s Coastal Hypoxia Research Program and the Smithsonian Marine Global Earth Observatory.

    ref. Suffocating seas: low oxygen levels emerging as third major threat to tropical coral reefs – https://theconversation.com/suffocating-seas-low-oxygen-levels-emerging-as-third-major-threat-to-tropical-coral-reefs-224805

    MIL OSI – Global Reports

  • MIL-OSI Global: What the looming federal election could mean for the Bank of Canada’s independence

    Source: The Conversation – Canada – By Andrew Allison, Philosophy PhD Student, University of Calgary

    The independence of central banks from the democratic process has been a bedrock of economic policy for decades. The Bank of Canada is no exception, maintaining distance from elected officials to ensure monetary policy is free from political pressures.

    However, a clear division between central bank and government could be tested with Mark Carney, former governor of both the Bank of Canada and the Bank of England, who’s running for leadership of the Liberal Party and, in turn, the role of prime minister.




    Read more:
    Mark Carney might have the edge as potential Liberal leader, but still faces major obstacles


    His bid raises concerns about how central bank independence might be perceived under a Carney-led government. Could his tenure as a central banker result in the Bank of Canada’s independence being clawed back? After all, he has demonstrated his ability to manage monetary policy at the highest levels.

    The answer, if we want to preserve the economic benefits of central bank independence, is clear: the Bank of Canada’s independence must be preserved. And Carney, who has championed the importance of politically neutral monetary policy, would likely agree.

    Incentives, not ignorance

    The idea that central banks should operate independently of the democratic process is a widely held view among economists and central bankers. This is largely because there is an extremely low likelihood of elected officials committing to implement monetary policy that produces low inflation and stable prices.

    If elected officials controlled monetary policy, incumbent governments would be tempted to “juice” the economy with “loose money” by reducing the interest rates right before elections.

    In the short run, this would reduce unemployment, raise wages and potentially boost the chances of incumbent governments being re-elected. But, in the long run, citizens would pay the price in the form of inflation.

    With repeated political interference, market entities would no longer react to injections of loose money by investing in capital and labour, and low interest rates would no longer produce the desired short-term benefits of more jobs and higher wages. But inflation would still persist. As economist Garrett Jones puts it, it would be “all hangover, no buzz.”

    Empirical evidence bears this out. Central banks with greater independence tend to deliver more price stability and lower inflation.

    This is why governments delegate monetary policy to independent central banks. Central bankers are able to implement monetary policy without the temptation to manipulate the economy for electoral gain.

    It’s worth noting that the need for central bank independence is not exclusively due to politicians’ ignorance about managing monetary policy. Rather, it’s because the electoral incentives they face prevent them from being trusted to pull the levers of monetary power effectively.

    This principle applies even to someone like Carney. If he were to become prime minister, he would face the same incentives as all other incumbent governments. Despite his expertise, he would still need independent central bankers to ensure monetary policy remains insulated from the political cycle.

    Central bank independence in Canada

    Central bank independence is not a binary, but exists on a spectrum. When studying the effects of independence, central banks are usually scored on a number of indicators, including whether central bankers can be fired by elected officials, how long central bankers’ terms are, and the extent to which they can be instructed by democratically elected bodies.

    Widespread support for central bank independence among economists only began in the mid-1980s. Prior to that, central banks often gained their independence due to political and legal circumstances, rather than a deliberate attempt to adhere to a principle of independence. Both the Federal Reserve and the Bank of Canada have this in common.

    The independence of the Bank of Canada had a tumultuous 25 years after its establishment in 1935. When pressed, finance ministers could not answer whether they or the Bank of Canada were ultimately responsible for the country’s monetary policy, often giving conflicting answers.

    It would not be until 1961 that this uncertainty would come to a head during the Coyne Affair. Prime Minister John Diefenbaker wanted James Coyne, governor of the Bank of Canada at the time, fired for embarrassing his government and taking a hefty pension. The House of Commons passed a one-line bill that fired Coyne, but the Senate refused to pass it. Coyne resigned the next day.

    After the Coyne Affair, central bank independence grew into the de facto status quo. In 1985, the Bank of Canada Act was passed, setting some limits on the power of the governor and their responsibility to the finance minister. As a result, Canada’s central bank independence falls somewhere in the middle of the spectrum compared to other wealthy, western nations.

    Carney on central bank independence

    In 2022, Conservative Party leader Pierre Poilievre threatened to fire the governor of the Bank of Canada, Tiff Macklem, if he became prime minister.

    While the Bank of Canada Act does permit this through a formal procedure, setting the precedent that cabinets can and will fire governors could undermine central bank independence. It would risk making central bankers more beholden to the political aims of incumbent governments and more likely to produce inflationary monetary policy.

    Compared to Poilievre, Carney is the conservative choice, likely aiming to maintain the status quo by leaving central bankers alone. During and after his time as a central banker, Carney has favoured central bank independence. And, as it stands, it doesn’t appear that he’s changed his mind now that he’s running for Liberal leader.

    So, what would a Carney government mean for the Bank of Canada’s independence? Likely, not much — and from a monetary economic perspective, that’s a good thing. Preserving the status quo would ensure the Bank of Canada remains insulated from political interference, allowing it to focus on long-term price stability.

    Andrew Allison receives funding from the Social Sciences and Humanities Research Council.

    ref. What the looming federal election could mean for the Bank of Canada’s independence – https://theconversation.com/what-the-looming-federal-election-could-mean-for-the-bank-of-canadas-independence-247886

    MIL OSI – Global Reports

  • MIL-OSI Global: Why fizzy water won’t help you lose weight – despite what some studies might suggest

    Source: The Conversation – UK – By Duane Mellor, Visiting Academic, Aston Medical School, Aston University

    Fizzy water will probably not have a measurable effect on metabolism and weight. Jari Hindstroem/ Shutterstock

    For years it has been claimed that sparkling water may aid weight loss by helping you feel fuller – reducing your desire to snack and overeat.

    Now, a recent hypothesis has suggested that sparkling water may help you lose weight by boosting your body’s blood sugar (glucose) uptake and metabolism.

    But before you stock your fridge with fizzy water, it’s important to look at the study itself and how it was conducted. The publication makes clear that it isn’t new research – rather, it’s a new hypothesis formed by referencing the results of a study published in 2004, alongside additional supplementary research to support the theory.

    It should be noted that the old study was not even looking at the effect of fizzy water on body weight. It was actually an observation of what happens to blood when it goes through a kidney dialysis machine (haemodialysis) and how it might lower blood glucose. No fizzy water was consumed as part of this study either.

    The effect of haemodialysis is said to mimic the effect of carbon dioxide in the blood – which increases the pH or alkalinity inside red blood cells. This then encourages the red blood cells to metabolise more glucose.

    Using the figures from the 20-year-old paper, it’s estimated that a four-hour dialysis session seems to increase glucose use by 9g – only around 36 additional calories burned.
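    That 36-calorie figure is just the standard conversion applied to the paper’s 9g estimate. A minimal check, assuming the conventional value of roughly 4 kcal per gram of glucose (a textbook figure, not one stated in the article):

    ```python
    # Back-of-envelope check of the calorie figure quoted above.
    # Assumption: ~4 kcal per gram of glucose (standard physiological value).
    KCAL_PER_GRAM_GLUCOSE = 4

    extra_glucose_g = 9  # extra glucose used over a four-hour dialysis session (from the 2004 paper)
    extra_kcal = extra_glucose_g * KCAL_PER_GRAM_GLUCOSE

    print(extra_kcal)  # 36
    ```

    For comparison, 36 kcal is less than the energy in a single plain biscuit – which is why the effect, even if real, would be marginal for weight loss.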

    But the study the hypothesis was based on wasn’t looking at the effects of carbon dioxide in the blood. Rather, it was looking at how haemodialysis changes the pH of red blood cells – and how that affects blood glucose. This makes it difficult to compare how the carbon dioxide in fizzy water may affect blood glucose when it enters the bloodstream.

    So why the fuss?

    The paper itself contains a valid scientific idea worthy of discussion. But unfortunately, some of its nuance has been lost in the way the study has been promoted – with media headlines exaggerating the paper’s findings.

    To understand whether this hypothesis stands, research will need to be done which investigates whether a significant amount of carbon dioxide actually does enter our bloodstream when we drink sparkling water, and how quickly this is absorbed by the body – which will tell us how long the potential effects last.

    But a glass of sparkling water contains less than a gram of carbon dioxide – and this will be absorbed in minutes. This amount of carbon dioxide is a tiny fraction compared to the kilogram our body naturally produces in an average day through respiration – how our body uses energy.
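    The scale of those two numbers makes the point on its own. Taking a full gram per glass as a generous upper bound against the roughly one kilogram produced daily (both figures from the paragraph above):

    ```python
    # Upper-bound comparison of CO2 from one glass vs daily metabolic CO2.
    # Both figures are taken from the text: "less than a gram" per glass,
    # and "the kilogram our body naturally produces in an average day".
    co2_per_glass_g = 1.0
    co2_produced_daily_g = 1000.0

    fraction = co2_per_glass_g / co2_produced_daily_g
    print(f"{fraction:.1%}")  # 0.1%
    ```

    Even at the upper bound, a glass of fizzy water adds at most a thousandth of the carbon dioxide the body already handles each day.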

    Unfortunately, it looks like sparkling water isn’t a miracle weight loss remedy.
    Christian Moro/ Shutterstock

    Looking at these numbers, fizzy water will probably not have a measurable effect on blood carbon dioxide levels – and therefore no effect on metabolism and weight.

    The author of the hypothesis is himself careful to state in the paper that carbonated water is not a standalone solution for weight loss and that healthy diet and physical activity are both key.

    Does fizzy water at least help with appetite?

    Another claim that has sometimes been made about fizzy water in the media and in other studies (though not by the author of this latest hypothesis) is that it can help you feel fuller for longer, which may aid in weight loss. However, the evidence here is not conclusive.

    While some studies have found that people who drank carbonated water reported it helped them feel fuller for longer, other studies have actually shown it may have the opposite effect. Research in rats that looked specifically at weight and appetite hormones found that sparkling water increased both weight and levels of the hunger hormone ghrelin. In a parallel study these researchers conducted on 20 men, it was shown that fizzy water also increased their ghrelin levels. This suggests fizzy water could actually make people more hungry.

    It seems the data is not conclusive about the effect of fizzy water on hunger. In theory, fizzy water might help to stretch the stomach, causing us to feel full. However, the data does not seem to support this theory.

    In order for fizzy water to do this, it would need to stay in the stomach longer than still water – and science suggests this isn’t the case. A study which compared drinking fizzy water versus drinking still water after a meal found both seem to leave the stomach at the same rate.

    What’s more, drinking water with meals does not have a significant effect on appetite and feeling full. This is all down to the shape of the stomach and how it churns and breaks down our food. The bottom curve of our stomach has a channel called the Magenstrasse or “stomach road” which allows liquids to flow quickly into the small intestine, where they can be absorbed.

    While we might wish a glass of sparkling water could help support weight loss or at least help us feel fuller for longer, there’s currently little to no data to support this. The only real effect that drinking fizzy water (or even still water) has on body weight seems to be that when people use it to replace sugary drinks, it means they consume fewer calories on average.


    The Conversation has spoken with Akira Takahashi, doctor of medicine and head of department at Tesseikai Neurosurgical Hospital, the author of the hypothesis. He writes that based on the 2004 study’s findings, it would be difficult to simulate the effect of haemodialysis through drinking carbonated water – and that it’s unlikely fizzy water alone could lead to weight loss.

    He states that the mechanism shown in the haemodialysis study, by which CO2 can reduce blood sugar levels, may behave similarly to the CO2 absorbed from drinking fizzy water – and that this may result in glucose consumption in the blood near the stomach. However, he says more research will be needed to measure blood sugar levels before and after drinking carbonated water to validate this effect. Takahashi also thinks the feeling of fullness caused by drinking carbonated beverages warrants further research, as carbon dioxide releases bubbles that stimulate the stomach’s stretch receptors – creating a sensation of fullness.

    Takahashi writes: “It is important to note that carbonated water alone is unlikely to contribute significantly to weight loss. A balanced diet and regular exercise remain essential for effective weight management.”

    Duane Mellor is a member of the British Dietetic Association. He has in the past undertaken advisory and consultative work with the soft drinks, sweetener and sugar industry.

    ref. Why fizzy water won’t help you lose weight – despite what some studies might suggest – https://theconversation.com/why-fizzy-water-wont-help-you-lose-weight-despite-what-some-studies-might-suggest-247940

    MIL OSI – Global Reports

  • MIL-OSI Global: Why not all plans for a four-day working week would be a win for health

    Source: The Conversation – UK – By Anne Skeldon, Professor of Mathematics, Head of School, School of Mathematics & Physics, University of Surrey

    Dusan Petkovic/Shutterstock

    The right to request a shorter working week, with four longer “shifts” and three days off, is being proposed as part of new flexible working legislation in the UK. Also known as working “compressed hours”, this schedule can sound attractive, with reports claiming improved efficiency and productivity. And, of course, no pay cut for workers.

    It could result in fewer commutes, which saves time for workers and can be more environmentally friendly. And it could provide more flexibility for workers with childcare or care for other dependants, for example.

    But there could be negative consequences to squeezing typical workloads into fewer days. Under these plans, there is no suggestion that by compressing the working week, people will work fewer hours.

    Compressed hours mean that, instead of working 7.5 hours a day for five days, you would work 9.4 hours per day for four days – putting in almost two hours more work every working day. There is strong evidence that longer work hours result in more errors and accidents. Long work hours are also linked to poorer decision-making and make it more likely people will have an accident on their drive home.
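    The shift-length arithmetic above is straightforward to verify, assuming the standard 37.5-hour week the article uses:

    ```python
    # Compressed-hours arithmetic for a standard 37.5-hour working week.
    weekly_hours = 7.5 * 5           # five 7.5-hour days = 37.5 hours
    compressed_daily = weekly_hours / 4
    extra_per_day = compressed_daily - 7.5

    print(round(compressed_daily, 1))  # 9.4 hours per day over four days
    print(round(extra_per_day, 2))     # 1.88 - "almost two hours more" per working day
    ```

    Three days instead of four would push each shift to 12.5 hours, and two days to 18.75 hours – the escalation the article goes on to describe.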

    For example, it has long been understood that working longer shifts increases the risk of workplace accident and injuries. The risk of a workplace accident is on average 13% higher for a ten-hour shift than an eight-hour shift.

    Accident risk remains more or less constant for the first eight or so hours of work but then rises rapidly, so that the risk of an accident in the tenth hour of work is 90% higher than in the first eight hours.

    Functioning effectively and safely at work relies on sufficient sleep, ideally at the right time of day and in a regular pattern. This reliance is rooted in fundamental physiological factors that cannot be changed by training, motivation or professionalism.

    Getting into sleep debt

    These factors that determine our ability to function are driven by time of day, how long we have been awake and accumulated sleep debt. For example, humans are sleepier during the night than the day, and it can take between two and four hours after waking to achieve full alertness.

    What’s more, our ability to function decreases rapidly after we have been awake for 16 hours, and especially so at night.

    But what are the health consequences of a compressed hours schedule? It is already commonplace for people to have shorter periods of sleep during the working week and then try to catch up with sleep at the weekend, with mixed results.

    If people work compressed hours, then on working days they have to fit in two extra hours of work but still carry out all the other activities in their daily lives. They still need to wash, eat, communicate, provide care for children and others.

    So there’s a real chance that compressed hours then also lead to “compressed sleep” and accentuate irregular patterns of rest or chronic sleep debt. Irregular or insufficient sleep is increasingly associated with a higher risk of diabetes, cardiovascular disease, obesity, certain cancers and dementia – the leading causes of mortality in wealthy nations. In 2017, the economic cost of insufficient sleep in the UK alone was estimated as US$50 billion (£40 billion), up to 1.86% of GDP.




    Read more:
    The science behind why you love a weekend lie-in


    The negative effect of chronic sleep loss accumulates more rapidly than experts previously realised. This knock-on effect is most severe during night shifts, especially when those shifts are long. There are good reasons why the UK regulator, the Health and Safety Executive, supports the EU working time directive, which imposes constraints on the length, timing and number of shifts.

    If the concept of fewer but longer work shifts is accepted, what happens next? Why not propose three 12.5-hour workdays a week, or two 18.75-hour workdays? Why not work 24 hours a day and then work only eight days a month?

    And at the end of a long day, many workers have to get behind the wheel.
    Andrey_Popov/Shutterstock

    This sounds fanciful, and yet it is happening. Several UK fire services have moved to 24-hour shifts, following the trend in North America where 24, 48 or even longer duty hours are common for firefighters. Also in North America, many physicians work 24-hour shifts or longer, with well-documented negative consequences including higher rates of serious medical errors and surgical complications, and increased accident risk on the drive home when compared to shorter shifts.

    It’s certainly true that some workers prefer to work longer days, for example to have longer blocks of time off for childcare. But at what point do concerns over the safety of employees and the people they interact with – as well as the negative effects (and financial costs) on long-term health – outweigh employee preference?

    Compressed hours of work may be effective in some scenarios for some people and businesses. But if compressed hours of work lead to compressed sleep, then we need to recognise the negative consequences.

    New legislation should build in sufficient guidance and protections for both employers and employees, plus it should be evidence-based. With wearable tech like smartwatches to track behaviour, it should be feasible to collect information on sleep, health, near misses and accidents. Then mathematical models and AI could be used to design individualised work schedules that are healthy and productive for everyone.

    Anne Skeldon has received funding from Transport for London and from Scotia Gas Network.

    Derk-Jan Dijk received funding from AFOSR USA.

    Steven W Lockley is a consultant to Timeshifter Inc, KBR Wyle Services, Apex 2100 Ltd and Illumalife Inc.

    ref. Why not all plans for a four-day working week would be a win for health – https://theconversation.com/why-not-all-plans-for-a-four-day-working-week-would-be-a-win-for-health-247839

    MIL OSI – Global Reports

  • MIL-OSI Global: Armenia and Azerbaijan are at loggerheads again – here’s why tensions are rising

    Source: The Conversation – UK – By Svante Lundgren, Researcher, Lund University

    Azerbaijan’s president, Ilham Aliyev, has launched a fierce verbal attack on Armenia, which he has called a fascist state. “Fascism must be destroyed,” he said in an interview on local TV networks on January 7. “Either the Armenian leadership will destroy it, or we will.”

    This rhetoric is strongly reminiscent of the baseless claims Vladimir Putin has used about Ukraine to justify Russia’s invasion, including his claim that Ukraine must be “denazified”.

    There are also reports that Azerbaijan’s acquisitions of advanced Israeli weapons have increased recently, according to Israeli journalist Avi Sharf, national security, cyber and open source intelligence editor at Israeli news outlet Haaretz.

    Armenia and Azerbaijan have a long history of conflict over Nagorno-Karabakh, a region within Azerbaijan until recently mainly populated by Armenians. The first war between them in the 1990s led to the establishment of a self-proclaimed Armenian republic, which no country recognised.

    Then, after a 44-day war in 2020, Azerbaijan took control over most of the enclave. The rest was conquered in September 2023, prompting Armenians living there (more than 100,000 people) to flee to Armenia.

    In the last few months, Aliyev has accused Armenia of preparing a “war of revenge”. Since its devastating defeat in the second Karabakh war in 2020, Armenia has taken steps to strengthen its defences. Among other things, it has made significant arms purchases from France. This has also provoked Aliyev to criticise France and its president, Emmanuel Macron.

    But, although Armenia has been trying to reduce Azerbaijan’s military advantage through reforms in the army and arms purchases, the country is still militarily inferior to its neighbour. Any military confrontation is likely to result in an early defeat for Armenia.




    Read more:
    Future of Russian gas looking bleak as Ukraine turns off taps and Europe eyes ending all imports


    Azerbaijan’s argument is clearly that if there is conflict in the region, it will stem from an Armenian “preparation for a war”. Baku suggests that the responsibility for any conflict would therefore lie with Armenia and those who arm the country (in particular, France). It’s possible that this rhetoric is intended to legitimise some kind of military action.

    Because of escalating tension in the past few years, Armenia invited the European Union to monitor the border between the countries. This was to help address Azerbaijani accusations that Armenia was preparing for war, and to monitor, and prevent, shootings along the border.


    Peter Hermes Furian/Shutterstock

    Over the past two years Azerbaijan has denied these unarmed EU observers permission to operate on its territory, so they were only able to work from the Armenian side. It has also strongly condemned the EU for this mission.

    The EU monitors have been in place since February 2023 and are due to withdraw next month. Armenia has suggested the EU monitors continue, but Baku has made clear it wants them removed.

    So, why might Azerbaijan want to reignite tensions with Armenia? One point of contention between them is access to the “Zangezur corridor”, a land connection between Azerbaijan and its autonomous republic, Nakhichevan.

    Long-running regional conflict

    Azerbaijan has long demanded access to, and control of, this route. The natural corridor runs through Armenia’s Syunik region (in Azerbaijani “Zangezur”, hence the Zangezur corridor). Armenia has declared its willingness to open up transport connections throughout the region – including between Azerbaijan and Nakhichevan – but opposes a corridor through its territory that it does not control.

    The south Caucasus (the region including Georgia, Armenia and Azerbaijan) has long been an area that Putin sees as part of his sphere of influence. After the break-up of the Soviet Union, Russia tried to keep the region relatively calm, but in 2020 Putin allowed the war to continue until Armenia was defeated, before putting pressure on Aliyev to stop. Three years later, Azerbaijan took what was left of Nagorno-Karabakh while Russian peacekeepers looked on.

    Armenian concern over what it sees as Russian bias towards Azerbaijan has led Yerevan to increasingly turn towards the west. On January 14 2025, a “strategic partnership charter” was signed between Armenia and the US, which includes an economic and defence partnership, but whether the new Trump administration will build on, or simply ignore, that relationship is not yet clear.

    In what is considered an important symbolic move, Armenia is also currently negotiating with Russia over the removal of Federal Security Service (FSB) guards along the Armenian border, in an attempt to reduce its reliance on Moscow for security. Armenian prime minister Nikol Pashinyan said in 2024 that the nation would pull out of the Russian-led Collective Security Treaty Organization, another move that signals Armenia’s increasingly fragile relationship with Moscow.

    Will there be a war?

    The EU has meanwhile strengthened relations with Armenia.

    Azerbaijan may have escaped international fallout over the attack on Nagorno-Karabakh in the autumn of 2020, and over the ethnic cleansing of the enclave’s Armenian population in 2023. But a new war involving a large-scale attack on Armenia would be unlikely to be ignored by the west.

    Despite the west’s minimal reactions to Azerbaijani incursions across the Armenian border in May 2021 and September 2022, in 2025 there is more international focus on the region and on the potential consequences of ignoring what’s going on around Russia’s borders.

    Although military intervention from the west is unlikely, the possibility of sanctions against Azerbaijan could be enough of an incentive for Aliyev to try to maintain the peace.

    Svante Lundgren does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Armenia and Azerbaijan are at loggerheads again – here’s why tensions are rising – https://theconversation.com/armenia-and-azerbaijan-are-at-loggerheads-again-heres-why-tensions-are-rising-247533

    MIL OSI – Global Reports

  • MIL-OSI Global: Omagh bombing: why a public inquiry is being held more than 25 years after the atrocity

    Source: The Conversation – UK – By Peter John McLoughlin, Lecturer in Politics, Queen’s University Belfast

    The 1998 Good Friday agreement is commonly seen to have ended what were euphemistically termed “the Troubles” in Northern Ireland. However, just four months after the peace accord was signed, an attack on the town of Omagh resulted in the greatest loss of life in any single incident of the conflict.

    The bombing, on August 15 1998, killed 29 people and injured an estimated 220 more. Among those who lost their lives were nine children and a woman who was pregnant with twins.

    A group called “the Real IRA” claimed responsibility for the atrocity. It was one of the so-called “dissident” republican factions which broke away from the mainstream IRA after its political wing, Sinn Féin, turned toward peaceful politics. The Real IRA’s assault on Omagh was clearly intended to derail the Northern Ireland peace process and destroy the Good Friday agreement.

    It could be argued, however, that the bombing had the opposite effect. The atrocity encouraged Northern Ireland’s politicians to come together and redouble their commitment to the peace process.

    Public outcry over the attack also forced the Real IRA to announce a ceasefire. It later returned to violence, but widespread revulsion against the Omagh atrocity would undermine the support base that any dissident republican faction might draw upon.

    Political representatives of the Real IRA and other such groups have never been able to mobilise electoral support in the way that Sinn Féin was able to, in spite of its association with the IRA.

    The Omagh bombing also aided the ability of Sinn Féin leader Gerry Adams and others to steer mainstream republicans towards purely peaceful politics. The atrocity had shown the utter futility of violence.

    Adams’ condemnation of the attack provoked accusations of hypocrisy as he had previously defended IRA violence. Nonetheless, Adams continued to lead republicanism in ways that would cement its commitment to peaceful methods.

    The indiscriminate nature of the Omagh attack also helps explain the galvanising effect that it had on the peace process. People from both sides of the communal divide in Northern Ireland were killed, and from both sides of the Irish border. Two Spanish tourists also died while visiting a region which the Good Friday agreement seemed to have made safe.

    The visit of Bill Clinton a month after the attack also brought global attention to Omagh. The US president had first visited Northern Ireland following the paramilitary ceasefires of 1994, receiving a rapturous reception when he turned on the Christmas lights in Belfast.

    But his return was as sombre as his first visit had been joyous. Despite this, the obvious sincerity of Clinton’s words and actions in Omagh would encourage the people and politicians of Northern Ireland to continue their efforts to build a peaceful society.

    Bill and Hillary Clinton visit the site of the Omagh bombing with Tony and Cheri Blair.
    Clinton Digital Library

    Unanswered questions

    More than 25 years on from the attack, they have largely succeeded in this endeavour. However, serious questions remain about the Omagh atrocity. Authorities in both parts of Ireland have been criticised for their response.

    In Northern Ireland, a former policing watchdog has argued that the security services failed to properly act on intelligence that might have prevented the attack.

    In the Irish Republic, where the bomb was constructed, the only person ever jailed over the attack later saw his conviction overturned. That ruling was also seen to result from the mishandling of evidence, this time by the Irish police.

    This explains why survivors and families of those killed and injured in the Omagh bombing have fought long and hard for an independent investigation into the attack. Neither the British nor the Irish government seemed eager to allow this. But legal action by members of the Omagh families led to a ruling by Belfast’s High Court in July 2021, which found it plausible that the attack might have been prevented by the security services. This bolstered support for a public inquiry.

    Finally, in February 2023, the British government acceded and Lord Turnbull, a senior Scottish judge, was appointed to chair the investigation. The Irish government has not followed suit, but has committed to supporting the British inquiry.

    The inquiry officially opened in July of last year, but is only now beginning in earnest with a period of commemorative and personal statement hearings.

    Over four weeks, it will receive testimony from people who were injured, those who responded to the attack, or who were simply witnesses to the atrocity and its aftermath. Each submission will be read by Turnbull, and he has said that they will “inform the direction and approach of the Inquiry”.

    The inquiry begins

    There has, however, been some controversy regarding contributions to the investigation, and specifically that of a former British Army agent who infiltrated republican paramilitaries. This operative took legal action after being refused key status at the inquiry, a role which would have entitled him to make opening and closing statements, and to propose lines of questioning.

    He was instead granted witness status, and the inquiry will naturally be expected to examine evidence relating to information passed on to the police in the time leading up to the bombing.

    As a result, Turnbull has sought to assure those who might doubt the value of the investigation: “My inquiry may be the final opportunity to get to the truth of whether the bombing could have been prevented by the UK state.”

    Survivors and victims’ families will surely hope that this is the last time that they will have to relive their trauma, and that the end result will indeed establish the truth as to what exactly the authorities knew about the Omagh attack. Then, the families may finally experience some closure, and be able to move on from what remains the deadliest attack in Northern Ireland’s history.

    Peter John McLoughlin has received funding in the past from the AHRC, Leverhulme Trust, the Irish Research Council, and the Fulbright Commission. He is a member of Greenpeace.

    ref. Omagh bombing: why a public inquiry is being held more than 25 years after the atrocity – https://theconversation.com/omagh-bombing-why-a-public-inquiry-is-being-held-more-than-25-years-after-the-atrocity-248192

    MIL OSI – Global Reports

  • MIL-OSI Global: Five reasons why vertical farming is still the future, despite all the recent business failures

    Source: The Conversation – UK – By Gail Taylor, Dean of Life Sciences, UCL

    Don’t believe the tripe. Amorn Suriyan

    Plant factories are failing, with multiple companies closing or going bankrupt in recent months. This includes the largest vertical farm on the planet, in Compton, Los Angeles.

    Owned by San Francisco-based startup Plenty, the farm opened in 2023 to grow salads in partnership with Walmart. It was mothballed at the end of 2024, with the company citing the rising cost of energy in California as a major problem.

    Despite raising over US$1 billion (£802 million) from investors, the company’s value has reportedly plummeted from US$1.9 billion to below US$15 million. It now aims to focus solely on strawberry production in Virginia.

    New York-based Bowery Farming also halted all operations in late 2024, having previously been valued at US$2.3 billion. Fellow American vertical farmers AeroFarms, Kalera and AppHarvest have similarly filed for bankruptcy in the past two years, as has the UK’s Growing Underground, among various others.

    Clearly these are major setbacks. Year-round illuminated greenhouses and stacked, controlled-environment warehouses for producing food have been hailed as a sustainable alternative to traditional farming, promising fresh food close to populations.

    This reduces the need for transportation which, together with other problems in traditional farming such as soil degradation and forest clearing, sees agriculture contribute around 20% of the greenhouse gases that drive planetary warming and climate change.

    Multiple new indoor-farming companies sprang into life in the past decade, driven by significant venture capital. They harnessed the latest in LED lighting and hydroponic and aeroponic growing systems, using land and water ten to 100 times more efficiently than in a field and with far fewer pesticides.

    Initially developed to grow leafy greens and microgreens, these farms have more recently turned to higher value produce including herbs, strawberries, tomatoes and grapes.

    Grow, baby, grow.
    Gorodenkoff

    Among the reasons for the business failures are rising energy costs; the fact that traditional farming is cheaper, making it hard to compete on price; and the fact that rising interest rates have made financing more expensive.

    Faced with these and other challenges, such as high energy consumption and a shortage of skilled labour, many critics are writing this sector off as a fad that is unlikely to ever make a big impact on food security.

    This ignores success stories such as JFC and Grow-up Farms, which are regular suppliers to UK supermarkets. But more broadly, there are various reasons why the critics are likely to be wrong:

    1. We’re still early

    Vertical farming has been proving itself by “learning by doing” for the past decade. Kicked off by Nasa space scientists seeking to grow food in hostile environments with zero gravity and heavy radiation, this field is still highly experimental.

    New technologies like this one often conform to the Gartner hype cycle, where big initial expectations are rarely met, leading to a trough of disillusionment. Following this, the benefits start to crystallise as new players enter the market and mainstream adoption begins.

    Vertical farming is only a very small proportion of total farming, but it looks very likely to flourish given the need to reduce greenhouse-gas emissions, and the threats to food security from climate change and population growth. In addition, the costs are likely to be reduced by the arrival of much more renewable energy at cheaper prices in years to come.

    2. Heavy plant demand is coming

    Society stands on the edge of an unprecedented transformation as it shifts away from fossil fuels. We’re going to move to a circular bioeconomy, in which green plants will be central as feedstocks for everything from aviation fuels to alternative proteins to vaccine production to plant-based plastics.

    All this means greater pressure on land resources for food production, and an enhanced need for vertically stacked agriculture that recycles water and nutrients and requires fewer chemicals.

    3. Science is on its side

    Unexpected scientific discoveries continue to drive vertical farming. For example, tunable wavelength LEDs have shown that certain spectral bands can affect crops profoundly.

    Far-red light, which is just beyond visible red light, promotes growth and flowering, raising lettuce yields by 30%, for example. Blue light can improve shelf-life and nutritional quality, even enhancing certain plant chemicals known to help prevent cancers.

    The significance of these discoveries has yet to be fully realised, but the complete control of the growing environment that indoor farming makes possible will allow us to more easily tailor food quality for the betterment of people and the planet.

    4. It’s horses for courses

    Growing leafy greens indoors in California, as Plenty did, was always going to be challenging. This is the state where they invented the iceberg lettuce, where wall-to-wall sunshine and even temperatures enable farmers to grow enough salad greens to supply the whole of the US.

    Contrast Singapore, where only 6% of fresh produce is locally grown. This has prompted the government to develop the “30 x 30” goal to supply 30% of nutritional needs by 2030, with vertical farming a key part of the strategy.

    Similarly the United Arab Emirates imports over 90% of its food, and is looking towards a future that includes vertical farming. The UK and much of northern Europe, where the outdoor growing season is short and land is limited, can also benefit from these technologies (and indeed, do already).

    It’s a different story in Singapore.
    PrasitRodphan

    5. Baby and bathwater

    Unlike the cutting-edge LED-illuminated, stacked warehouses, intensive hydroponic greenhouses have been operating commercially for decades. The Netherlands leads the way in supplying year-round fresh produce from these structures, and is now the second biggest food exporter in the world.

    Even in the UK, it’s common for such greenhouses to supply potted herbs, tomatoes and strawberries all year round.

    These are a half-way house to vertical farming, and are also likely to be in greater demand in the coming decades. They could well extend their reach to supply fresh nutritious food to places where food security may be particularly challenged, such as Africa, south Asia and the Middle East.

    Gail Taylor has received funding for research on vertical farming from the John B. Orr Endowment from the University of California, Davis and gift funding from the company, Plenty. Between 2021 and 2024 she was a member of the Scientific Advisory Board for the company Plantible Foods.

    ref. Five reasons why vertical farming is still the future, despite all the recent business failures – https://theconversation.com/five-reasons-why-vertical-farming-is-still-the-future-despite-all-the-recent-business-failures-248270

    MIL OSI – Global Reports

  • MIL-Evening Report: As the ‘digital oligarchy’ grows in power, NZ will struggle to regulate its global reach and influence

    Source: The Conversation (Au and NZ) – By Alexandra Andhov, Chair in Law and Technology, University of Auckland, Waipapa Taumata Rau

    The images of President Donald Trump at his inauguration surrounded by the titans of the global tech industry are a warning of what could come: a global digital oligarchy dominated by a tiny tech elite.

    Companies like Meta, Google, Microsoft, Amazon, X Corp, and OpenAI (all based in the United States) now operate beyond the control of most governments. Countries like New Zealand are increasingly struggling to keep these companies in check.

    In the past decade, New Zealand has taken several measures to curb the influence of powerful tech companies through voluntary agreements and tax legislation.

    But the digital age has fundamentally changed national sovereignty – the right of individual countries to decide the rules within their own borders.

    Big tech companies are gradually taking on functions traditionally reserved for government institutions. For example, these companies have begun to function as the arbiters of speech, controlling the visibility of certain ideas and comments.

    As recently as this month, Meta obscured searches for left-leaning topics including “Democrats”, later blaming the issue on a “technical glitch”.

    And as was widely covered in the media, Amnesty International released a report claiming that Facebook’s algorithms “proactively amplified” anti-Rohingya content in Myanmar, substantially contributing to human rights violations against the ethnic group.

    New Zealand’s attempts to regulate big tech

    A number of governments are now facing the question of how to temper the influence of these companies within their current legal frameworks.

    As New Zealand (among others) has discovered in the past decade, influencing the behaviour of these companies is easier said than done. It has repeatedly found itself struggling to effectively manage big tech’s impact on its society and economy.

    In 2018, for example, New Zealand’s Privacy Commissioner said Facebook had failed to comply with its obligations under the New Zealand Privacy Act. The company told the commissioner the Privacy Act did not apply to it.

    When the Christchurch terrorist attack was livestreamed on Facebook (owned by Meta), New Zealand authorities found themselves largely powerless to prevent the video’s spread across global platforms.

    This crisis prompted then-prime minister Jacinda Ardern to launch the Christchurch Call initiative aimed at combating online extremism by fostering collaboration between governments and tech companies.

    The goal was to develop and enforce measures such as improved content moderation, removal of extremist material, and the creation of safer online environments.

    While gaining support from more than 120 countries and tech companies, its effect depends on voluntary ongoing cooperation. Recent events suggest this ongoing cooperation is unlikely.

    In January, Meta CEO Mark Zuckerberg announced plans to get rid of content moderation in the US and possibly elsewhere. Zuckerberg has also pushed back against European Union regulations, claiming the EU’s data laws censored social media.

    Taxing big tech

    In 2019, New Zealand proposed a 3% digital tax on big tech revenue. A similar measure was introduced by France in 2020 and by Canada and Australia last year.

    While these proposals signify important steps toward holding big tech accountable, their implementation remains uncertain.

    Although the relevant tax provisions have been adopted in New Zealand, the law includes clauses allowing tax collections to be deferred until as late as 2030.

    Meanwhile, big tech continues to push back aggressively against regulation in various ways. These have ranged from threatening reduced services (such as the brief closure of TikTok in the US) to leveraging their relationships with the Trump administration against other countries.

    Using competition regulation to rein in big tech

    In December 2024, the Australian government unveiled draft legislation on big tech to level the playing field.

    The proposed law seeks to foster fair competition, prevent price gouging, and give smaller tech and news companies a chance to thrive in a landscape increasingly dominated by global giants.

    The legislation would grant the Australian Competition and Consumer Commission the authority to investigate and penalise companies with fines of up to A$50 million for restricting competition.

    The targeted behaviour includes tactics such as restricting data transfers between platforms (for example, moving contacts or photos from iPhone to Android) and limiting third-party payment options in app stores.

    The proposed law aims to put an end to these unfair advantages, ensuring a level playing field where businesses of all sizes can compete and consumers have more choices.

    Democratic governance in the digital age

    The growing power of tech platforms raises critical questions about democratic governance in the digital age.

    There is an urgent need to reconcile the global influence of tech companies with local democratic processes and to create mechanisms that safeguard individual and national sovereignty in an increasingly digital world.

    Governments need to recognise these platforms are not immutable forces of nature, but human-created systems that can be challenged, reformed or dismantled. The same digital connectivity that has empowered these corporations can become the very tool of their transformation.

    Alexandra Andhov is conducting research on Big Tech Governance, funded by the Independent Research Fund Denmark under the Inge Lehmann Programme. The author is grateful for this support and wishes to acknowledge that the research was conducted entirely independently.

    ref. As the ‘digital oligarchy’ grows in power, NZ will struggle to regulate its global reach and influence – https://theconversation.com/as-the-digital-oligarchy-grows-in-power-nz-will-struggle-to-regulate-its-global-reach-and-influence-247899

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: France’s military withdrawal presents opportunities and risks to West African states

    Source: The Conversation – Canada – By Yolaine Frossard de Saugy, PhD Candidate, International Relations, McGill University

    In early January, Côte d’Ivoire announced that French troops would be withdrawing from the country and the military base of Port-Bouët would be handed over to Côte d’Ivoire’s army. The announcement is part of a seismic shift in France’s decades-long presence across francophone Africa.

    It is the latest instance of a larger trend that has seen French troops withdraw or be expelled from France’s former sphere of influence, as the country loses diplomatic and military weight in nations it formerly colonized. Since 2022, Burkina Faso, Chad, Mali, Niger, Senegal, and now Côte d’Ivoire, have terminated defence agreements with France.

    This may present an opportunity for a long overdue assertion of sovereignty by the region’s countries. However, an ongoing threat from terror groups and the eagerness of other entities to step in could instead lead to more instability and a reinforcement of authoritarianism or regime fragmentation.

    France’s withdrawal

    Following the wave of independence in the 1960s, France entered into an array of agreements with its former colonies. These helped ensure France’s continued influence in Western Africa and its international standing.

    In addition to close political and economic ties, which included currency control by France and support to friendly leaders, this also involved the largest permanent military presence by a former colonial power, with troops stationed at various times in Cameroon, Gabon, Senegal, Burkina Faso, the Central African Republic, Djibouti, Chad, Niger, Mali and Côte d’Ivoire, as well as military assistance to others.

    This large military presence has long been controversial. Historically, France was involved in a number of covert or overt military operations with dubious ends, including deadly interventions in Cameroon in the 1960s and support for the Rwandan government during the 1994 genocide.

    More recently, it was criticized for backing of authoritarian regimes and leaders and an inadequate approach to anti-terrorism, including through the Serval and Barkhane missions in Mali and the broader Sahel region — the vast semi-arid region of Africa separating the Sahara Desert to the north and tropical savannahs to the south — between 2012 and 2022.

    Criticism has also been levelled at the neocolonial intent of France’s policy, especially in the wake of comments such as President Emmanuel Macron’s remark that African countries were not sufficiently grateful for France’s interventions, which many decried as insensitive to the historical context and implications of France’s role.

    Change was therefore long overdue, and over the past three years, a number of developments have seemed to show that France’s star was waning.

    A surge of anti-French sentiment spread across the Sahel and beyond. A series of coups in Mali, Niger and Burkina Faso put in power military leaders who were eager to shake off French presence, leading to the departure of French forces from bases there.

    The departure from Côte d’Ivoire’s Port-Bouët was more orderly, and France presented it as part of a voluntary reorganization of its presence.

    Still, it is hard not to read this withdrawal as part of a wider reckoning with the failure of past policies and a rising desire of African leaders to reclaim sovereignty. This was indeed voiced openly in the cases of Burkina Faso, Chad and Senegal, where a symbolic repudiation of French heritage is also taking place through the renaming of streets.

    Risks of foreign influence

    This moment could provide an opportunity for West African states to shake off the remnants of the power imbalance that characterized France’s presence, and reshuffle the cards of military and diplomatic co-operation. This could lead to an era of more equal partnerships and responsiveness to popular aspirations.

    There are signs that such moves are taking place in the economic area, with Mali, for instance, asserting its sovereignty over resource extraction.

    However, the security situation in the Sahel has continued to deteriorate since the French withdrawal. New partners of Burkina Faso, Chad, Mali and Niger — such as the new iterations of the Wagner group, a Russian mercenary corps used as a proxy by the Russian government to widen its influence — have failed to protect civilians or to suppress insurgencies.

    In some cases, they have even been accused of taking part in the violence. The military juntas in power have delayed promised democratic transitions, and have sometimes turned to scapegoating minorities to burnish their anti-western credentials instead.

    This situation is therefore more likely to lead to further instability, especially as Russia is consolidating its involvement in the Sahel, China seeks to make further inroads in the region to strengthen its stance as the alternative to western support, and new nations such as Turkey and even Ukraine are seeking to widen their influence and reach.

    Governments in countries like Chad seem to be turning to multiple new partners for support in maintaining security. This could help them conclude fairer agreements, but it also heightens the risk of regime fragmentation and internal violence if competing forces vie for influence.

    Sudan’s civil war, fuelled by the support of external countries like Egypt and the United Arab Emirates, offers a cautionary tale of what is at risk when multiple new entities seek access or export their rivalries to the continent.

    Asserting sovereignty

    The political landscape across West Africa is rapidly changing. France seeks new partners outside of its traditional area but sees its influence diminishing across the board. The potential for a more isolationist United States under President Donald Trump is likely to leave a power vacuum in many parts of the world, further opening the door to new forces drawn to Africa’s natural resources and geostrategic importance.

    These trends provide African countries with an opportunity to change longstanding patterns. However, they also come with heightened risks, especially in an emerging multipolar world order where mid-level powers, rising major powers and reconstituting great powers seek opportunities to assert their influence.

    The only potential counterbalance to these dangers is strong regional co-ordination between West African states.

    Mali, Niger and Burkina Faso have left the historical regional grouping ECOWAS, whose effectiveness had been hampered by its historical dependence on western funding. They have, however, formed their own alliance and there are now talks of expanding co-operation with neighbours, including Togo and Ghana.

    Whether this can at last provide truly African solutions to the continent’s challenges and offset the centrifugal forces already at play remains to be seen.

    Yolaine Frossard de Saugy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. France’s military withdrawal presents opportunities and risks to West African states – https://theconversation.com/frances-military-withdrawal-presents-opportunities-and-risks-to-west-african-states-248098

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump 2.0: the rise of an ‘anti-elite’ elite in US politics

    Source: The Conversation – France – By William Genieys, Directeur de recherche CNRS au CEE, Sciences Po

    US president Donald Trump is surrounded by a new cohort of politicians and officials. While one of his campaign promises was to overthrow the “corrupt elites” he accuses of flooding the American political arena, his second term in office has elevated elites chosen, above all, for their political loyalty to him.

    The media’s focus on Trump’s comments about making Canada the 51st US state and annexing Greenland, and on billionaire Elon Musk’s support for some far-right parties in Europe, has obscured the ambitious programme to transform the federal government that the new political elite intends to implement.

    In the wake of Trump’s inauguration on January 20, the Republican elites most loyal to the MAGA (“Make America Great Again”) leader, who staunchly oppose Democratic elites and their policies, are operating amid their party’s control over the executive and legislative branches (at least until the midterm elections in 2026), a conservative-dominated Supreme Court that includes three Trump-appointed justices, and a federal judiciary that shifted right during his first term.

    However, the political project of the Trumpist camp consists less of challenging elitism in general than attacking a specific elite: one particular to liberal democracies.

    Castigating democratic elitism

    Typical anti-elite political propaganda, along the lines of “I speak for you, the people, against the elites who betray and deceive you,” claims that a populist leader would be able to exercise power for and on behalf of the people without the mediation of an elite disconnected from their needs.

    Political theorist John Higley sees behind this form of anti-elite discourse an association between so-called “forceful leaders” and “leonine elites” (who take advantage of the former and their political success): a phenomenon that threatens the future of Western democracies.

    Since the Second World War, there has been a consensus in US politics on the idea of democratic elitism. According to this principle, elitist mediation is inevitable in mass democracies and must be based on two criteria: respect for the results of elections (which must be free and competitive); and the relative autonomy of political institutions.

    The challenge to this consensus has been growing since the 1990s with the increased polarization of American politics. It gained new momentum during and after the 2016 presidential campaign, which was marked by anti-elite rhetoric from both Republicans and Democrats (such as senators Bernie Sanders and Elizabeth Warren). At the heart of some of their diatribes was an aversion to “the Establishment” on the east and west coasts of the United States, where many prestigious financial, political and academic institutions are based, and the conspiracy notion of the “deep state”.

    The re-election of Trump, who has never admitted defeat in the 2020 presidential vote, growing political hostility and the direct involvement of tech tycoons in political communication – especially on the Republican side – further reinforce the denial of democratic elitism.

    Trump’s populism from above: a revolt of the elites

    The idea that democracy could be betrayed by “the revolt of the elites”, put forward by the US historian Christopher Lasch (1932-1994), is not new. For the anthropologist Arjun Appadurai, it is a particular feature of contemporary populism, which comes “from above.” Indeed, if the 20th century was the era of the “revolt of the masses”, the 21st century, according to Appadurai, “is characterized by the ‘revolt of the elites’.” This would explain the rise of populist autocracies (such as those currently led by Viktor Orban in Hungary, Recep Tayyip Erdogan in Turkey and Narendra Modi in India, and formerly led by Jair Bolsonaro in Brazil), but also the election successes of populist leaders in consolidated democracies (including those of Trump in the US, Giorgia Meloni in Italy, and Geert Wilders in the Netherlands, for example).

    As Appadurai explains, the success of Trumpian populism, which represents a revolt by ordinary Americans against the elites, casts a veil over the fact that, following Trump’s victory in November, “it is a new elite that has ousted from power the despised Democratic elite that had occupied the White House for nearly four years.”

    The aim of this “alter elite” is to replace the “regular” Democratic elites, but also the moderate Republicans, by deeply discrediting their values (such as liberalism and so-called “wokeism”) and their supposedly corrupt political practices. As a result, this populism “from above” carried out by the President’s supporters constitutes an alternative elite configuration, the effects of which on American democratic life could be more significant than those observed during Trump’s first term.

    Beyond the idea of a ‘Muskoligarchy’

    The idea that we are witnessing the formation of a “Muskoligarchy” – in other words, an economic elite (including tech barons such as Jeff Bezos, Mark Zuckerberg and Marc Andreessen) rallying around the figurehead of Elon Musk, whom Trump asked to lead what the president has called a “Department of Government Efficiency” (DOGE) – is seductive. It perfectly combines the vision of an alliance between a “conspiratorial, coherent, conscious” ruling class and an oligarchy made up of the “ultra-rich”. For the Financial Times columnist Martin Wolf, it is even a sign of the development of “pluto-populism”. (It is also worth noting that former president Joe Biden, in his farewell speech, referred to “an oligarchy… of extreme wealth” and “the potential rise of a tech-industrial complex.”)

    However, some observers are cautious about the advent of a “Muskoligarchy.” They point to the sociological eclecticism of the new Trumpian elite, whose facade of unity is held together above all by a political loyalty, for the time being unfailing, to the MAGA leader. The fact remains, however, that the various factions of this new “anti-elite” elite are converging around a common agenda: to rid the federal government of the supposed stranglehold of Democratic “insiders.”

    An ‘anti-elite’ elite against the ‘deep state’

    In his presidential inauguration speech in 1981, Ronald Reagan said: “Government is not the solution to our problem; government is the problem.” The anti-elitism of the Trump elite is inspired by this diagnosis, and defends a simple political programme: rid democracy of the “deep state.”


    Although the idea that the US is “beleaguered” by an “unelected and unaccountable elite” and “insiders” who subvert the general interest has been shown to be unfounded, it is nonetheless predominant in the new Trump Administration.

    This conspiracy theory has been taken to the extreme by Kash Patel, the candidate being considered to head the FBI. In his book, Government Gangsters, a veritable manifesto against the federal administration, the former lawyer writes about the need to resort to “purges” in order to bring elite Democrats to justice. He lists around 60 people, including Biden, ex-secretary of state Hillary Clinton and ex-vice president Kamala Harris.

    Government Gangsters, Kash Patel’s controversial book.
    Google Books

    The appointment of Russell Vought as head of the Office of Management and Budget at the White House, a person who is known for having sought to obstruct the transition to the Biden Administration in 2021, also highlights the hard turn that the Trump administration is likely to take.

    Reshaping the state around political loyalty

    To “deconstruct the administrative state”, the “anti-elite” elites are relying on Project 2025, a 900-plus page programme report that the conservative think-tank The Heritage Foundation, which published it, says was produced by “more than 400 scholars and policy experts.” According to former Project 2025 director Paul Dans, “never before has the entire movement… banded together to construct a comprehensive plan” for this purpose. On this basis, the “anti-elite” elite want to impose loyalty to Project 2025 on federal civil servants.

    But this idea is not new. At the end of his first term, Trump issued an executive order facilitating the dismissal of statutory federal civil servants occupying “policy-related positions” and considered to be “disloyal”. The decree was rescinded by president Biden, but Trump on his first day back in office signed an executive order that seeks to void Biden’s rescission. As President, Trump is also able to allocate senior positions within the federal administration to his supporters.

    The “anti-elite” elite not only want to reduce the size of the state, as was the case under Reagan’s “neoliberalism”, but to deconstruct and rebuild it in their own image. Their real aim is a more lasting victory: the transformation of democratic elitism into populist elitism.

    The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their research organisation.

    ref. Trump 2.0: the rise of an ‘anti-elite’ elite in US politics – https://theconversation.com/trump-2-0-the-rise-of-an-anti-elite-elite-in-us-politics-248180


  • MIL-OSI Global: Engineering the social: Students in this course use systems thinking to help solve human rights, disease and homelessness

    Source: The Conversation – USA – By Raúl Ordóñez, Professor of Electrical and Computer Engineering, University of Dayton

    An engineering education can equip students to work on broader social issues. Photosomnia/E+ via Getty Images

    Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

    Title of course:

    Engineering Systems for the Common Good

    What prompted the idea for the course?

    As a control systems researcher, I have long felt that control systems – and systems science in general – have much to contribute to solving social problems.

    Control systems make other systems behave in some desired manner. Think of the cruise control in a car, which keeps its speed constant, or the thermostat in a house that regulates temperature.
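In code, a feedback controller of this kind is just a loop that measures an error and applies a correction. The Python sketch below is purely illustrative (it is not material from the course; the gain, heat-loss rate and temperatures are made-up values) and simulates a proportional thermostat in discrete time:

```python
# A proportional feedback controller in the spirit of a thermostat.
# Illustrative sketch only: gain, heat-loss rate and temperatures
# are assumed values, not drawn from the article above.

def simulate_thermostat(setpoint=21.0, outside=5.0, start_temp=10.0,
                        gain=0.5, loss=0.1, steps=60):
    """Each time step, measure the error (setpoint minus current
    temperature), apply heat proportional to it, and let the room
    leak heat toward the outside temperature."""
    temp = start_temp
    history = []
    for _ in range(steps):
        error = setpoint - temp           # feedback signal
        heating = max(0.0, gain * error)  # a heater cannot cool
        temp += heating - loss * (temp - outside)
        history.append(temp)
    return history

temps = simulate_thermostat()
```

Because the heater output is simply proportional to the error, the simulated room climbs toward the setpoint but settles slightly below it: the classic steady-state offset of pure proportional control under a constant disturbance, which an integral term would remove.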

    I wanted to know whether engineers could treat society and social phenomena as systems in the engineering sense. That way, students and researchers could mathematically model and even simulate these phenomena using computers.

    Control systems engineering offers a set of powerful analysis and design tools. I wanted to know whether my students and I could apply these methods to things such as policymaking to help address societal problems.

    What does the course explore?

    In this course, students learn fundamental systems theory concepts, such as block diagrams, feedback loops and discrete-time dynamics. They apply these concepts to mathematically model and analyze social systems.

    In the class, I talk with the students about human rights. We think about how this powerful idea applies to social systems. This systems framework helps us approach social justice issues in a methodical, mathematical manner.

    In Raúl Ordóñez’s class at the University of Dayton, students take engineering concepts and apply them to societal issues.
    Shawn Robinson/University of Dayton

    Students use simulation software to model systems such as disease epidemics, the viral spread of ideas, the tragedy of the commons and homelessness, among others.

    Importantly, they learn that some social phenomena can be methodically studied and engineered in a quantifiable manner. For example, they can use numbers and data to experiment and evaluate how introducing vaccines affects disease spread.
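The vaccine example can be illustrated with a standard SIR (susceptible-infected-recovered) epidemic model. The sketch below is a generic textbook construct, not the course's actual code, and every parameter is an assumed, illustrative value:

```python
# Minimal discrete-time SIR epidemic model with a vaccinated fraction.
# Generic textbook sketch; beta, gamma and population sizes are
# illustrative assumptions, not values from the course.

def sir_outbreak(vaccinated=0.0, beta=0.3, gamma=0.1,
                 population=1000, initial_infected=1, steps=365):
    """Return the total number of people ever infected when a given
    fraction of the population starts out vaccinated (immune)."""
    s = population * (1 - vaccinated) - initial_infected  # susceptible
    i = float(initial_infected)                           # infected
    r = population * vaccinated                           # immune or recovered
    for _ in range(steps):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    # Everyone in r beyond the initially vaccinated caught the disease.
    return r - population * vaccinated

no_vax = sir_outbreak(vaccinated=0.0)    # most of the population infected
high_vax = sir_outbreak(vaccinated=0.8)  # outbreak fizzles out
```

With these assumed numbers the basic reproduction number is beta/gamma = 3, so vaccinating 80% of the population (above the classic 1 − 1/R0 ≈ 67% threshold) collapses the outbreak from most of the population to a handful of cases: exactly the kind of quantifiable policy experiment the course describes.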

    By the end of the course, students gain a deeper understanding of the connection between engineering principles and tools and human rights and society.

    Why is this course relevant now?

    This course helps bridge the gap between engineering and social sciences by bringing concepts from human rights and social justice to engineering students. It teaches them how the powerful engineering tools they learn throughout the engineering curriculum can directly serve the common good.

    What’s a critical lesson from the course?

    The course is a concrete step toward teaching engineering and science students that engineering has more to offer to society than its direct applications. Students learn that a partnership between the humanities and engineering is not only possible but strongly desirable for the advancement of the common good.

    What materials does the course feature?

    There is no one textbook that deals with all the topics in this course, although the book “Humanitarian Engineering: Advancing Technology for Sustainable Development,” third edition, by Kevin M. Passino, is a very useful resource. I have mostly developed my own materials, including my set of lecture notes, projects and numerical simulation code.

    Many engineers use engineering tools to help people and communities.

    What will the course prepare students to do?

    The course aims to prepare students to apply common engineering tools such as differential equations, signals and systems, systems analysis, mathematical models and numerical simulation to the analysis of social problems, with an emphasis on human rights implications.

    It also introduces social modeling as a powerful method for understanding social issues and assessing how various policies affect human rights.

    My goal is to produce engineering students who can meaningfully contribute to policymaking by using engineering tools to assess the consequences of social and economic policies.

    Dr. Kevin M. Passino was my doctoral research adviser at the Ohio State University, where I did my PhD.

    ref. Engineering the social: Students in this course use systems thinking to help solve human rights, disease and homelessness – https://theconversation.com/engineering-the-social-students-in-this-course-use-systems-thinking-to-help-solve-human-rights-disease-and-homelessness-242893


  • MIL-OSI Global: Nutrition advice is rife with misinformation − a medical education specialist explains how to tell valid health information from pseudoscience

    Source: The Conversation – USA – By Aimee Pugh Bernard, Assistant Professor of Immunology and Microbiology, University of Colorado Anschutz Medical Campus

    If a health claim about a dietary intervention sounds too good to be true, it probably is.
    Mizina/iStock via Getty Images Plus

    The COVID-19 pandemic illuminated a vast landscape of misinformation about many topics, science and health chief among them.

    Since then, information overload continues unabated, and many people are rightfully confused by an onslaught of conflicting health information. Even expert advice is often contradictory.

    On top of that, people sometimes deliberately distort research findings to promote a certain agenda. For example, trisodium phosphate is a common food additive in cakes and cookies that is used to improve texture and prevent spoilage, but wellness influencers exploit the fact that a similarly named substance is used in paint and cleaning products to suggest it’s dangerous to your health.

    Such claims can proliferate quickly, creating widespread misconceptions and undermining trust in legitimate scientific research and medical advice. Social media’s rise as a news and information source further fuels the spread of pseudoscientific views.

    Misinformation is rampant in the realm of health and nutrition. Findings from nutrition research are rarely clear-cut because diet is just one of many behaviors and lifestyle factors affecting health, but the simplicity of using food and supplements as a cure-all is especially seductive.

    I am an assistant professor specializing in medical education and science communication. I also train scientists and future health care professionals to communicate their science to the general public.

    In my view, countering the voices of social media influencers and health activists promoting pseudoscientific health claims requires leaning into the science of disease prevention. Extensive research has produced a body of evidence-based practices and public health measures that have consistently been shown to improve the health of millions of people around the world. Evaluating popular health claims against the yardstick of this work can help distinguish which ones are based on sound science.

    To parse pseudoscientific claims from sound advice about health and nutrition, it’s crucial to evaluate the information’s source.
    tadamichi/Getty Images

    Navigating the terrain of tangled information

    Conflicting information can be found on just about everything we eat and drink.

    That’s because a food or beverage is rarely just good or bad. Instead, its health effects can depend on everything from the quantity a person consumes to their genetic makeup. Hundreds of scientific studies describe coffee’s health benefits and, on the flip side, its health risks. A bird’s-eye view can point in one direction or another, but news articles and social media posts often make claims based on a single study.

    Things can get even more confusing with dietary supplements because people who promote them often make big claims about their health benefits. Take apple cider vinegar, for example – or ACV, if you’re in the know.

    Apple cider vinegar has been touted as an all-natural remedy for a variety of ailments, including digestive issues, urinary health and weight management. Indeed, some studies have shown that it might help lower cholesterol, in addition to having other health benefits, but overall those studies have small sample sizes and are inconclusive.

    Advocates of this substance often claim that one particular component of it – the cloudy sediment at the bottom of the bottle termed “the mother” – is especially beneficial because of the bacteria and yeast it contains. But there is no research that backs the claim that it offers any health benefits.

    One good rule of thumb is that health hacks promising quick fixes are almost always too good to be true. And even when supplements do offer some health benefits under specific circumstances, it’s important to remember that they are largely exempt from Food and Drug Administration regulations. That means the products might contain more or less of the ingredients listed on their labels, or other ingredients not listed at all, which can potentially cause harms such as liver toxicity.

    It’s also important to keep in mind that the global dietary supplements industry is worth more than US$150 billion per year, so companies – and wellness influencers – selling supplements have a financial stake in convincing the public of their value.

    Misinformation about nutrition is nothing new, but that doesn’t make it any less confusing.

    How nutrition science gets twisted

    There’s no doubt that good nutrition is fundamental for your health. Studies consistently show that a balanced diet containing a variety of essential nutrients can help prevent chronic diseases and promote overall well-being.

    For instance, minerals such as calcium and iron support bone health and oxygen circulation in the blood, respectively. Proteins are essential for muscle repair and growth, and healthy fats, like those found in avocados and nuts, are vital for brain health.

    However, pseudoscientific claims often twist such basic facts to promote the idea that specific diets or supplements can prevent or treat illness. For example, vitamin C is known to play a role in supporting the immune system and can help reduce the duration and severity of colds.

    But despite assertions to the contrary, consuming large quantities of vitamin C does not prevent colds. In fact, the body needs only a certain amount of vitamin C to function properly, and any excess is simply excreted.

    Companies sometimes claim their supplement is “scientifically proven” to cure illness or boost brain function, with no credible research to back it up.

    Some companies overstate the benefits while underplaying the hazards.

    For example, wellness influencers have promoted raw milk over pasteurized milk as a more natural and nutritious choice, but consuming it is risky. Unpasteurized milk can contain harmful bacteria that lead to gastrointestinal illness and, in some cases, pathogens that cause much more serious and potentially life-threatening diseases such as avian influenza, or bird flu.

    Such dietary myths aren’t harmless. Reliance on nutrition alone can lead to neglecting other critical aspects of health, such as regular medical checkups and lifesaving vaccinations.

    The lure of dietary myths has led people with cancer to replace proven science-backed treatments, such as chemotherapy or radiation, with unproven and misleading nutrition programs.

    How to spot less-than-solid science

    Pseudoscience exploits your insecurities and emotions, taking advantage of your desire to live the healthiest life possible.

    While the world around you may be uncertain and out of your control, you want to believe that at the very least, you have control over your own health. This is where the wellness industry steps in.

    What makes pseudoscientific claims so confusing is that they use just enough scientific jargon to sound believable. Supplements or powders that claim to “boost immunity” often list ingredients such as adaptogens and superfoods. While these words sound real and convincing, they actually don’t mean anything in science. They are terms created by the wellness industry to sell products.

    I’ve researched and written about reliable ways to distinguish science facts from false health claims. To stay alert and find credible information, I’d suggest you follow a few key steps.

    First, check your emotions – strong emotional reactions, such as fear and anger, can be a red flag.

    Next, check that the author has experience or expertise in the relevant field. If they’re not an expert, they might not know what they are talking about. It’s always a good idea to make sure the source is reputable – ask yourself, would this source be trusted by scientists?

    Finally, search for references that back up the information. If very little or nothing else exists in the science world to back up the claims, you may want to put your trust in a different source.

    Following these steps will separate the facts from fake news and empower you to make evidence-based decisions.

    Aimee Pugh Bernard is an unpaid board member for Immunize Colorado

    ref. Nutrition advice is rife with misinformation − a medical education specialist explains how to tell valid health information from pseudoscience – https://theconversation.com/nutrition-advice-is-rife-with-misinformation-a-medical-education-specialist-explains-how-to-tell-valid-health-information-from-pseudoscience-246478


  • MIL-OSI Global: Getting mail to your door is just one part of what the postmaster general does

    Source: The Conversation – USA – By Jena Martin, Professor of Law, St. Mary’s University

    Postal workers sort through mail and packages. Frederic J. Brown/AFP via Getty Images

    The postmaster general is responsible for getting billions of pieces of mail across the globe, managing hundreds of thousands of employees and caring for some of the most vulnerable Americans.

    The agency is currently run by Postmaster General Louis DeJoy, who served in President Donald Trump’s first administration and during President Joe Biden’s term as well. He is one of the few key advisers to serve in both Trump administrations.

    I’m a law professor who has studied the United States Postal Service and the role of the postmaster general.

    Here’s what having the job of overseeing the Postal Service entails. Spoiler: It’s about more than getting your mail delivered.

    Sprawling duties of the postmaster

    The postmaster general oversees a vast operation.

    Over 44% of the world’s mail is processed and delivered by the U.S. Postal Service, making it the largest delivery service in the world.

    In 2023 alone, the Postal Service handled 116.2 billion pieces of mail. And while processing and delivering mail is the key component of the Postal Service’s mission, it has other responsibilities as well.

    In many ways, in fact, it’s the nondelivery parts of the organization that have the biggest impact on the U.S. economy.

    In 2023, USPS owned or leased 22,873 properties around the country. To place this in perspective, the General Services Administration – known as “America’s landlord” – owns or leases only 8,800 properties.

    The agency also paid US$2 billion in salary and benefits to its 525,469 career employees and processed more than 8.5 million passport applications.

    Finally, USPS has a mandate that supports the health of many Americans. The service’s “last mile” delivery commitment ensures that all Americans – even those living in rural communities – receive mail delivery six days a week. This is particularly important for people without easy access to medical services, as it often provides lifesaving medications to people in need.

    Those are all official duties. Unofficially, the Postal Service has long been known to assist elderly citizens and respond to emergency situations that occur on letter carriers’ routes. In early January 2025, for example, a Massachusetts mail carrier was able to save a house from burning by quickly extinguishing a fire.

    As my co-author Matt Titolo and I have written elsewhere, “Americans depend on USPS for a host of essential services including food, medicine, paying bills, shopping, and running small businesses.”

    Deep roots in US history

    That deep connection with communities has been a part of USPS since its founding. In fact, the postal system is older than the nation itself, with Benjamin Franklin serving as the first head of the organization beginning in 1775.

    When the U.S. Constitution was ratified in 1789, it included Article 1, Section 8 – generally known as the postal clause – which explicitly gives Congress the power “to establish Post Offices and post Roads” and “to make all Laws which shall be necessary and proper” to implement the task.

    A faded postcard sent in 1912.
    Jena Ardell via Getty Images

    Until 1971 the postmaster general was a Cabinet-level position and fifth in the presidential line of succession – coming right after the attorney general and right before the secretary of the Interior. The postmaster general was removed from the Cabinet, and the line of succession, in 1971 when Congress reorganized the Post Office and gave it its new name of the U.S. Postal Service.

    Since that reorganization, the president no longer has the power to appoint – or fire – the postmaster general. That power lies with the Board of Governors of the Postal Service, whose members are appointed by the president with the advice and consent of the Senate.

    The future of the Postal Service

    Over the years, postmasters general have discussed moving USPS away from its roots as a service-oriented organization and toward a typical business operation. Presidential candidates, including Trump, have called for either full or partial privatization of the agency.

    Indeed, USPS faces continuous deficit problems. But privatization and a resulting focus on profits would likely increase the cost of mailing a letter, a change that would disproportionately affect low-income individuals and small businesses – and could even result in service cuts to rural areas, making life for Americans living there harder and less healthy.

    As Forbes reports, critics and proponents of the move to privatize acknowledge it could result in “fewer days of mail services, longer mail delivery timelines or less access to USPS services.”

    This story is part of a series of profiles of Cabinet and high-level administration positions.

    Prof. Martin’s husband has been employed with the Postal Service for the last twenty-nine years.

    ref. Getting mail to your door is just one part of what the postmaster general does – https://theconversation.com/getting-mail-to-your-door-is-just-one-part-of-what-the-postmaster-general-does-246861


  • MIL-OSI Global: Medical research depends on government money – even a day’s delay in the intricate funding process throws science off-kilter

    Source: The Conversation – USA – By Aliasger K. Salem, Associate Vice President for Research and Professor of Pharmaceutical Sciences, University of Iowa

    Of the tens of thousands of grant applications submitted to the National Institutes of Health, only around 1 in 5 is funded. Sean Gladwell/Moment via Getty Images

    In the early days of the second Trump administration, a directive to pause all public communication from the Department of Health and Human Services created uncertainty and anxiety among biomedical researchers in the U.S. This directive halted key operations of numerous federal agencies like the National Institutes of Health, including those critical to advancing science and medicine.

    These operations included a hiring freeze, travel bans and a pause on publishing regulations, guidance documents and other communications. The directive also suspended the grant review panels that determine which research projects receive funding.

    As a result of these disruptions, NIH staff have reported being unable to meet with study participants or recruit patients into clinical trials, delays in submitting research findings to science journals, and rescinded job offers.

    Shorter communication freezes in the first few days of a new administration aren’t uncommon. But the consequences of a freeze lasting weeks or potentially longer underscore the critical role the federal government plays in supporting biomedical research. It also brings the intricate processes through which federal research grants are evaluated and awarded into the spotlight.

    I am a member of a federal research grant review panel, as well as a scientist whose own projects have undergone this review process. My experience with the NIH has shown me that these panels come to a decision on the best science to fund through rigorous review and careful vetting.

    How NIH study sections work

    At the heart of the NIH’s mission to advance biomedical research is a careful and transparent peer review process. Key to this process are study sections – panels of scientists and subject matter experts tasked with evaluating grant applications for scientific and technical merit. Study sections are overseen by the Center for Scientific Review, the NIH’s portal for all incoming grant proposals.

    A typical study section consists of dozens of reviewers selected based on their expertise in relevant fields and with careful screening for any conflicts of interest. These scientists are a mix of permanent members and temporary participants.

    I have had the privilege of serving as a permanent chartered member of an NIH study section for several years. This role requires a commitment of four to six years and provides an in-depth understanding of the peer review process. Despite media reports and social media posts indicating that many other panels have been canceled, a section meeting I have scheduled in February 2025 is currently proceeding as planned.

    Evaluating projects for their scientific merit and potential impact is an involved process.
    Center for Scientific Review

    Reviewers analyze applications using key criteria, including the significance and innovation of the research, the qualifications and training of the investigators, the feasibility and rigor of the study design, and the environment the work will be conducted in. Each criterion is scored and combined into an overall impact score. Applications with the highest scores are sent to the next stage, where reviewers meet to discuss and assign final rankings.
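As a rough numeric illustration of how panel scores become a final figure: NIH reviewers score on a 1 (exceptional) to 9 (poor) scale, and the overall impact score is reported as the panel average scaled to a 10-90 range. The sketch below simplifies this; actual procedures include rounding and eligibility rules not modeled here:

```python
# Simplified sketch of turning reviewers' scores into an overall
# impact score. The 1-9 scale and the times-10 panel average reflect
# NIH convention, but the details here are simplified assumptions.

def overall_impact(reviewer_scores):
    """Average the panel's scores (1 = exceptional, 9 = poor) and
    scale to the 10-90 overall impact range."""
    if not all(1 <= s <= 9 for s in reviewer_scores):
        raise ValueError("scores must be on the 1-9 scale")
    return round(10 * sum(reviewer_scores) / len(reviewer_scores))

score = overall_impact([2, 2, 3])  # returns 23 (lower is better)
```

A strong application scored 2, 2 and 3 by three reviewers thus lands at 23, well inside the range that typically advances to the discussion stage.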

    Because no system is perfect, the NIH is constantly reevaluating its review process for potential improvements. For example, in a change that was proposed in 2024, new submissions from Jan. 25, 2025, onward will be reviewed using an updated scoring system that does not rate the investigator and environment but takes these criteria into account in the overall impact score. This change improves the process by increasing the focus of the review on the quality and impact of the science.

    From review to award

    Following peer review, applications are passed to the NIH’s funding institutes and centers, such as the National Institute of Allergy and Infectious Diseases or the National Cancer Institute, where program officials assess the applications’ alignment with the priorities and budgets of institutes’ relevant research programs.

    A second tier of review is conducted by advisory councils composed of scientists, clinicians and public representatives. In my experience, study section scores and comments typically carry the greatest weight. Public health needs, policy directives and ensuring that one type of research is not overrepresented relative to other areas are also considered in funding decisions. These factors can change with shifts in administrative priorities.

    Grant awards are typically announced several months after the review process, although administrative freezes or budgetary uncertainties can extend this timeline. Last year, approximately US$40 billion was awarded for biomedical research, largely through almost 50,000 competitive grants to more than 300,000 researchers at over 2,500 universities, medical schools and other research institutions across the U.S.

    Getting federal funding for research is a highly competitive process. On average, only 1 in 5 grant applications is funded.

    Medical research often follows a strict timeline.
    gorodenkoff/iStock via Getty Images Plus

    Consequences of an administrative freeze

    The Trump administration’s initial freeze paused several steps in the federal research grant review process. Some study section meetings were postponed indefinitely, and program officials faced delays in processing applications. Research groups that rely on NIH funding for ongoing projects can face cash flow challenges, potentially forcing them to scale back research activities or temporarily reassign staff.

    Because my own study section meeting is still scheduled to take place in February, I believe these pauses are temporary. This is consistent with a recent follow-up memo from acting HHS Secretary Dorothy Fink, stating that the directive would be in effect through Feb. 1.

    Importantly, the pause underscores the fragility of the research funding pipeline and the cascading effects of administrative uncertainty. Early-career scientists who often rely on timely grant awards to establish their labs are particularly vulnerable, heightening concerns about workforce sustainability in biomedical research.

    As the NIH and research community navigate these pauses, this chapter serves as a reminder of the critical importance of stable and predictable funding systems. Biomedical research in the U.S. has historically maintained bipartisan support. Protecting the NIH’s mission of advancing human health from political or administrative turbulence is critical to ensure that the pursuit of scientific innovation and public health remains uncompromised.

    Aliasger K. Salem receives funding from the National Institutes of Health. He serves on the Executive Board of the American Association for Pharmaceutical Scientists.

    ref. Medical research depends on government money – even a day’s delay in the intricate funding process throws science off-kilter – https://theconversation.com/medical-research-depends-on-government-money-even-a-days-delay-in-the-intricate-funding-process-throws-science-off-kilter-248290

    MIL OSI – Global Reports

  • MIL-OSI Global: 4 steps to building a healthier relationship with your phone

    Source: The Conversation – Canada – By Jamie Gruman, Professor of Organizational Behaviour, University of Guelph

    Being constantly connected to your electronic devices, and the social media they enable, may be bad for your health and well-being, and working remotely only compounds these challenges.

    Until very recently, I didn’t have a smartphone. In 2018, I wrote an article outlining the benefits of not being connected to the world through a phone. I was perfectly content living a largely disconnected life.

    However, since that time, things have changed.

    It is increasingly difficult to manage life without a smartphone. I recently took my family to a baseball game and would have been unable to access the ballpark without a smartphone because the phone serves as your tickets. Without a phone, I might not be able to enter a concert I bought tickets for, and it is increasingly difficult to order takeout. Reluctantly, I now own a smartphone.


    Working from home, or remotely, has only magnified these challenges. Being constantly electronically connected can make it difficult to separate work from home, leaving you perpetually “on call” and in a constant state of activation.

    In general, excessive smartphone use is associated with anxiety, depression and compromised sleep. Further evidence suggests that people who stay in contact with work while physically outside the workplace experience higher levels of distress than those who leave work behind when they depart.

    So how can you manage if your home is your remote workplace? These four tactics can help you establish a clear boundary between work and home.

    1. Create physical boundaries

    Use physical space or objects to create a separation between work and home. For example, closing or locking the door to a home office creates a physical and psychological barrier that keeps you away from your laptop and helps you split your work life from your home life.

    If you do not have a home office, you may have a dedicated work area. Erecting a divider, such as a folding screen or even an unused bed sheet, can serve the same purpose.

    To maintain a strict separation of work and home, consider getting a work phone to separate work from personal communications. Outside of work, consider leaving your phone at home when going out for leisure activities in the evening or on weekends to help you escape electronics completely — though be sure to let trusted individuals know where you will be if you plan on disconnecting for an extended period of time.

    Simply put, keep your work space separate and view your phone as nothing more than a highly advanced landline of old, plugged into a specific area of your home and unable to be taken further.

    2. Create temporal boundaries

    Set boundaries around when you will address things, and how much time you will devote to work. It is more and more common to see messages in email signatures noting the days and hours during which people will respond to messages. This is a positive development.

    You can also block out time in your schedule to address work and non-work issues. If you have a phone that you use exclusively for work, turn it off and charge it during the times you don’t intend to be working. Protecting your time with such tactics is an effective way to promote work-life balance and maintain a healthy relationship with technology.

    3. Create behavioural boundaries

    Establish behaviours which help you separate work from home. Turning off the ringer and buzzer on your phone prevents you from being distracted and disturbed when enjoying leisure time.

    If your work involves social media, then try using different social media platforms for work and non-work to help you avoid being inadvertently drawn into work-related matters when you are trying to enjoy personal time. Or, consider switching to one of the many new “dumbphones” entering the market.


    Read more: Does being away from your smartphone cause you anxiety? The fact that it makes you available 24/7 could be the reason

    You can also team up with others. In the same way that doctors in a clinic will schedule one partner to be on call at a time so that the other partners can fully escape from work after hours, you can join forces with others who do similar work and redirect calls on a rotating basis so you do not have to worry about always being contacted.

    4. Create communication boundaries

    Once these tactics have been established, you should communicate them. Establish expectations about when you will and won’t be available. Note that this may require some negotiation.

    If people contact you out of ignorance of your personal policy, simply advise them of it. If they intentionally violate your boundary, consider your relationship with the violator before addressing them. You don’t want to rebuke your boss, but you should be firm in protecting your boundaries.

    Stay in control

    In the end, you need to ensure that you own your phone and not the other way around.

    When used excessively, electronic devices can become a chain that shackles us, as opposed to a tool that enables us. Our phones can become an addiction. Like any other form of addiction, we lose control of our phones when they make demands of us that we feel compelled to answer.

    There are times when work or urgent situations require us to be electronically available. However, outside of the times you must be available, any time you feel your phone making a demand of you, turn it off.


    Read more: What millennials and gen Z professionals need to know about developing a meaningful career

    Now that I have a smartphone, some things in life are easier and more pleasant. I can avoid traffic jams when driving. My wife and I can discuss purchases before buying, and I can play games on my phone while waiting for a friend to arrive at a restaurant. But I don’t allow the phone to dictate how I live.

    Acquaintances of mine sometimes get upset when they text me, because I don’t keep my phone on my hip and usually don’t respond right away. If they voice their displeasure, I’m secretly pleased; it reminds me that I have a healthy relationship with my phone. I’m in command of it. It’s not in command of me.

    Jamie Gruman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. 4 steps to building a healthier relationship with your phone – https://theconversation.com/4-steps-to-building-a-healthier-relationship-with-your-phone-235920

    MIL OSI – Global Reports

  • MIL-OSI Global: $Trump and $Melania crypto tokens illustrate the risks posed by trendy meme coins

    Source: The Conversation – Canada – By Anwar Sheluchin, PhD Candidate, Political Science, McMaster University

    An image on a Trump meme coin website. (GetTrumpMemes.com)

    Meme coins like the ones recently launched by United States President Donald Trump and his wife, Melania, are a hot trend in the cryptocurrency ecosystem. The rise of these digital tokens reflects the influence of internet culture and community-driven hype on the market, distinguishing them from more traditional cryptocurrencies with well-defined uses or technical foundations.

    The value of a meme coin is often driven by social media hype, community engagement and celebrity endorsements. But political meme coins seem to offer a new use: the potential to turn civic engagement into speculative assets.

    As someone who researches financial governance and digital currencies, I want to examine these cryptocurrency initiatives and the risks they carry. This is not intended as financial advice.

    Politics meets crypto

    In recent years, the cryptocurrency landscape has witnessed the emergence of political meme coins, digital tokens centred around political figures or movements.

    During the 2024 U.S. presidential election, a number of political meme coins emerged, inspired by political figures like Trump, Joe Biden and Kamala Harris. These coins, often unaffiliated with the politicians they reference, typically have misspelled names (for example, Jeo Boden instead of Joe Biden).

    Political meme coins merge finance, technology and politics in an unprecedented way, potentially serving as a gauge of public sentiment and political trends.

    Trump’s official $Trump token is a prime example of how cryptocurrencies can transform political support into a financial product. However, the value of a meme coin is highly speculative, as it often relies on public perception and market demand, among other things, rather than any intrinsic worth.

    According to the terms and conditions on the site where the coins are sold, “Trump Memes are intended to function as an expression of support” and come with “absolutely no promise or guarantee that the Trump Memes will increase in value or maintain the same value as the amount you paid.”

    This disclaimer highlights the speculative nature of such tokens while also raising ethical concerns about the potential to exploit political supporters for financial gain.

    MAGA credit card

    Trump’s meme coin isn’t his first venture into crypto. Previously, he released a series of digital trading cards (NFTs) that enabled cardholders to have dinner with the president.
    Third parties are building on the hype around Trump and his brand, releasing products like the limited-edition MAGA Card.

    Described as “a collector’s item and the ultimate way to spend your $TRUMP tokens,” the credit card claims to integrate Trump’s meme coin with everyday financial transactions in a bid to appeal to supporters of the president’s MAGA movement.

    However, The American Patriot’s Card — the company behind the credit card — does not appear to have any affiliation with Trump. Unlike the $Trump token, which clearly discloses its connection to Trump, the MAGA Card lacks such transparency, illustrating how the door has been opened to misrepresentation and opportunistic marketing schemes that exploit political supporters.

    Regulatory environment

    The cryptocurrency industry spent millions during the 2024 U.S. election backing crypto-friendly candidates and selling the story that crypto voters are an important voting bloc.

    This investment aimed to shape political discourse, leading presidential candidates to make promises and propose policies that aligned with the interests of the cryptocurrency industry.

    While Trump has signalled his intention to provide clear regulatory guidelines for the cryptocurrency industry, the launch of his meme coin — coupled with low public understanding of cryptoassets — could lead to financial losses from risky and speculative investments.

    Take, for example, pump-and-dump schemes, which have become relatively common in the cryptocurrency ecosystem. In these schemes, promoters artificially inflate the price of an asset in order to sell it at a profit. After the asset is “dumped,” the price crashes, leaving the remaining investors with significant losses.

    Without appropriate guardrails in place, the need to protect investors becomes increasingly urgent.

    Relevance to Canada

    The Canadian government has expressed some concern over the role of cryptocurrency in politics. Compared to the U.S., Canada has strict campaign financing rules aimed at preventing the undue influence of money in politics and ensuring a fair and transparent democratic process.

    This means that the cryptocurrency industry likely won’t be able to influence Canadian elections in the same way they might have south of the border. Canada’s existing regulatory framework has already led to several cryptocurrency exchanges leaving the country.

    Currently, political entities in Canada can only accept cryptocurrency contributions if Elections Canada can verify the public wallet addresses and transaction amounts involved.

    However, Bill C-65 — the Electoral Participation Act — proposes regulatory requirements related to contributions that are “difficult to trace.” Specifically, political parties and candidates would be prohibited from accepting contributions in the form of “a cryptoasset, money order or prepaid payment method.” The recent prorogation of Parliament has shelved the amendments proposed in C-65, but these concerns remain relevant for future legislation.

    Risky convergence

    Discussions in the House of Commons on Bill C-65, particularly regarding cryptoasset donations, emphasize the need for a ban to prevent foreign entities from influencing Canadian elections.

    This was likely a response to concerns about foreign entities financially supporting the so-called Freedom Convoy through cryptocurrency donations, despite CSIS stating that the money did not appear to be coming from foreign states, organizations or citizens.

    The rise of political meme coins demonstrates how politics, finance and technology are merging in new and sometimes risky ways. While these coins may seem like a joke or a new way to engage with politics, the absence of proper regulations could leave political supporters vulnerable to exploitation for financial gain.

    Anwar Sheluchin receives funding from the Social Sciences and Humanities Research Council of Canada.

    ref. $Trump and $Melania crypto tokens illustrate the risks posed by trendy meme coins – https://theconversation.com/trump-and-melania-crypto-tokens-illustrate-the-risks-posed-by-trendy-meme-coins-247781

    MIL OSI – Global Reports

  • MIL-OSI Global: Donors are down, but dollars are up – how US charitable giving is changing

    Source: The Conversation – USA – By Una Osili, Professor of Economics and Philanthropic Studies; Associate Dean for Research and International Programs, Lilly Family School of Philanthropy, Indiana University

    Although the pie is shrinking, the remaining slices are giving more.
    Say-Cheese/iStock via Getty Images Plus

    Although the US$557 billion Americans gave to charity in 2023 marked a 2.1% decline in inflation-adjusted terms, U.S. donations have increased significantly over the past two decades. Giving has grown by about 42% since 2003, according to the annual Giving USA report – which our team at the Indiana University Lilly Family School of Philanthropy researches and writes in partnership with the Giving USA Foundation.

    While overall charitable funds have expanded according to the most recent data available, the share of Americans who give to charitable causes has fallen. It plummeted from 66.2% of all U.S. adults in 2000 to 45.8% in 2020, our team determined in a different study we released in 2024. In short, the number of dollars is up, while the share of Americans who are donors is down.

    As the second Trump administration gets underway, having fewer people donating more is one reason why scholars of philanthropy like us are watching how the federal government handles tax policy and other measures that could influence charitable giving.

    Decline continued when the COVID-19 pandemic began

    Our latest study regarding the donors’ side of the American giving equation included data from 2020 – the first year of the COVID-19 pandemic.

    We found that a long-term decline in Americans’ participation in charitable giving accelerated during the first year of the pandemic. The share of Americans who gave to charity fell from 49.6% in 2018, the prior year for which data is available, to 45.8% in 2020 – a nearly 4-percentage-point decline in two years. This data is only available for every other year.

    Those findings may appear to contradict many anecdotal reports about charitable activity and other acts of generosity being on the rise at that time.

    The share of Americans who give to charity had fallen by 3.5 percentage points in the prior two-year period – a sign that the pandemic may have sped up the decline in the giving participation rate.

    Giving is growing more concentrated

    How can the total amount contributed rise while the share of donors declines?

    The answer is simple: The donors who still give to charity are giving more than they used to, even after adjusting for inflation.

    The total amount the typical U.S. donor gave in a year rose from $3,131 in 2018 to $3,651 in 2020. That’s a 16.6% increase in just two years.
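    The arithmetic behind that 16.6% figure is straightforward to check:

```python
# Percentage increase in what the typical U.S. donor gave, 2018 to 2020.
before, after = 3131, 3651
increase = (after - before) / before
print(f"{increase:.1%}")  # prints 16.6%
```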

    We also found that American donors with higher incomes, more education and more wealth are giving larger amounts than they used to.

    Bouts of economic volatility and, in recent years, inflation running at levels not seen since the 1980s may have left many American families with less money to donate to charities.

    Other factors include cultural shifts, a decline in religious affiliation and a loss of trust in institutions of all kinds.

    What’s around the corner

    Changes enacted during the first Trump administration have been reverberating in recent years, and the second Trump administration’s policies are also likely to influence giving trends.

    Most of the taxpayers who had previously been able to take advantage of the charitable deduction, which reduces taxable income in line with the value of a taxpayer’s donations, stopped itemizing and instead took the standard deduction after President Donald Trump signed the Tax Cuts and Jobs Act into law in late 2017.

    That’s because the 2017 tax reforms substantially increased the standard deduction, making itemizing, and therefore the charitable deduction, worthwhile for far fewer taxpayers.

    About 30% of taxpayers itemized in 2017, which meant they could benefit from the charitable deduction. But since 2018, only about 10% of them have been itemizing. A recent study one of us worked on determined that the tax changes reduced charitable giving by $20 billion in 2018 alone.

    The White House could attempt to address the sustained decline in the share of Americans making charitable donations by considering policies that have the potential to encourage more people to give to charity.

    The shrinking ranks of American donors matter because philanthropy plays a prominent role in fulfilling the spiritual, intellectual and material needs and aspirations of Americans of all backgrounds.

    Una Osili receives funding from the Bill & Melinda Gates Foundation, Charles Stewart Mott Foundation, Fidelity Charitable Catalyst Fund, John Templeton Foundation and Google.org.

    Xiao “Jimmy” Han receives funding from Bill & Melinda Gates Foundation, Charles Stewart Mott Foundation, Fidelity Charitable Catalyst Fund, Google.org Charitable Giving Fund, and the John Templeton Foundation.

    ref. Donors are down, but dollars are up – how US charitable giving is changing – https://theconversation.com/donors-are-down-but-dollars-are-up-how-us-charitable-giving-is-changing-246473

    MIL OSI – Global Reports

  • MIL-OSI Global: Canada and Greenland aren’t likely to join the US anytime soon – but ‘GrAmeriCa’ is a revealing thought experiment

    Source: The Conversation – USA – By Peter A. Coclanis, Professor of History and Director of the Global Research Institute, University of North Carolina at Chapel Hill

    For some time now, pundits have been debating whether to take Donald Trump “seriously” or “literally,” as the clever binary coined by journalist Salena Zito in 2016 has it.

    This choice comes to mind when I think about the 47th president’s frequent comments recently about incorporating Greenland and Canada into the United States. A few cases in point: Before delivering an inaugural address in which he vaguely but forcefully expressed a desire for the U.S. to expand its territory, Trump raised the issue on a confrontational phone call with the prime minister of Denmark, which handles Greenland’s international affairs. More recently, he spoke of Canada becoming a U.S. state to reporters on Air Force One.

    It’s hard to imagine a plausible scenario in which either, let alone both, joins the United States. The governments of Canada and Greenland alike have made it clear that they’re not for sale.

    But as an economic historian, I believe that thought experiments can be a useful way of understanding truths about the world. And one such truth is that Greenland and Canada play a key role in the global economy. If the U.S. were to absorb either or both, it would be a strategic, economic and political game changer.

    So, for a moment, let’s take Trump both seriously and literally. Below, I’ve laid out some very rough measures of how a reconstituted megastate including the U.S., Canada or Greenland would look in comparison to other leading countries and blocs.

    Bigger, but not more crowded

    At first glance, the most obvious thing to note about the new country would be its physical size. Today the U.S. is the third-largest nation-state in terms of area – about 57.5% of the size of Russia, by far the world’s largest country.

    By incorporating Canada, the second-largest country in the world in terms of area, the U.S., so reconstituted, would be 14% larger than Russia. If both Canada and Greenland became part of the reconstituted U.S., the country would be 22% larger than Russia.

    How about China? Today, China is slightly smaller than the U.S. in area, but it would be less than half the size of a combined U.S. and Canada, and only about 44% the size of a U.S.-Canada-Greenland combination. And the European Union? It would be less than 20% of the size of that three-way combination.

    Incorporating Canada and Greenland into the U.S. would have less of an impact in demographic terms, adding just under 40 million people to the current U.S. total of 342 million.

    Similarly, if the U.S. absorbed Canada and Greenland — two countries that are wealthy, but not nearly as wealthy as the U.S. — it wouldn’t have much of an impact on gross domestic product per capita. Why not? Because the U.S. would comprise about 90% of the total population of the new megastate. Given the figures for GDP per capita (PPP, international dollars) in Canada and Greenland and weighting for population, GDP per capita in the megastate would be about $79,000.
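    That population-weighted figure can be reproduced with illustrative round numbers. The populations and PPP GDP-per-capita values below are assumptions chosen for the sketch, not the article’s exact sources:

```python
# Population-weighted GDP per capita for the hypothetical megastate.
# All figures are illustrative round numbers (PPP, international dollars),
# not the article's underlying data.
populations = {"US": 342e6, "Canada + Greenland": 40e6}
gdp_per_capita = {"US": 82_000, "Canada + Greenland": 60_000}

total_pop = sum(populations.values())
weighted = sum(populations[k] * gdp_per_capita[k] for k in populations) / total_pop
print(f"${weighted:,.0f} per capita")  # lands near the article's ~$79,000
```

    Because the U.S. holds roughly 90% of the combined population, its figure dominates the weighted average, which is why the megastate’s GDP per capita barely moves.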

    A strategic shift

    The biggest effects of absorbing either country into the U.S. would come in the geopolitical, strategic and resource realms. Here, the changes would be seismic. First, by incorporating both countries into the U.S., the new entity would not only consolidate its already considerable power in the Western Hemisphere, but it would also establish a much more formidable position in the Arctic region. This is increasingly important as sea lanes are opening up with climate change.

    By adding territory, the U.S. could potentially enhance its strategic and defense posture, forcing its principal adversaries, Russia and China, to pursue more cautious tacks. These geopolitical and strategic effects would be magnified by the bounty of natural resources in the new megastate.

    Consider that the U.S. is already the largest oil-producing country in the world – producing over 13.3 million barrels a day in 2023 – and Canada is No. 4, with 5 million. Together, the two countries produced over 18 million barrels per day in 2023, while Russia produced about 10.3 million, Saudi Arabia about 9 million, and China 4.2 million. In other words, the U.S. and Canada together produce 8 million barrels of oil more than Russia does each day – a staggering differential.

    The U.S. is also by far the largest producer of natural gas in the world, with Russia a distant second. Incorporating Canada, currently the fifth-largest producer, would add considerably to the U.S. lead.

    Nor does the resource bounty begin and end with oil and natural gas. Greenland is rich in minerals of all types, particularly the rare earth elements in such demand for batteries, electronics and the like.

    And perhaps most important of all is the impact of integration regarding freshwater resources. Integrating the U.S. and Canada would bring that new entity into a virtual tie with Brazil as the leading repository of freshwater resources in the world. Canada and the U.S. are currently Nos. 3 and 4, respectively, in the world in freshwater resources; together, their freshwater stock far surpasses Russia, which is currently No. 2.

    And this doesn’t factor in Greenland, with its massive – if declining – freshwater ice shield. In any case, given the increasing demand for water around the world, control over freshwater resources will prove more and more important for the overall security posture of the U.S. going forward.

    So what do we make of this little exercise? One thing seems clear: “GrAmeriCa” would be amazingly rich in resources, as the president likely knows well. But should we take Trump literally or seriously – or both – on this issue? It may be a case of “Too soon to tell,” to invoke Zhou Enlai’s famous line about one or another revolutionary upheaval in France. But the world will know soon enough.

    Peter A. Coclanis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Canada and Greenland aren’t likely to join the US anytime soon – but ‘GrAmeriCa’ is a revealing thought experiment – https://theconversation.com/canada-and-greenland-arent-likely-to-join-the-us-anytime-soon-but-gramerica-is-a-revealing-thought-experiment-248214

    MIL OSI – Global Reports

  • MIL-OSI Global: Disaster evacuations can take much longer than people expect − computer simulations could help save lives and avoid chaos

    Source: The Conversation – USA – By Ashley Bosa, Postdoctoral Researcher, Hazards and Climate Resilience Institute, Boise State University

    Wildfire smoke rises beyond homes near Castaic Lake as another California wildfire spread on Jan. 22, 2025. AP Photo/Marcio Jose Sanchez

    When a wildfire notification goes off on your mobile phone, it can trigger all kinds of emotions and confusion.

    You might glance outside and see no smoke. Across the street, your neighbors have mixed reactions: One is leisurely walking their dog, another is calmly packing a small bag, while a third appears to be preparing for an extended vacation.

    The notification advises you to grab your “go bag,” but then panic can set in as you realize you don’t have one ready. So, you scour the local emergency management website for guidance and discover how much you’ve overlooked: important documents such as birth certificates, an extra flashlight, your children’s medications, a phone charger.

    Before you can gather your thoughts, a second notification arrives – this time telling you to evacuate.

    Packing the car, wrangling children or a skittish cat, figuring out where to go – it can feel frenzied in the face of danger. As you pull out, you join a traffic jam on your street, with a black smoke plume rising nearby and neighbors still loading their cars.

    This chaos highlights a worst-case scenario for wildfire evacuations – one that can cause delays, heighten risks for evacuees and complicate access for emergency responders. It’s why researchers like me who study natural hazards are developing ways to help communities recognize where residents may need the most help and avoid evacuation bottlenecks in the face of future disasters.

    The importance of being prepared

    Confusion is common in the face of disasters, and it underscores the need for communities and individuals to be prepared.

    Delays in evacuating, or the inability to evacuate safely, can have catastrophic consequences, not only for those trying to flee but also for the first responders and emergency managers working to manage the crisis. These delays often stem from a lack of preparedness or uncertainty about when and how to act.

    A study of survivors of an Australian wildfire that killed 172 people in the state of Victoria in 2009 found that two-thirds of survivors had carried out an existing disaster plan, while the majority of those who died either didn’t follow a disaster plan or couldn’t. Forecasters had warned that high temperatures were coming with very low humidity, and public alerts had gone out about the high fire risk.

    Residents had little time to evacuate as the Eaton Fire spread into Altadena, Calif., on Jan. 7, 2025. Source: NBC.

    How people perceive risks and the environmental and social cues around them – such as how much smoke they see, their neighbors’ choices or the wording of the notification – will directly affect the speed of their response.

    Past experience with a disaster evacuation also has an impact. Rapid population growth in recent years in the wildland-urban interface – areas where human development meets wildfire-prone areas – has meant that more people with little or no experience with wildfires are living in fire-risk areas. Wildland areas also tend to have fewer evacuation routes, making mass evacuations more difficult and time-consuming.

    Adding to the complexity is the fact that large wildfires are occurring in regions not historically prone to such events and during times of the year traditionally considered outside of wildfire season. This shift has left communities and emergency response teams grappling with unprecedented challenges, particularly when it comes to evacuations.

    Computer models can help spot risks

    To address these challenges, researchers are developing systems to help communities model how their residents are likely to respond in the event of a disaster.

    The results can help emergency crews understand where bottlenecks are likely to occur along evacuation routes, depending on the timing of the notice and the movement of the fire. They can also help fire managers understand where neighborhoods may need to be notified faster or need more help evacuating.

    Firefighters inspect burned out cars along a road in Paradise, Calif., after a deadly fire swept through the wooded area in November 2018. Some people abandoned their cars when they became trapped in traffic with few ways out.
    AP Photo/John Locher

    My team at the Hazard and Climate Resilience Institute at Boise State University is working on one of these projects. We have been surveying communities across Idaho and Oregon to assess how people living in the wildland-urban interface areas perceive wildfire risks and prepare for evacuations.

    Using those surveys, we can capture household-level decision data, such as which evacuation routes these residents would take, how many cars they plan to drive and where they would evacuate to.

    We can also gauge how prepared residents would be to evacuate, or whether they would likely stay and try to defend their home instead.

    Evacuating nursing homes takes time and special resources, including evacuation sites that can meet people’s health needs. When the Eaton Fire swept into Altadena, Calif., on Jan. 7, 2025, a senior care facility had little time to get its residents safely away.
    AP Photo/Ethan Swope

    With that data, we can simulate how long it will take emergency response teams to evacuate an entire community safely. The models could also show where difficulties with evacuations might be likely to arise and help residents understand how they can adjust their evacuation plans for a safer escape for everyone.
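    A simulation of this kind can be sketched as a toy queueing model. The snippet below is purely illustrative – every parameter (preparation times, cars per household, road capacity) is a hypothetical placeholder, not a value from the Boise State surveys.

    ```python
    import random

    def clearance_time_minutes(households, cars_per_household=2,
                               max_prep_minutes=60, road_capacity_per_min=20,
                               seed=42):
        """Toy estimate of how long a single-route evacuation takes.

        Each household waits a random 'preparation' time before departing,
        then its cars join a queue limited by road capacity. All numbers
        here are illustrative placeholders, not survey-derived values.
        """
        rng = random.Random(seed)
        # One departure time per car, after its household finishes packing.
        departures = sorted(
            rng.uniform(0, max_prep_minutes)
            for _ in range(households)
            for _ in range(cars_per_household)
        )
        clock = 0.0
        for depart in departures:
            # A car cannot enter the road before it departs, and the road
            # only admits one car every 1/capacity minutes.
            clock = max(clock, depart) + 1.0 / road_capacity_per_min
        return clock

    # 500 households, 2 cars each, one road out of the neighborhood.
    print(round(clearance_time_minutes(500), 1))
    ```

    Even a crude model like this makes the trade-off visible: halving road capacity or delaying departures pushes the time the last car clears the area, which is the quantity emergency managers care about.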

    Bridging the gap between awareness and action

    One of the key goals of this research is to bridge the gap between awareness and action.

    While many residents in wildfire-prone areas understand the risks, translating that knowledge into concrete preparations remains a challenge. The concept of a “go bag,” for example, is widely promoted but often poorly understood. Essential items such as medications, important documents and pet supplies are frequently overlooked until it’s too late.

    Clear and timely communication during wildfire crises is also essential. Evacuation warning messages such as “Ready, Set, Go!” are designed to prompt specific actions, but their effectiveness depends on residents understanding and trusting the system. Delayed responses or mixed signals can create confusion.

    As wildfire risk rises for many communities, preparedness is no longer optional – it’s a necessity. Emergency notifications vary by state and county, so check your local emergency management office to understand what to expect and sign up for alerts. Being prepared can help communities limit some of the most devastating impacts of wildfires.

    Ashley Bosa receives funding from the National Science Foundation Grant No. 2230595 for the project titled “Collaborative Research: Household Response to Wildfire: Integrating Behavioral Science and Evacuation Modeling to Improve Community Wildfire Resilience.”

    ref. Disaster evacuations can take much longer than people expect − computer simulations could help save lives and avoid chaos – https://theconversation.com/disaster-evacuations-can-take-much-longer-than-people-expect-computer-simulations-could-help-save-lives-and-avoid-chaos-247668

    MIL OSI – Global Reports

  • MIL-OSI Global: The global wildlife trade is an enormous market – the US imports billions of animals from nearly 30,000 species

    Source: The Conversation – USA – By Michael Tlusty, Professor of Sustainability and Food Solutions, UMass Boston

    U.S. Fish and Wildlife agents inspect a shipment of reptiles at the Port of Miami. U.S. GAO

    When people think of wildlife trade, they often picture smugglers sneaking in rare and endangered species from far-off countries. Yet most wildlife trade is actually legal, and the United States is one of the world’s biggest wildlife importers.

    New research that we and a team of colleagues published in the Proceedings of the National Academy of Sciences shows that, over the last 22 years, people in the U.S. legally imported nearly 2.85 billion individual animals representing almost 30,000 species.

    Some of these wild animals become pets, such as reptiles, spiders, clownfish, chimpanzees and even tigers. Thousands end up in zoos and aquariums, where many species on display come directly from the wild.

    Medical research uses macaque monkeys and imports up to 39,000 of them every year. The fashion trade imports around 1 million to 2 million crocodile skins every year. Hunting trophies are also part of the wildlife trade.

    How many species are legally traded worldwide?
    Benjamin Marshall, et al., 2024, PNAS, CC BY-SA

    Birds account for the largest number of imported species – 4,985 different species each year, led by Muscovy ducks, with over 6 million individuals imported. Reptiles are next, with 3,048 species, led by iguanas and royal pythons. These largely become pets.

    Not all wildlife are wild

    We found that just over half of the animals imported into the U.S. come from the wild.

    Capturing wildlife to sell to exporters can be an important income source for rural communities around the world, especially in Africa. However, wild imported species can also spread diseases or parasites or become invasive. In fact, these risks are so worrying that many imported animals are classed as “injurious wildlife” due to their potential role in transmitting diseases to native species.

    Captive breeding has played an increasingly dominant role in recent years as a way to limit the impact on wild populations and to try to reduce disease spread.

    However, over half the individual animals from most groups of species, such as amphibians or mammals, still come from the wild, and there is no data on the impact of the wildlife trade on most wild populations.

    Trade may pose a particular risk when species are already rare or have small ranges. Where studies have been done, the wild populations of traded species decreased by an average of 62% across the periods monitored.

    Sustainable wildlife trade is possible, but it relies on careful monitoring to balance wild harvest and captive breeding.

    Data is thin in many ways

    For most species in the wildlife trade, there is still a lot that remains unknown, including even the number of species traded.

    With so many species and shipments, wildlife inspectors are overwhelmed. Trade data may not include the full species name for groups like butterflies or fish. The values in many customs databases are reported by companies but never verified.

    Macaques, used in medical research, are the most-traded primates globally, according to an analysis of U.S. Fish and Wildlife data.
    Davidvraju, CC BY-SA

    In our study, we relied on the U.S. Fish and Wildlife Service’s Law Enforcement Management Information System, a wildlife import-export data collection system. However, few countries collate and release data in such a standardized way, meaning that for the majority of species legally traded around the world there is no available data.
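    Standardized import records are what make even basic wild-versus-captive tallies possible. As a minimal sketch, the snippet below aggregates a few entirely hypothetical rows shaped loosely like such records; the real dataset’s field names, codes and scale differ.

    ```python
    from collections import defaultdict

    # Hypothetical rows loosely shaped like standardized import records;
    # the actual LEMIS fields and species codes are documented by USFWS.
    records = [
        {"species": "Cairina moschata", "source": "wild",    "quantity": 120},
        {"species": "Cairina moschata", "source": "captive", "quantity": 80},
        {"species": "Iguana iguana",    "source": "captive", "quantity": 300},
        {"species": "Python regius",    "source": "wild",    "quantity": 50},
    ]

    # Tally wild vs. captive-bred individuals per species.
    totals = defaultdict(lambda: {"wild": 0, "captive": 0})
    for rec in records:
        totals[rec["species"]][rec["source"]] += rec["quantity"]

    for species, counts in sorted(totals.items()):
        total = counts["wild"] + counts["captive"]
        pct_wild = 100 * counts["wild"] / total
        print(f"{species}: {total} imported, {pct_wild:.0f}% wild-caught")
    ```

    Without a shared schema like this across countries, even a simple question – what share of a species’ trade is wild-caught? – cannot be answered globally.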

    For example, millions of Tokay geckos are imported as pets and for medicine, and are often reported to be bred in captivity. However, investigators cannot confirm that they weren’t actually caught in the wild.

    Why tracking the wildlife trade is important

    Biodiversity has a great number of economic and ecological benefits. There are also risks to importing wildlife. Understanding how many species and individual animals enter the country, and whether they were wild-caught or farmed, is important, because imported wildlife can cause health and ecological problems.

    Wildlife can spread diseases to humans and to other animals. Wild-caught monkeys imported for medical research may carry diseases, including ones of particular risk to humans. Those with diseases are more likely to be wild than captive-bred.

    The most-traded mammals worldwide are minks, which are valued for their fur but can spread viruses to humans and other species. About 48 million minks are legally traded annually – about 2.8% wild-caught, the rest raised in captivity – according to U.S. Fish and Wildlife data.
    Colin Canterbury/USFWS

    Species that aren’t native to the U.S. may also escape or be released into the wild. Invasive species can cause billions of dollars in damage by consuming and outcompeting native wildlife and spreading diseases.

    We believe better data on the wildlife trade could be used to set management goals, such as harvest quotas or no-take policies for those species in their country of origin.

    What’s next

    The researchers involved in this study come from institutes around the world and are all interested in improving data systems for wildlife trade.

    Some of us focus on how e-commerce platforms such as Etsy and Instagram have become hotspots of wildlife trade and can be challenging to monitor without automation. Etsy announced in 2024 that it would remove listings of endangered or threatened species. Others build tools to help wildlife inspectors process the large number of shipments in real time. Many of us examine the problems imported species cause when they become invasive.

    In the age of machine learning, artificial intelligence and big data, it’s possible to better understand the wildlife trade. Consumers can help by buying less and making informed decisions.

    Michael Tlusty is a founding member of the Wildlife Detection Partnership and co-developed the Nature Intelligence System, which assists governments in collecting more accurate wildlife data.

    Andrew Rhyne is currently on sabbatical funded by the Canada Border Services Agency (CBSA), focused on the wildlife trade data. He is a founding member of the Wildlife Detection Partnership and co-developed the Nature Intelligence System, which assists governments in collecting more accurate wildlife data.

    Alice Catherine Hughes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The global wildlife trade is an enormous market – the US imports billions of animals from nearly 30,000 species – https://theconversation.com/the-global-wildlife-trade-is-an-enormous-market-the-us-imports-billions-of-animals-from-nearly-30-000-species-247197


  • MIL-OSI Global: In asking Trump to show mercy, Bishop Budde continues a long tradition of Christian leaders ‘speaking truth to power’

    Source: The Conversation – USA – By Joanne M. Pierce, Professor Emerita of Religious Studies, College of the Holy Cross

    Bishop Mariann Budde leads the national prayer service attended by President Donald Trump at the National Cathedral in Washington on Jan. 21, 2025. AP Photo/Evan Vucci

    Episcopal Bishop Mariann Edgar Budde’s sermon on Jan. 21, 2025, in which she appealed to President Donald Trump to have mercy toward groups frightened by his position on immigrants and LGBTQ+ people – especially children – drew reactions from both sides of the aisle.

    In a post on his social networking site, Truth Social, Trump called her comments “nasty in tone” and remarked that she “brought her church into the World of politics in a very ungracious way.”

    “She and her church owe the public an apology!” he posted. Several conservatives criticized her sermon, while many progressives saw her as “speaking truth to power.”

    As a specialist in medieval Christianity, I was not surprised by the bishop’s words, as I know that Christian history is full of examples of people who have spoken out, unafraid to risk official censure, or even death.

    Early voices

    Even in the early centuries of Christianity, followers of Jesus Christ’s teachings could be outspoken toward political leaders.

    For example, in the first-century Gospels, John the Baptist, a contemporary of Jesus, confronts the ruler of Galilee, Herod Antipas, for marrying his brother’s wife – a practice forbidden in the Hebrew scriptures. For that, John the Baptist was ultimately beheaded.

    In a prayer later called the Magnificat, Mary, the mother of Jesus, praises the glory and power of God who casts down the mighty and raises the lowly. In recent interpretations, these words have been understood as a call for those in authority to act more justly.

    In the late fourth century – a time when Christianity had been made the official religion of the Roman Empire – a respected civil official named Ambrose became bishop of the imperial city of Milan in northern Italy. He became well known for his preaching and theological treatises.

    However, after imperial troops massacred innocent civilians in the Greek city of Thessaloniki, Ambrose reproached Emperor Theodosius and refused to admit him to church for worship until he did public penance for their deaths.

    Ambrose’s writings on scripture and heresy, as well as his hymns, had a profound influence on Western Christian theology; since his death, he has been venerated as a saint.

    In the early sixth century, the Christian Roman senator and philosopher Boethius served as an official in the Roman court of the Germanic king of Italy, Theodoric. A respected figure for his learning and personal integrity, Boethius was imprisoned on false charges after defending others from accusations by corrupt court officials acting out of greed or ambition.

    During his time in prison, he wrote a philosophical volume about the nature of what is true good – “On the Consolation of Philosophy” – that is studied even today. Boethius, who was executed in 524, is venerated as a saint and martyr in parts of Italy.

    Thomas Becket and St. Catherine

    One of the most famous examples of a medieval bishop speaking truth to power is that of Thomas Becket, former chancellor – that is, senior minister – of England in the 12th century. On becoming archbishop of Canterbury, Becket resigned his secular office and opposed the efforts of King Henry II to bring the church under royal control.

    A stained glass window at the Canterbury Cathedral in England depicting the murder of Thomas Becket, archbishop of Canterbury.
    Dukas/Universal Images Group via Getty Images

    After living in exile in France for a time, Becket returned to England and was assassinated by some of Henry’s knights. The king later did public penance for this at Becket’s tomb in Canterbury. Soon after, Becket was canonized a saint.

    Another influential saint was the 14th-century Italian mystic and writer Catherine of Siena. Because of the increasing power of the kings of France, the popes had moved their residence and offices from Rome to Avignon, on the French border. They remained there for most of the century, even though this Avignon papacy increased tensions in western Europe.

    Many Christian clerics and secular rulers in western Europe believed that the popes needed to return to Rome, to distance papal authority from French influence. Catherine herself even traveled to Avignon and stayed there for months, writing letters urging Pope Gregory XI to return to Rome and restore peace to Italy and the church – a goal the pope finally fulfilled in 1377.

    Leaders speak up across denominations

    The Reformation era of the 16th and early 17th centuries led to the splitting of Western Christianity into several different denominations. However, many Christian leaders across denominations continued to raise their voices for justice.

    One important and ongoing voice is that of the Religious Society of Friends, or Quakers. Early leaders, like Margaret Fell and George Fox, wrote letters to King Charles II of England in the mid-17th century, defending their beliefs, including pacifism, in the face of persecution.

    In the 18th century, based on their belief in the equality of all human beings, Quaker leaders spoke in favor of the abolition of slavery in both the United Kingdom and the United States.

    In fact, it was Bayard Rustin, a Black Quaker, who coined the phrase “to speak truth to power” in the mid-20th century. He adhered to the Quaker commitment to nonviolence in social activism and was active for decades in the American Civil Rights Movement. During the Montgomery bus boycott in the mid-1950s, he met and began working with Martin Luther King Jr., who was an ordained Baptist minister.

    In Germany, leaders from various Christian denominations have also united to speak truth to power. During the rise of the Nazis in the 1930s, several pastors and theologians joined forces to resist the influence of Nazi doctrine over German Protestant churches.

    Their statement, the Barmen Declaration, emphasized that Christians were answerable to God, not the state. These leaders – the Confessing Church – continued to resist Nazi attempts to create a German Church.

    Desmond Tutu and other leaders

    Bishop Desmond Tutu opposed the racial policies of the South African government.
    AP Photo/Jim Abrams

    Christians on other continents, too, continued this vocal tradition. Óscar Romero, the Roman Catholic archbishop of San Salvador, preached radio sermons criticizing the government and army for violence and oppression of the poor in El Salvador during a national civil war. As a result, he was assassinated while celebrating Mass in 1980. Romero was canonized a saint by Pope Francis in 2018.

    In South Africa, the Anglican bishop Desmond Tutu, archbishop of Cape Town, spent much of his active ministry condemning the violence of apartheid in his native country. After the end of the apartheid regime, Tutu also served as chair of the Truth and Reconciliation Commission, which was established to investigate acts of violence committed both by government forces and violent activists. Before his death in 2021, Tutu continued to speak out against other international acts of oppression. He won the Nobel Peace Prize in 1984.

    For some, Bishop Budde’s words might seem radical, rude, inappropriate or offensive. But she did not speak in isolation; she is surrounded by a cloud of witnesses in the Christian tradition of speaking truth to power.

    Joanne M. Pierce does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. In asking Trump to show mercy, Bishop Budde continues a long tradition of Christian leaders ‘speaking truth to power’ – https://theconversation.com/in-asking-trump-to-show-mercy-bishop-budde-continues-a-long-tradition-of-christian-leaders-speaking-truth-to-power-248209


  • MIL-OSI Global: St. Thomas Aquinas’ skull just went on tour − here’s what the medieval saint himself would have said about its veneration

    Source: The Conversation – USA – By Therese Cory, Associate Professor of Thomistic Studies, University of Notre Dame

    The skull of St. Thomas Aquinas during a stop at St. Patrick Church in Columbus, Ohio, in December 2024. Nheyob/Wikimedia Commons

    Once, on a road trip in Greece, I stopped with my husband and dad at a centuries-old Orthodox monastery to view its famous frescoes. We were in luck, the porter said: It was a feast day. The relics of the monastery’s saintly founder were on view for public veneration.

    As a Catholic and a medievalist, I can never resist meeting a new saint. The relic, it turned out, was the saint’s hand, though without any special ornament or reliquary, the ornate containers in which relics are often displayed. Nothing but one plain, severed hand in a glass box, its fingers partly contorted, and its discolored skin shriveled onto the bones.

    We gathered around the shrine, silently, to pray. Then my dad, whose piety sometimes runs up against his penchant for dramatic storytelling, leaned over and whispered, “What if at the hotel, in the middle of the night, I hear a scratching sound, and then The Claw …” His own hand started crawling dramatically up his shirt and then flew to his throat.

    “Dad!” I hissed furiously, with a horrified glance at the monks praying nearby.

    Relics can admittedly feel a bit morbid – and yet, so holy. What exactly is their appeal?

    To me, it’s the physical closeness, especially with parts of a saint’s own body – what the Catholic Church calls “first class” relics, which can be as small as a chip of bone. There are also objects the saint used during life: “second class” relics, such as the gloves worn by the Italian mystic Padre Pio.

    The veneration of relics of saints was already well established in the early church. But controversies go back hundreds of years. During the Protestant Reformation, for example, reformers decried the shameless use of relics to drive donations and the proliferation of faux relics. Today, the idea of intentionally dismembering and displaying human body parts can seem shocking, even repulsive.

    Yet venerating relics remains far from a “relic” of the past. At the end of 2024, the skull of St. Thomas Aquinas – the great Dominican medieval thinker whose writings I study – made its first tour of the United States. The journey commemorated the “triple anniversary” of 700 years since his canonization, 750 years since his death and 800 years since his birth.

    From Cincinnati to Rhode Island to Washington, D.C., thousands of Catholics turned out to pay homage to this medieval saint.

    Religious sisters venerating the skull at St. Patrick Church in Columbus, Ohio.
    Nheyob/Wikimedia Commons

    God’s dwelling place

    What might Aquinas himself have thought about all the attention to his traveling skull – that fragile and now empty case for the brain behind one of the most productive minds of European philosophy?

    Aquinas’ answer lies in a short but poignant text from “Summa Theologiae,” his best-known work. Christians should venerate relics, Aquinas says, because the saints’ bodies were dwelled in by God. The very parts of their bodies were the instruments, or “organs,” of God’s actions.

    The saints as “organs” of God: What a riveting image! God is so intimately present to his friends, the saints, that their very bodies are sanctified by his presence. Those hands, now dead and desiccated, performed God’s own actions as they cared for the sick, fed the hungry, celebrated Mass and reconciled the lost sheep.

    According to Aquinas, honoring saints’ relics is ultimately about honoring this divine activity, a superhuman love working through ordinary human beings. But as he notes elsewhere, God is present in all of creation, working “most secretly” through all creatures at every moment. So by recognizing the special holiness of saints’ relics, Christians can better perceive the universal holiness that radiates through the whole created world.

    Cherished keepsakes

    Yet in discussing relics, Aquinas has some challenging things to say about what is perhaps their most immediate draw: the sense that when I see or touch a relic, I am physically present to a saint.

    Because the saints are brothers and sisters in the Christian family, he says, Christians should cherish their physical remains just as people cherish a memento of a loved one, like “a father’s coat or ring.”

    I did a double-take when I read this: A memento? Surely the saint’s body is more than that.

    Stained glass in St. Patrick Church in Columbus, Ohio, depicts a mystical vision St. Thomas Aquinas had in the 13th century.
    Nheyob/Wikimedia Commons, CC BY-SA

    But Aquinas insists that physical remains really are more like mementos of the deceased than parts of them. When St. Teresa of Calcutta died, for instance, she left behind a corpse and a soul. These bodily remains shouldn’t be confused with the saint herself, who was a living, breathing, bodily person. If I kiss a saint’s relic, as Catholics often do, I am not kissing the saint but something that was formerly part of a saint. The word “relic” literally goes back to the Latin word for “leaving something behind.”

    The holiness of a relic, then, derives from the person it was once part of, not what it is now.

    Not just “once was,” though, but also “will be.” Aquinas adds – and to me this is one of the most beautiful aspects of his reflections on relics – that venerating a relic is also a way of looking forward to the future resurrection of the body. Christian doctrine teaches that at the end of time, God will restore each person’s body, reuniting it with their soul. Relics represent that hope for everlasting life.

    Later this year, the skull formerly known as Aquinas’ will wend its way back to its permanent place of rest, buried under the altar of the Dominican church in Toulouse, France. During its visit to the U.S., I was down with pneumonia and never got a chance to pay my respects. But I cherish the “third class” relic that my sister-in-law mailed me from Cincinnati: a holy card that she had touched to the skull’s reliquary.

    Therese Cory does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. St. Thomas Aquinas’ skull just went on tour − here’s what the medieval saint himself would have said about its veneration – https://theconversation.com/st-thomas-aquinas-skull-just-went-on-tour-heres-what-the-medieval-saint-himself-would-have-said-about-its-veneration-245970


  • MIL-Evening Report: ‘Turn it into a retirement village’: Inside the war of words over Eden Park

    After lengthy, torrid and emotional debate, a critical decision for the future of Auckland Tāmaki Makaurau is being made in March. One party will celebrate; the other will slink back to the drawing board. But will it really settle the great Auckland stadium debate?

    SPECIAL REPORT: By Chris Schulz

    It resembles a building from Blade Runner. It looks like somewhere the Avengers might assemble. It is, believes Paul Nisbet, the future.

    “It’s innovative, it’s groundbreaking, it’s something different,” says the driving force behind Te Tōangaroa, a new stadium mooted for downtown Auckland.

    He has spent 13 years dreaming up this moon shot, and it shows. “We have an opportunity here to deliver something special for the country.”

    Located behind Spark Arena, Te Tōangaroa — also called “Quay Park” — is Nisbet’s big gamble, the stadium he believes Tāmaki Makaurau needs to sustain the city’s live sport and entertainment demands for the next 100 years.

    His is a concept as grand as it gets, a U-shaped dream with winged rooftops that will sweep around fans sitting in the stands, each getting unimpeded views out over the Waitematā Harbour and Rangitoto Island.

    Located behind Spark Arena, Te Tōangaroa is also called “Quay Park”. Image: Te Tōangaroa

    Nisbet calls his vision a “gateway for the world,” a structure so grand he believes it would attract the biggest sports teams, stars and sponsors to Aotearoa while offering visitors a must-see tourist destination. Nestled alongside residential areas, commercial zones and an All Blacks-themed hotel, designs show a retractable roof protecting 55,000 punters from the elements and a sky turret towering over neighbouring buildings.

    He’s gone all in on this. Nisbet’s quit his job, assembled a consortium of experts — called Cenfield MXD — and attracted financial backers to turn his vision into a reality. It is, Nisbet believes, the culmination of his 30-year career working in major stadiums, including 11 years as director of Auckland Stadiums.

    “I’ve had the chance to travel extensively,” he says. “I’ve been to over 50 stadiums around the world.”

    Tāmaki Makaurau, he says, needs Te Tōangaroa — urgently. If approved, it will be built over an ageing commercial space and an unused railway yard sitting behind Spark Arena, what Nisbet calls “a dirty old brownfields location that’s sapping the economic viability out of the city”.

    He calls it a “regeneration” project. “You couldn’t mistake you’re in Auckland, or New Zealand, when you see images of it,” he says.

    The All Blacks are on board, says Nisbet, and they want Te Tōangaroa built by 2029 in time for a Lions tour. (The All Blacks didn’t respond to a request for comment, but former players John Kirwan and Sean Fitzpatrick have backed the team moving to Te Tōangaroa.)

    Concert promoters are on board too, says Nisbet. He believes Te Tōangaroa would end the Taylor Swift debacle that’s seen her and many major acts skip us in favour of touring Australian stadiums.

    “It will be one of those special places that international acts just have to play,” he says.

    The problem? Nisbet’s made a gamble that may not pay off. In March, a decision is due to be made about the city’s stadium future. Building Te Tōangaroa, with an estimated construction time of six years and a budget of $1 billion, is just one option.

    The other, Eden Park, has 125 years of history, a long-standing All Blacks record and a huge number of supporters behind it — as well as a CEO willing to do anything to win.

    The stadium standing in Te Tōangaroa’s way

    Stand in Eden Park’s foyer for a few minutes and history will smack you in the face. It’s there in the photos framed on the wall from a 1937 All Blacks test match. It’s sitting in Anton Oliver’s rugby boots from 2001, presumably fumigated and placed inside a glass case.

    More recent history is on display too, with floor-to-ceiling photographs showing off concerts headlined by Ed Sheeran and Six60, a pivot only possible since 2021.

    Soon, the man in charge of all of this arrives. “Very few people have seen this space,” says Nick Sautner, the Eden Park CEO who shakes my hand, pulls me down a hallway and invites me into a secret room in the bowels of Eden Park. With gleaming wood panels, leather couches and top-shelf liquor, Sautner’s proud of his hidden bar.

    “It’s invite-only . . . a VIP experience,” says Sautner, whose Australian accent remains easily identifiable despite seven years at the helm of Eden Park.

    The future of Eden Park if a refurb is granted. Image: YouTube

    This bar, he says, is just one of the many innovations Eden Park has undertaken in recent years. Built in 1900, the Mt Eden stadium remains the home of the All Blacks — but Eden Park is no longer considered a specialty sports venue.

    Up to 70 percent of the stadium’s revenue now comes from non-sporting activities, Sautner confirms. You can golf, abseil onto the rooftops and stay the night in dedicated glamping venues. It’s also become promoters’ choice for major concerts, with Coldplay and Luke Combs recently hosting multiple shows there. “We will consider any innovation you can imagine,” Sautner tells me. “We’re a blank canvas.”

    Throughout our interview, Sautner refers to Eden Park as the “national stadium”. He’s upbeat and on form, rattling off statistics and renovations from memory. His social media feeds — especially LinkedIn — are full of posts promoting the stadium’s achievements. He’ll pick up the phone to anyone who will talk to him.

    “WhatsApp is the best way of contacting me,” he says. Residents have his number and can call directly with complaints. After our interview, Sautner passes me his business card then follows it up with an email making sure I have everything I need. “My phone’s always on,” he assures me.

    He may not admit it, but Sautner’s doing all of this in an attempt to get ahead of what’s shaping up as the biggest crisis of Eden Park’s 125 years. If Te Tōangaroa is chosen in March, Eden Park — as well as Albany’s North Harbour Stadium and Onehunga’s Go Media Stadium — will all take a back seat.

    If Eden Park loses the All Blacks and their 31-year unbeaten record, then there’s no other word for it: the threat is existential.

    Sautner is promoting a three-stage renovation plan called Eden Park 2.1. Image: YouTube

    Ask Sautner if he’s losing sleep over his stadium’s future and he shakes his head. To him, Te Tōangaroa’s numbers don’t stack up. “If someone can make the business model work for an alternative stadium in Auckland, I’m all for activating the waterfront,” he says.

    Then he poses a series of questions: “How many events a year would a downtown stadium hold? Forty-five?” he asks. “So 320 other days a year, what’s going to be in that stadium?”

    He is, of course, biased. But Sautner believes upgrading Eden Park is the right move. He is promoting a three-stage renovation plan, called Eden Park 2.1, that includes building a $100 million retractable rooftop. A new North Stand would lift Eden Park’s capacity to 70,000, and improved function facilities and a pedestrian bridge would turn the venue into “a fortress . . . capable of hosting every event”.

    He’s veering into corporate speak, but Sautner sees the vision clearly. With his annual concert consent recently raised from six to 12 shows, he already thinks he’s got it in the bag: “Eden Park has the land, it has the consent, it has the community, it has the infrastructure,” he says. “I’m very confident Eden Park is going to be here for another 100 years.”

    Instead of a drink, Sautner offers RNZ a personal stadium tour that takes us through the exact same doors that open when the All Blacks emerge onto the hallowed turf. There, blinking in the sunlight, Sautner sweeps his arms around the stadium and grins. “I get up every day and I think of my family,” he says. “Then I think, ‘How can I make Eden Park better?’”

    The stadium debate: ‘It began when the dinosaurs died out’
    It is, says Shane Henderson, an argument for the ages. It never seems to quit. How long have Aucklanders been feuding about stadiums? “It began when the dinosaurs died out,” jokes Henderson.

    For the past year, he’s been chairing a working group that will make the decision on Auckland’s stadium future. That group whittled four options down to the current two, eliminating a sunken waterfront stadium and another option based at Silo Park.

    He’s doing this because Wayne Brown asked him to. “The mayor said, ‘We need to say to the public: this is our preferred option for a stadium for the city.’” It’s taken over Henderson’s life. Every summer barbecue has turned into a forum for people to share their views.

    “People say, ‘Why don’t you do this?’” he says. Henderson won’t be drawn on which way he’s leaning ahead of March’s decision, but he’s well aware of the stakes. “We’re talking about the future of our city for generations to come,” he says. “It’s natural feelings are going to run high.”

    That’s true. As I researched this story, the main parties engaged in a back-and-forth discussion that became increasingly heated. Jim Doyle, from Te Tōangaroa’s Cenfield MXD team, described Eden Park’s situation as desperate.

    “Eden Park can’t fund itself . . . it’s got no money, it’s costing ratepayers,” he said. Doyle alleged the stadium “wouldn’t be fit for purpose”. “You’re going to have to spend probably close to $1 billion to upgrade it.” Asked what should happen to Eden Park should the decision go Te Tōangaroa’s way, Doyle shrugged his shoulders. “Turn it into a retirement village.”

    Eden Park’s Sautner immediately struck back. Yes, he admits Eden Park owes $40 million to Auckland Council, calling that debt a “legacy left over from the Rugby World Cup 2011”. But he denied most of the consortium’s claims.

    “Eden Park does not receive any funding or subsidies from Auckland ratepayers,” Sautner said in a written statement. He confirmed renovations had already begun. “Over the past three years, the Trust has invested more than $30 million to enhance infrastructure and upgrade facilities . . . creating flexible spaces to meet evolving market demands.”

    Sautner said Doyle’s statement was evidence of his team’s inexperience. “We are extremely disappointed that comments of this nature have been made,” he said. “They are factually incorrect and highlight Quay Park consortium’s lack of understanding of stadium economics.”

    Do we even need to do this?
    As the stadium debate turns into a showdown, major stars continue to skip Aotearoa in favour of huge Australian shows, with Katy Perry, Kylie Minogue and Oasis all giving us a miss this year. New Zealand music fans are reluctantly spending large sums on flights and accommodation if they want to see them. Until Metallica arrives in November, there are no stadium shows booked; just three of Eden Park’s 12 allotted concert slots are taken this year.

    Yet Auckland City councillors will soon study feasibility reports submitted by both stadium camps.

    On March 24, Henderson, the working group chair, says councillors will come together to “thrash it out” and vote for their preferred option. There will only be one winner, and The New Zealand Herald reports either building Te Tōangaroa or Eden Park 2.1 is likely to cost more than $1 billion. Either we’re spending that on a brand new waterfront stadium, or we’re upgrading an old one.

    “Is that the best use of that money?” asks David Benge. The managing director for events company TEG Live doesn’t believe Tāmaki Makaurau needs another stadium because it’s barely using those it already has. He has questions.

    “I understand the excitement around a shiny new toy, but to what end?” he asks. “Can Auckland sustain a show at Go Media Stadium, a show at Western Springs, a show at Eden Park, and a show at this new stadium on the same night — or even in the same week?”

    Benge doesn’t believe Te Tōangaroa would entice more artists to play here either. “I’m yet to meet an artist who’s going to be swayed by how iconic a venue is,” he says. Bigger problems include the size of our population and the strength of our dollar.

    No matter the venue, “you’re still incurring the same expenses to produce the show,” he says. Instead, he suggests Pōneke as the next city needing a new venue. “If you could wave a magic wand and invest in a 10,000-12,000-capacity indoor arena in Wellington, that would be fantastic,” he says.

    Would a new stadium really lure big artists to NZ? Image: Te Tōangaroa

    Live Nation, the touring juggernaut that hosts most of the country’s stadium shows, didn’t respond to a request for comment. Other promoters canvassed by RNZ offered mixed views. Some wanted a new stadium, while others wanted a refurbished one. Every single one of them said that any new stadium needed to be built with concerts — not sport — in mind.

    “We’re fitting a square peg in a round hole,” one said about the production costs involved in trucking temporary stages into Eden Park or Go Media Stadium. “Turf replacement can add hundreds of thousands — if not $1 million — to your bottom line,” said another.

    Some wanted something else entirely. Veteran promoter Campbell Smith pointed out Auckland Council is seeking input for a potential redevelopment of Western Springs. One mooted option is turning it into a home ground for the rapidly rising football club Auckland FC. Smith doesn’t agree with that. “I think it’s a really attractive option for music and festivals,” he says. “It’s got a large footprint, it’s easily accessible, it’s close to the city … It would be a travesty if it was developed entirely for sport.”

    One thing is for certain: a decision on this lengthy, torrid and emotional topic is being made in March. One party will celebrate; the other will slink back to the drawing board. Will it finally end the great Auckland stadium debate? That’s a question that seems easier to answer than any of the others.

    Chris Schulz is a freelance entertainment journalist and author of the industry newsletter, Boiler Room. This article was first published by RNZ and is republished with the author’s permission. Asia Pacific Report has a community partnership agreement with RNZ.

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: What are sleep retreats? A sleep scientist explains the latest wellness trend

    Source: The Conversation – UK – By Jason Ellis, Professor of Sleep Science, Northumbria University, Newcastle

    Considering the effect of poor sleep on the individual as well as on society and the economy, it is hardly surprising sleep has become an intense area of research focus in recent years. Most recently, we have seen an increase in the offering of, and appetite for, so-called sleep retreats. But what are sleep retreats and are they helpful?

    As with any specialised retreat, there is no set formula for what a sleep retreat should focus on. As such, what is on offer varies enormously, from retreats that simply provide a sleep-friendly environment (a cool, dark, quiet and comfortable bedroom in a luxurious location) to ones aimed at managing a specific sleep disorder using evidence-based therapies, such as cognitive behavioural therapy for insomnia.

    There are even ones that provide, among other things, a regimen of vitamins and minerals delivered intravenously. Most, however, fall somewhere between focusing on meditation, exercise and relaxation.

    Although there is good evidence that exercise, at the right intensity and duration, can be beneficial for sleep, it is unlikely that a lack of exercise alone causes poor sleep.

    Similarly, there is some, albeit poor quality, evidence that meditation and relaxation improve sleep quality. As such, it is unlikely that these treatments alone will fix a sleep problem.

    The main challenge is that sleep, as with diet or exercise, is just an overarching term for a complex behaviour, one that is influenced and can influence almost every area of a person’s life. For example, I am hearing a lot about supplementing with magnesium to aid sleep, but this is only likely to be beneficial if you are deficient in the first place.

    What to consider before you splash the cash

    So, should we approach the sleep retreat with caution? Not necessarily; it is more a case of doing your homework.

    First, who does the sleep retreat cater for, and what do you hope to get from the retreat? The busy executive who only allows themselves four hours of sleep a night will have very different expectations and experiences to a person who has undiagnosed sleep apnoea and sleeps for nine hours but wants to know why they are so sleepy during the day.

    This leads to the second consideration: what kind of pre-screening (for conditions that might be causing insomnia) and personalisation do they offer?

    Many retreats advertise an individual consultation as part of the package but don’t really say what that will cover (a sleep, medical and psychiatric history and lifestyle assessment should be done as a bare minimum). This is vital when we consider that while well-established, evidence-based treatments for a variety of sleep disturbances and disorders exist, they are not suitable for everyone.

    Also, there is a perception that non-pharmacological therapies, including nutraceuticals (products derived from food sources that are said to have health benefits) and over-the-counter remedies (such as antihistamines, melatonin and valerian), don’t have side-effects, which is not necessarily the case.

    The final considerations are: who is delivering the retreat? And is what they are offering based on sound scientific evidence?

    Considering certification in sleep medicine is a hot topic in the sleep community at the moment, it is worth doing some research. For example, in the UK there is no pathway to becoming a sleep medicine specialist, consultant or coach. So who is leading the sleep retreat, and is what they are offering evidence-based?

    Jason Ellis has consulted to Kayak on Sleep Tourism.

    ref. What are sleep retreats? A sleep scientist explains the latest wellness trend – https://theconversation.com/what-are-sleep-retreats-a-sleep-scientist-explains-the-latest-wellness-trend-247632

    MIL OSI – Global Reports

  • MIL-OSI Global: You don’t have to be a net zero hero – how focus on personal climate action can distract from systemic problems

    Source: The Conversation – UK – By Sam Illingworth, Professor of Creative Pedagogies, Edinburgh Napier University

    Tint Media/Shutterstock

    Campaigns and social media often encourage people to make eco-friendly choices like using less plastic or driving less. While these actions are important, focusing so much on what people do can distract from the much larger role that businesses and governments play in causing and solving environmental problems.

    For example, some campaigns promote a “net zero hero” narrative that implies that people should take the lead in fighting climate change by changing their behaviour, recycling more, taking fewer flights or eating less meat.

    While personal actions can help, there’s a danger this way of thinking can put too much responsibility on consumers. These individual actions are not enough to solve the problem.

    By focusing so much on personal responsibility, we risk ignoring the systemic changes needed to address the climate crisis. These include switching to renewable energy on a large scale, enforcing strict industrial regulations and redesigning cities to reduce dependence on fossil fuels.

    Without these bigger steps, taken by governments and large organisations, we can’t make real progress in tackling climate change.




    Read more:
    Quick climate dictionary: what actually is a carbon footprint?


    Energy companies and trade groups have been particularly good at shifting blame to consumers. They promote products and habits that claim to lower personal carbon footprints while lobbying against strong environmental laws that would require real emission cuts from industries.

    Fossil fuel companies have known about climate change science since the 1950s but funded misinformation campaigns to delay action and shift blame to individuals.

    Indeed, the carbon footprint calculator itself was developed in 2004 by a public relations firm working for BP. The tool encouraged individuals to calculate their personal impact on the environment, focusing on activities like driving, energy use, and diet.

    According to reports on the campaign’s origins, this approach was part of a deliberate strategy to shift public attention away from the significant environmental harm caused by corporations, particularly the fossil fuel industry.

    Despite this narrative, many corporations have failed to address their own emissions. A recent study found that only 60% of companies met their 2020 emissions targets, and 31% failed to report any outcomes.

    This lack of accountability highlights how many major companies neglect their responsibilities, raising serious concerns about their commitment to 2030 climate goals.

    These tactics maintain the status quo and create a cycle of guilt and failure for consumers. Many people feel overwhelmed, leading to demotivation and even climate anxiety.

    Similar strategies have been used in other industries. For instance, the tobacco industry blamed smokers for health issues, focusing on personal choice while downplaying nicotine addiction and resisting health regulations.

    The real meaning of a carbon footprint.

    Shifting the focus

    In my research into climate communication, I see how stories of guilt resonate with communities already facing misplaced blame. For example, in workshops with groups affected by austerity, people often felt guilty for not helping others more.

    Over time, they realised this was due to failures in governance, not personal shortcomings. They saw a similar pattern in the climate crisis, learning to separate personal guilt from the larger roles of corporations and governments.

    Collective action will drive systemic change.
    John Gomez/Shutterstock

    As a climate researcher and communicator, my job is to help move the conversation from personal guilt to shared responsibility and accountability. This shift empowers people as citizens, not just consumers, to demand action from leaders and industries.

    It is vital to understand that while personal responsibility is meaningful, the real power to create change lies with corporations and governments. We need systemic change, not consumer guilt.

    To tackle the climate crisis, we must make personal choices that reflect care for the environment. But we must also work together to demand that companies and governments adopt sustainable practices, for example through voting for leaders who prioritise environmental reform. The path to a sustainable future is collective action – not carrying the weight of guilt alone.




    Sam Illingworth does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. You don’t have to be a net zero hero – how focus on personal climate action can distract from systemic problems – https://theconversation.com/you-dont-have-to-be-a-net-zero-hero-how-focus-on-personal-climate-action-can-distract-from-systemic-problems-248073

    MIL OSI – Global Reports

  • MIL-OSI Global: Patrick Doyle’s five best film scores – including his pick of an undiscovered gem

    Source: The Conversation – UK – By David Scott, Head of Division, School of Business and Creative Industries, University of the West of Scotland

    Scottish musician Patrick Doyle is an acclaimed composer of over 60 feature film scores with many attendant accolades, honours and awards. I first met him in 2001 while making a now long-vanished series on movie music called Silverscreen Beats for BBC Radio.

    I visited him at his office on the Shepperton lot in Surrey. There, I watched, enchanted, as he flitted between desk and piano bringing his creativity to life with his incredible musicality and riotous humour illustrating scores like Carlito’s Way (1993), Sense and Sensibility (1995) and Gosford Park (2001).

    So years later, when the University of the West of Scotland (UWS) presented Doyle with an honorary doctorate, I wasted no time in asking him to visit and talk to our students. The film of that event is finally available online and is a treat for all fans of film music.

    I could pick 20 favourite Patrick Doyle soundtracks for this “best of” list. In the end, I selected these four and asked him to pick a fifth.

    1. Henry V (1989)

    Back in 2001, Doyle told me he loves to get the opportunity to compose a song for a movie soundtrack. Henry V, his first full-length feature score, includes one of the greatest examples, Non-Nobis Domine, sung after the key battle scene of Kenneth Branagh’s 1989 film.

    It builds from a plain opening verse, sung in the film by young Doyle himself who remembers, with a humorous twinkle, trying to sing it slightly “off key” to enhance its authenticity.

    Non-Nobis Domine in Henry V.

    From that simple introduction, the composer gradually adds choral, orchestral and complex harmonic elements, skilfully balancing the elation and darkness of triumph. And for all its harmonic counterpoint and rich orchestration, he never lets you forget that lonely central melody, doubling it down the octave on bowed double basses as it reaches the climax.

    The soundtrack recording, conducted by Simon Rattle with the Birmingham Symphony Orchestra, is a thing of wonder. But to truly understand the perfect marriage of story and music – even if orchestras and choirs did not typically boom across 15th-century battlefields – experience Non-Nobis Domine in the original movie.

    2. Brave (2012)

    The Scottish tradition is never far from Patrick’s music. Indeed, O! For a Muse of Fire, the opening theme from Henry V, uses a recognisably Scottish sound, two notes played quickly across a five-note interval, as a key motif, expanding this in a melodic phrase that recalls the cries of seagulls.

    In Disney Pixar’s Brave (2012), Doyle brings an authentic voice to the imaginary Scotland of its central character, the indefatigable Merida. Her defiant exuberance is mirrored in the rhythm of pieces like The Games and Remember to Smile where the composer uses a traditional hand-held drum (the bodhran), bagpipes and fiddles, with the harmony instruments often playing in tight unison to rousing effect.

    Remember to Smile from the Brave score.

    If a key role of the movie soundtrack is to extend narrative or visual language, the effect wrought here is almost physical – to the extent that my own embarrassed grandchildren have had to restrain me from dancing on the couch during screenings of Brave.

    Elsewhere though, the slow mystery of a beautifully animated landscape is matched by atmospheric, languid passages that call on deep reserves of the tradition and its melancholy.

    3. Sense and Sensibility (1995)

    Patrick’s Catholic upbringing is another constant presence in his music. He was greatly influenced by the beautiful Irish melodic hymns which were imported to the west of Scotland. His soundtrack for Ang Lee’s Sense and Sensibility was nominated for a raft of music awards including the Baftas, the Oscars and Golden Globes.

    It has marvellous passages of yearning and almost devotional melody and harmony, but I include it selfishly for the hymnal Weep No More You Sad Fountains alone. It’s one of my very favourite melodies, and one I was privileged to hear him play at close quarters at UWS.

    The Dreame from Sense and Sensibility.

    This majestic piece can be heard on the original soundtrack, sung by English soprano Jane Eaglen, on Patrick’s 2015 album of solo piano pieces and, of course, during my interview with him. Perhaps less well known from the same film is The Dreame, another devotional piece, again sung by Eaglen and set to a Ben Jonson poem. Complex in conception and virtuosic in execution, the piece is nevertheless understated, underplayed, and more devastating for it.




    4. Carlito’s Way (1993)

    Doyle’s hilarious and hugely affectionate account of working with one of cinema’s greats, the director Brian DePalma, is a highlight of the conversation I had with him at UWS. His insights range from the creation of the music to the larger-than-life DePalma himself, and include descriptions of giant cranes chucking down fake rain onto a Biblical-scale location shoot in New York.

    The music created for Carlito’s Way, DePalma’s crime classic starring Al Pacino, is dramatic and rangy, with passages of glacial orchestrated strings – the title theme is a highlight – sitting alongside solo piano, small jazz ensemble and interesting sonic juxtapositions.

    The Elevator from Carlito’s Way.

    One piece, The Elevator, combines marimba, piano and plucked strings in unison against guiro and woodblocks. It establishes a theme that builds intensely, adding different instrumental colour towards the famous climax in Grand Central Station in New York. It is recognisably “movie music”, but tells its own melodic story.

    5. Doyle’s choice – Indochine (1992)

    When I asked Patrick to choose an “undiscovered gem for a new generation”, he quickly picked Indochine, the 1992 drama starring Catherine Deneuve. The movie won the Oscar for best foreign language film the following year, and the music is classic Doyle: melodic, rich in harmony and grand enough in orchestral scale to match the sumptuous visual language of the film.

    Premier Rendez-Vous from Indochine.

    That devotional, romantic sound is in full flow too. Pieces like Premier Rendez-Vous and Journey’s End are almost heady in conception and execution. Among the most distinctive pieces in this hugely expansive work, and indeed in Doyle’s own canon, are two sure-footed tango and rumba pieces, and the title theme itself with its unusual and atmospheric combination of ethereal wordless vocal, eastern bass drum, gong and finger bells. Essential.

    David Scott does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Patrick Doyle’s five best film scores – including his pick of an undiscovered gem – https://theconversation.com/patrick-doyles-five-best-film-scores-including-his-pick-of-an-undiscovered-gem-247132

    MIL OSI – Global Reports

  • MIL-OSI Global: Trump pulls out of WHO and Paris – how did international bodies get through deglobalisation last time around?

    Source: The Conversation – UK – By Perri 6, Emeritus Professor of Public Management, Queen Mary University of London

    Donald Trump has ordered the US to leave the World Health Organization. Skorzewiak / Shutterstock

    Following Donald Trump’s return to the White House, much attention has been given to his plans for tariffs on imported goods, deportations of illegal migrants, and cuts to federal government spending. Fewer column inches have addressed the implications of his presidency for global regulatory bodies.

    Just as he did during his first term, Trump has announced the withdrawal of the US from the World Health Organization (WHO) and from the Paris climate accords.

    And because his tariffs programme will challenge World Trade Organization (WTO) rules, Trump is likely to continue the US policy of stymieing the WTO’s appellate body, which adjudicates on trade disputes between states. US withdrawals from other international regulatory bodies are also possible.

    Each of the bodies from which Trump withdrew last time around survived. However, threats to global regulatory bodies today could be greater than they were during Trump’s first term.

    In the US and beyond, deglobalisation has so far been evident only in state policies, and not in trade flows. China, for example, has set up and now dominates several regional investment and trade organisations to provide alternatives to the International Monetary Fund and World Bank.

    However, tariff retaliation and bloc-based regulatory standards could soon turn “slowbalisation” – a trend whereby political support for open trade has gradually weakened and the rate of growth in world trade has slowed – into trade deglobalisation.

    We have been here before. The 1930s were characterised by high tariffs, the breakup of trade into blocs, and the withdrawal and expulsion of major powers from global bodies. By the 1940s, with the second world war under way, trade was conducted almost exclusively among allies.

    Yet almost all international regulatory bodies survived this period, albeit bruised and able to achieve less as a result.

    Our study, which was published in 2021, distinguished pathways through which three distinct groups of global regulatory bodies either survived or else handed over their archives, networks and organisational capacity to their UN-era successors.

    Preserving rule sets

    One inter-war group of industry-specific global regulators oversaw capital-intensive and infrastructure-heavy international industries such as telecommunications and railways. This group included the International Telecommunication Union and a modest alphabet soup of closely cooperating railway bodies.

    In these fields, interconnection depended on common but frequently updated and adjusted rule sets for technology, accounting and routing management. They also required continuous statistical collections by international bureaus.

    Unable to agree major regulatory innovation after the global economic crisis began in 1931, these bodies reduced their focus to managing and maintaining their existing rule sets and information services.

    On the outbreak of war in Europe, their bureaus went into a phase of severely reduced activity, with many of their activities suspended. However, they continued to collect and publish statistics, maintained their networks within member states, and developed ambitious plans for peacetime.

    The International Telecommunication Union and the railway authorities resumed operations shortly after the end of hostilities with their rule sets intact.

    Individual brokering work

    A second cluster comprised generic bodies responsible for overseeing labour relations and aspects of capital flows. These are faster-moving fields than infrastructure-heavy industries. The bodies included the International Labour Organization (ILO) and the Economic and Financial Organisation of the League of Nations (EFO).

    They provided expertise for negotiating agreements on particular problems. In the case of the ILO, this included conventions on working time, women’s working conditions, and forced labour. The EFO brokered financial support with strict conditions for Austria and Hungary, then new and struggling states which faced acute financial crises in the early 1920s.

    These organisations faced increasing difficulties during the deglobalisation of the 1930s. But they continued to provide bilaterally negotiated support for many countries. The ILO, for example, provided technical assistance to some South American governments on the design of social insurance schemes, while the EFO’s financial committee worked with central banks.

    Survival or bequest was secured by the brokering work of key individual leaders who were able to exploit fluid networks among states, firms and unions in global labour and capital debates.

    The EFO secured the transfer of key staff, networks and traditions to post-war bodies including the UN Economic and Social Council and the UN Food and Agriculture Organization. And the ILO’s director-general, Edward Phelan, was crucial in negotiating with the US to relaunch the organisation with a new programme for the post-war era.

    New international clubs

    A third group of regulatory bodies was created precisely in response to the 1930s global economic crisis. These were international commodity unions for goods such as tin, rubber, tea and sugar.

    Most were publicly run cartels, often backed by the imperial blocs that dominated the fragmenting world trade system. Like many cartels, their cohesion was fragile. But many of those that were successfully established managed to survive the 1930s and the war that followed.

    Their survival depended less on the formal administrative organisation that sustained the infrastructure bodies, or on the individual brokering work that sustained the capital and labour bodies. It rested more on their ability to draw on club-like collective bonds, both among major producing and exporting firms and among officials across key producer states and imperial authorities.

    Within the tightly bonded International Tin Committee, for example, a succession of agreements on prices, quotas and voting rights were settled. Despite initial US reluctance to see these international commodity unions continue into peacetime, President Harry Truman was persuaded of their temporary value for economic order during reconstruction.

    Some even continued until the 1970s, when they collapsed in that decade’s global economic turmoil. Freer markets then superseded intergovernmental cartels.

    Trump’s policies, as well as those of China, Russia and other major powers, may again endanger the roles of global regulatory bodies. But some will survive by focusing on the routine maintenance services provided by their bureaus, and some will empower individual leaders to negotiate their way to reinvention and survival.

    Others will pass their capacity to new agencies when deglobalisation eventually abates. And some new international bodies may emerge in response to conditions in industries most adversely affected by the changing terms of trade.

    Our work has led us to conclude that which strategy is chosen depends on two things. First, on the features of the field being regulated. And second, on the informal social organisation within the international bodies and member states, which shapes how people can act and the skills they can sustain.

    It remains to be seen how informal social organisation in the WHO and climate treaty system will now evolve after US withdrawal.

    Eva Heims has received funding from the ESRC.

    Martha Prevezer and Perri 6 do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Trump pulls out of WHO and Paris – how did international bodies get through deglobalisation last time around? – https://theconversation.com/trump-pulls-out-of-who-and-paris-how-did-international-bodies-get-through-deglobalisation-last-time-around-247919

    MIL OSI – Global Reports

  • MIL-OSI Global: Deepseek: China’s gamechanging AI system has big implications for UK tech development

    Source: The Conversation – UK – By Feng Li, Chair of Information Management, Associate Dean for Research & Innovation, Bayes Business School, City St George’s, University of London

    Koshiro K

    DeepSeek sent ripples through the global tech landscape this week as it soared above ChatGPT in Apple’s app store. The meteoric rise has shifted the dynamics of US-China tech competition, shocked global tech stock valuations, and reshaped the future direction of artificial intelligence (AI) development.

    Among the industry buzz created by DeepSeek’s rise to prominence, one question looms large: what does this mean for the strategy of the third leading global nation for AI development – the United Kingdom?

    The generative AI era was kickstarted by the release of ChatGPT on November 30 2022, when large language models (LLMs) entered mainstream consciousness and began reshaping industries and workflows, while everyday users explored new ways to write, brainstorm, search and code. We are now witnessing the “DeepSeek moment” – a pivotal shift that demonstrates the viability of a more efficient and cost-effective approach for AI development.

    DeepSeek isn’t just another AI tool. Unlike ChatGPT and other major LLMs developed by tech giants and AI startups in the USA and Europe, DeepSeek represents a significant evolution in the way AI models are developed and trained.

    Most existing approaches rely on large-scale computing power and datasets (used to “train” or improve the AI systems), limiting development to a few extremely wealthy market players. DeepSeek not only demonstrates a significantly cheaper and more efficient way of training AI models; its permissive open-source MIT licence (named after the Massachusetts Institute of Technology, where the licence originated) also allows users to deploy, modify and build on the tool.

    This helps democratise AI, taking up the mantle from US company OpenAI – whose initial mission was “to build artificial general intelligence (AGI) that is safe and benefits all of humanity” – enabling smaller players to enter the space and innovate.

    By making cutting-edge AI development accessible and affordable to all, DeepSeek has reshaped the competitive landscape, allowing innovation to flourish beyond the confines of large, resource-rich organisations and countries.

    It has also set a new benchmark for efficiency in its approach, by training its model at a fraction of the cost, and matching – even surpassing – the performance of most existing LLMs. By employing innovative algorithms and architectures, it is delivering superior results with significantly lower computational demands and environmental impact.

    Why DeepSeek matters

    DeepSeek was conceived by a group of quantitative trading experts in China. This unconventional origin holds lessons for the UK and US.

    While the UK – particularly London – has long attracted scientific and technological excellence, many of the highest-achieving young graduates have disproportionately opted for careers in finance, at the expense of innovation in other critical sectors such as AI. Diversifying the pathways for Stem (science, technology, engineering and maths) professionals could yield transformative outcomes.

    The UK government’s recent and much-publicised 50-point action plan on AI offers glimpses of progressive intent, but also displays a lack of boldness to drive real change. Incremental steps are not sufficient in such a fast-moving environment. The UK needs a new plan – one that leverages its unique strengths while addressing systemic weaknesses.

    Firstly, it’s important to recognise that the UK’s comparative advantage lies in its leading interdisciplinary expertise. World-class universities, thriving fintech and dynamic professional services and creative sectors offer fertile ground for AI applications that extend beyond traditional tech silos. The intersection of AI with finance, law, creative industries and medicine presents opportunities to lead in some niche but high-impact areas.

    Secondly, the UK’s funding and regulatory frameworks are due an overhaul. DeepSeek’s development underscores the importance of agile, well-funded ecosystems that can support big, ambitious “moonshot” projects. Current UK funding mechanisms are bureaucratic and fragmented, favouring incremental innovations over radical breakthroughs, at times stifling innovation rather than nurturing it. Simplifying grant applications and offering targeted tax incentives for AI startups would represent a healthy start.

    Finally, it will be critical for the UK to keep its talent in the country. The UK’s AI sector faces a brain drain as top talent gravitates toward better-funded opportunities in the US and China. Initiatives such as public-private partnerships for AI research development can help anchor talent at home.

    DeepSeek’s rise is an excellent example of strategic foresight and execution. It doesn’t merely aim to improve existing models, but redefines the very boundaries of how AI could be developed and deployed – while demonstrating efficient, cost-effective approaches that can yield astounding results. The UK should adopt a similarly ambitious mindset, focusing on areas where it can set global standards rather than playing catch-up.

    AI’s geopolitics cannot be ignored either. As the US and China compete with one another, the UK has a critical role to play as the trusted intermediary and ethical leader in AI governance. By championing transparent AI standards and fostering international collaboration, the UK can punch above its weight on the global stage.

    DeepSeek’s success should serve as a wake-up call. Britain has the talent, institutions and entrepreneurial spirit to be a significant leading player in AI – but it must act decisively, and now.

    It is time to move beyond token gestures and embrace bold strategies that move the needle and position the UK as a leader in an AI-driven future. This moment calls for action, not just more conversation.

    DeepSeek has raised the bar. It is now up to the UK to meet it.

    Feng Li does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Deepseek: China’s gamechanging AI system has big implications for UK tech development – https://theconversation.com/deepseek-chinas-gamechanging-ai-system-has-big-implications-for-uk-tech-development-248387

    MIL OSI – Global Reports

  • MIL-OSI Global: DeepSeek: how a small Chinese AI company is shaking up US tech heavyweights

    Source: The Conversation – Global Perspectives – By Tongliang Liu, Associate Professor of Machine Learning and Director of the Sydney AI Centre, University of Sydney

    Chinese artificial intelligence (AI) company DeepSeek has sent shockwaves through the tech community, with the release of extremely efficient AI models that can compete with cutting-edge products from US companies such as OpenAI and Anthropic.

    Founded in 2023, DeepSeek has achieved its results with a fraction of the cash and computing power of its competitors.

    DeepSeek’s “reasoning” R1 model, released last week, provoked excitement among researchers, shock among investors, and responses from AI heavyweights. The company followed up on January 28 with a model that can work with images as well as text.

    So what has DeepSeek done, and how did it do it?

    What DeepSeek did

    In December, DeepSeek released its V3 model. This is a very powerful “standard” large language model that performs at a similar level to OpenAI’s GPT-4o and Anthropic’s Claude 3.5.

    While these models are prone to errors and sometimes make up their own facts, they can carry out tasks such as answering questions, writing essays and generating computer code. On some tests of problem-solving and mathematical reasoning, they score better than the average human.

    V3 was trained at a reported cost of about US$5.58 million. This is dramatically cheaper than GPT-4, for example, which cost more than US$100 million to develop.

    DeepSeek also claims to have trained V3 using around 2,000 specialised computer chips, specifically H800 GPUs made by NVIDIA. This is again far fewer than other companies, which may have used up to 16,000 of the more powerful H100 chips.

    On January 20, DeepSeek released another model, called R1. This is a so-called “reasoning” model, which tries to work through complex problems step by step. These models seem to be better at many tasks that require context and have multiple interrelated parts, such as reading comprehension and strategic planning.

    The R1 model is a tweaked version of V3, modified with a technique called reinforcement learning. R1 appears to work at a similar level to OpenAI’s o1, released last year.

    DeepSeek also used the same technique to make “reasoning” versions of small open-source models that can run on home computers.

    This release has sparked a huge surge of interest in DeepSeek, driving up the popularity of its V3-powered chatbot app and triggering a massive price crash in tech stocks as investors re-evaluate the AI industry. At the time of writing, chipmaker NVIDIA has lost around US$600 billion in value.

    How DeepSeek did it

    DeepSeek’s breakthroughs have been in achieving greater efficiency: getting good results with fewer resources. In particular, DeepSeek’s developers have pioneered two techniques that may be adopted by AI researchers more broadly.

    The first has to do with a mathematical idea called “sparsity”. AI models have a lot of parameters that determine their responses to inputs (V3 has around 671 billion), but only a small fraction of these parameters is used for any given input.

    However, predicting which parameters will be needed isn’t easy. DeepSeek used a new technique to do this, and then trained only those parameters. As a result, its models needed far less training than a conventional approach.
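    One widely used way of exploiting this kind of sparsity is “mixture of experts” routing, in which a small gating network decides which slices of the model’s parameters to activate for each input. The NumPy sketch below is a generic, hypothetical illustration of that idea – it is not DeepSeek’s actual architecture or code:

    ```python
    import numpy as np

    def sparse_forward(x, experts, gate_w, k=2):
        """Route input x through only the top-k of many 'expert' networks.

        A generic mixture-of-experts sketch: the parameters of the
        unselected experts are never touched for this input.
        """
        scores = x @ gate_w                      # one gating score per expert
        top_k = np.argsort(scores)[-k:]          # indices of the k best experts
        weights = np.exp(scores[top_k])
        weights /= weights.sum()                 # softmax over the chosen experts
        # Only the selected experts' parameter matrices are multiplied.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))

    rng = np.random.default_rng(0)
    dim, n_experts = 8, 16
    experts = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
    gate_w = rng.standard_normal((dim, n_experts))
    x = rng.standard_normal(dim)
    y = sparse_forward(x, experts, gate_w, k=2)
    ```

    Because only two of the 16 expert matrices are used for any given input, roughly seven-eighths of the parameters are skipped on each forward pass – the kind of saving that makes training and running such models far cheaper.
    
    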

    The other trick has to do with how V3 stores information in computer memory. DeepSeek has found a clever way to compress the relevant data, so it is easier to store and access quickly.
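    A toy way to picture this kind of compression is storing a low-rank projection of the cached data and approximately reconstructing it on access. The NumPy sketch below is illustrative only – the projection is random rather than learned, and it is not DeepSeek’s actual storage scheme:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    seq_len, d_model, d_compressed = 128, 64, 8

    # Full cache: one d_model-sized vector per past token.
    kv_cache = rng.standard_normal((seq_len, d_model))

    # A down-projection (random here, learned in practice) stores a much
    # smaller latent vector per token; an up-projection gives an
    # approximate reconstruction when the cache is read.
    down = rng.standard_normal((d_model, d_compressed)) / np.sqrt(d_model)
    up = np.linalg.pinv(down)                  # toy reconstruction map

    compressed = kv_cache @ down               # what actually gets stored
    restored = compressed @ up                 # decompressed on access

    print(compressed.nbytes / kv_cache.nbytes)  # prints 0.125 — 8x smaller
    ```

    Storing an eighth of the data per token means the same memory holds far more context, which is one route to the efficiency gains described above.
    
    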

    What it means

    DeepSeek’s models and techniques have been released under the free MIT License, which means anyone can download and modify them.

    While this may be bad news for some AI companies – whose profits might be eroded by the existence of freely available, powerful models – it is great news for the broader AI research community.

    At present, a lot of AI research requires access to enormous amounts of computing resources. Researchers like myself who are based at universities (or anywhere except large tech companies) have had limited ability to carry out tests and experiments.

    More efficient models and techniques change the situation. Experimentation and development may now be significantly easier for us.

    For consumers, access to AI may also become cheaper. More AI models may be run on users’ own devices, such as laptops or phones, rather than running “in the cloud” for a subscription fee.

    For researchers who already have a lot of resources, more efficiency may have less of an effect. It is unclear whether DeepSeek’s approach will help to make models with better performance overall, or simply models that are more efficient.

    Tongliang Liu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. DeepSeek: how a small Chinese AI company is shaking up US tech heavyweights – https://theconversation.com/deepseek-how-a-small-chinese-ai-company-is-shaking-up-us-tech-heavyweights-248434

    MIL OSI – Global Reports

  • MIL-OSI Global: A hot and troubled world of work: how South Africa’s bold new climate act and labour law can align to drive a just transition

    Source: The Conversation – Africa – By Debbie Collier, Professor of Law and Director of the Centre for Transformative Regulation of Work, University of the Western Cape

    Increased average temperatures, climate variability, and extreme weather events are taking a toll on the environment and disproportionately affecting the lives and livelihoods of vulnerable communities. This is intensifying challenges in the world of work.

    Working on a warmer planet increases health and safety risks and affects workers’ well-being and productivity. These risks are a challenge for employment, labour standards, and the creation of decent work.

    Temperatures in South Africa are rising faster than the global average. And finding ways to adapt to climate change and navigate its challenges is becoming increasingly urgent. These challenges are compounded by the disruptions of an energy transition. South Africa also has high levels of inequality and unemployment.

    South Africa, one of the largest carbon dioxide (CO₂) emitters in Africa, has committed to reducing its emissions with the aim of reaching net zero emissions by 2050. But how does the country balance the need to cut carbon emissions with protecting an already vulnerable working population during the energy transition?

    Enabling a just transition is a focus for the constituencies of the National Economic Development and Labour Council (Nedlac). The council is South Africa’s national social dialogue institution. It consists of representatives from the state, organised labour, organised business, and community organisations. The council’s Labour Market Chamber has been working on how best to integrate principles of labour and environmental justice. And how labour laws can be used to support a just energy transition.

    The University of the Western Cape’s Centre for Transformative Regulation of Work, of which I am the director, has supported the council and its social partners in labour law reform processes. The aim is to ensure that labour laws and policy are responsive to the changing world of work, and are “fit for purpose” in the just transition era.

    Two priorities are to implement the Climate Change Act as envisaged. And to use and develop labour law to support a just transition.

    The Climate Change Act

    The Climate Change Act 22 of 2024 incorporates the goal of decent work within a commitment to a just transition. The act, which will take effect on a date yet to be determined, defines a just transition as

    a shift towards a low-carbon, climate-resilient economy and society and ecologically sustainable economies and societies which contribute toward the creation of decent work for all, social inclusion, and the eradication of poverty.

    The act is ambitious in its scope and leaves no part of society untouched. It aims to restructure the economy from one dependent on fossil fuels to a low carbon economy, at the same time contributing to decent work and an inclusive society.

    New institutional arrangements are envisaged and existing institutions are expected to adapt. Relevant state actors must “review and if necessary revise, amend, coordinate and harmonise their policies, laws, measures, programmes and decisions” to “give effect to the principles and objects” of the act.

    The act provides impetus for change and an opportunity to revisit the country’s labour law and industrial relations landscape.

    Labour law in a just transition era

    South Africa’s labour law promotes both collective bargaining and employee consultation processes — the “dual channels” for engagement. However, industrial relations are typically characterised by adversarial bargaining over wages and economic distribution. This approach falls short of the nuanced and collaborative processes needed to navigate a just transition. The first step requires a shift from familiar, adversarial patterns of engagement.

    The energy transition and adaptation to climate change may have significant implications for job security and employment. These include

    • the adoption of new technologies, resulting in workplace restructuring

    • changes in the organisation of work or work methods

    • the discontinuation of operations, either wholly or in part.

    The framework for constructive engagement on such developments includes institutions and mechanisms at workplace, sector and national levels. At the workplace, workplace forums were intended for this purpose.

    Workplace forums are voluntary institutions introduced in the Labour Relations Act 66 of 1995 to ensure that workers are consulted and have a voice in decisions that affect them. Unfortunately, the uptake of workplace forums has been limited.

    Industry and sector institutions include bargaining councils and the Sector Education and Training Authorities. These should be developed into spaces for consultation on measures to support a just transition and coordination of skills development and industrial policy.

    Nationally, Nedlac is the apex social dialogue institution. There’s also the Presidential Climate Commission which was established by President Cyril Ramaphosa to oversee and facilitate a just transition. The commission is regulated by the Climate Change Act. It plays a critical role in steering just transition policy processes and building consensus on regulatory developments.

    What are the gaps?

    Labour law has limited scope to address environmental degradation or the concerns of communities. To plug this gap, programmes that integrate rights, policies and services for workers and communities affected by the energy transition should be considered. For example the framework for Social and Labour Plans in the mining sector could be augmented to support a just transition.

    Labour law functions and mechanisms that support a just transition may need to be strengthened. Key areas for improvement include:

    • the framework and ecosystem for skills development to prepare workers for job transitions

    • occupational health and safety and labour standards for the protection of workers in conditions of increased heat and extreme weather events

    • the scope, application and objectives of social security schemes and social protection for workers affected by the transition to a low-carbon economy.

    Other steps towards a just transition include:

    • making environmentally sustainable practices a priority in all workplaces, with consultation and coordinated responses not limited to workplaces, sectors and industries that are directly affected, such as coal mining

    • putting adaptation to climate change at the forefront of the collective efforts of all South Africans – perhaps even more so in higher education institutions, where the responsibility to educate, innovate and lead by example is paramount.

    South Africa’s climate change law envisages a pathway to social inclusion and decent work. Its labour laws provide critical tools for the transition.

    Debbie Collier, Shane Godfrey, Vincent Oniga and Abigail Osiki co-authored the Nedlac report, Optimising labour law for a just transition (2024).

    Debbie Collier receives funding from the National Research Foundation (NRF) and is the director of the Centre for Transformative Regulation of Work (CENTROW). CENTROW has received funding to assist the National Economic Development and Labour Council (NEDLAC) and social partners in labour law reform processes.

    ref. A hot and troubled world of work: how South Africa’s bold new climate act and labour law can align to drive a just transition – https://theconversation.com/a-hot-and-troubled-world-of-work-how-south-africas-bold-new-climate-act-and-labour-law-can-align-to-drive-a-just-transition-243406

    MIL OSI – Global Reports