Category: Universities

  • MIL-OSI Global: AI is consuming more power than the grid can handle — nuclear might be the answer

    Source: The Conversation – Canada – By Goran Calic, Associate Professor of Strategy and Entrepreneurship Leadership Chair, McMaster University

    New partnerships are forming between tech companies and power operators — ones that could reshape decades of misconceptions about nuclear energy.

    Last year, Meta (Facebook’s parent company) put out a call for nuclear proposals, Google agreed to buy new nuclear reactors from Kairos Power, Amazon partnered with Energy Northwest and Dominion Energy to develop nuclear energy, and Microsoft committed to a 20-year deal to restart Unit 1 of the Three Mile Island nuclear plant.

    At the centre of these partnerships is artificial intelligence’s voracious appetite for electricity. One Google search uses about as much electricity as turning on a household light for 17 seconds. Asking a Generative AI model like ChatGPT a single question is equivalent to leaving that light on for 20 minutes.




    Read more:
    AI is bad for the environment, and the problem is bigger than energy consumption


    Having GenAI generate an image can draw about 6,250 times more electricity than asking a single question, roughly the energy of fully charging a smartphone, or enough to keep the same light bulb on for 87 consecutive days.
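    These light-bulb equivalents can be checked with back-of-envelope arithmetic. The sketch below assumes a 10-watt LED bulb, which is an assumption; the article does not state a wattage, though the 6,250x ratio between a question (20 minutes of light) and an image (87 days of light) holds regardless of the wattage chosen.

    ```python
    # Rough sketch of the light-bulb comparisons above.
    # Assumes a 10 W LED bulb; the wattage is an assumption, not stated
    # in the article, so the absolute Wh figures are illustrative only.
    BULB_WATTS = 10

    def bulb_energy_wh(seconds):
        """Energy (watt-hours) to run the assumed bulb for `seconds`."""
        return BULB_WATTS * seconds / 3600

    search_wh = bulb_energy_wh(17)             # one Google search ~ 17 s of light
    question_wh = bulb_energy_wh(20 * 60)      # one GenAI question ~ 20 min
    image_wh = bulb_energy_wh(87 * 24 * 3600)  # one generated image ~ 87 days

    print(f"search:   {search_wh:.3f} Wh")          # ~0.047 Wh
    print(f"question: {question_wh:.1f} Wh")        # ~3.3 Wh
    print(f"image:    {image_wh / 1000:.1f} kWh")   # ~20.9 kWh

    # The question-to-image ratio is wattage-independent:
    # 87 days / 20 minutes ~ 6,264, matching the "about 6,250" figure.
    print(f"ratio: {image_wh / question_wh:.0f}x")
    ```

    Note that the ratio depends only on the two durations, so it survives any choice of bulb wattage.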

    The hundreds of millions of people now using AI have effectively added the equivalent of millions of new homes to the power grid. And demand is only growing. The challenge for tech companies is that few sources of electricity are well-suited to AI.

    The grid wasn’t ready for AI

    AI requires vast amounts of computational power running around the clock, often housed in energy-intensive data centres.

    Renewable energy sources such as solar and wind provide intermittent energy, meaning they don’t guarantee the constant power supply these data centres require. These centres must be online 24/7, even when the sun isn’t shining and the wind isn’t blowing.

    Fossil fuels can run continuously, but they carry their own risks. They have significant environmental impacts. Fuel prices can be unpredictable, as exemplified by the gas price spikes due to the war in Ukraine, and the long-term availability of fossil fuels is uncertain.

    Major tech companies like Google, Amazon and Microsoft say they are committed to eliminating CO2 emissions, making fossil fuels a poor long-term fit for them.

    This has pushed nuclear energy back into the conversation. Nuclear energy is a good fit because it provides electricity around the clock, maximizing the use of expensive data centres. It’s also clean, allowing tech companies to meet their low CO2 commitments. Lastly, nuclear energy has very low fuel costs, which allows tech companies to plan their costs far into the future.

    However, nuclear energy has its own set of problems that have historically been hard to solve — problems that tech companies may now be uniquely positioned to overcome.

    Is nuclear energy making a comeback?

    Nuclear power has long been considered too costly and too slow to build. The estimated cost of a 1.1 gigawatt nuclear power facility is about US$7.77 billion, but can run higher. The recently completed Vogtle Units 3 and 4 in the state of Georgia, for example, cost US$36.8 billion combined.

    Historically, nuclear energy projects have been hard to justify because of their high upfront costs. Like solar and wind power, nuclear energy has relatively low operating costs once a plant is up and running. The key difference is scale: unlike solar panels, which can be installed on individual rooftops, the kind of nuclear reactors tech companies require can’t be built small.

    Yet this cost is now more palatable when compared to the expense of AI data centres, which are both more costly and entirely useless without electricity. The first phase of OpenAI and SoftBank’s Stargate AI project will cost US$100 billion and could be entirely powered by a single nuclear plant.

    Nuclear power plants also take a long time to build. A 1.1 gigawatt reactor takes, on average, 7.5 years in the U.S. and 6.3 years globally. Projects with such long timelines require confidence in long-term electricity demand, something traditional utilities struggle to predict.

    To solve the problem of long-range forecasting, tech companies are incentivizing power providers by guaranteeing they’ll purchase electricity far into the future.

    These companies are also literally and financially moving closer to nuclear power, either by acquiring nuclear energy companies or locating their data centres next to nuclear power plants.

    Destigmatizing nuclear energy

    One of the biggest challenges facing nuclear energy is the perception that it’s dangerous and dirty. Per gigawatt-hour of electricity, nuclear produces only six tonnes of CO2. In comparison, coal produces 970, natural gas 720 and hydropower 24. Nuclear even has lower emissions than wind and solar, which produce 11 and 53 tonnes of CO2, respectively.

    Nuclear energy is also among the safest energy sources. Per gigawatt-hour, it causes 820 times fewer deaths than coal, 43 times fewer than hydropower and roughly the same as wind and solar.

    Still, nuclear energy remains stigmatized, largely because of persistent misconceptions and outdated beliefs about nuclear waste and disasters. For instance, while many public concerns remain about nuclear waste, existing storage solutions have been used safely for decades and are supported by a strong track record and scientific consensus.

    Similarly, while the Fukushima disaster in Japan displaced thousands of people and was extremely costly (total costs are expected to reach about US$188 billion), not a single person died of radiation exposure after the accident, a United Nations Scientific Committee of 80 international experts found.




    Read more:
    With nuclear power on the rise, reducing conspiracies and increasing public education is key


    For decades, there was little effort to correct public perceptions about nuclear fears because it wasn’t seen as necessary or profitable. Coal, gas and renewables were sufficient to meet the demand required of them. But that’s now changing.

    With AI’s energy needs soaring, Big Tech has classified nuclear energy as green and the World Bank has agreed to lift its longstanding ban on financing nuclear projects.

    Big Tech’s billion-dollar bet on nuclear

    The world has long lived with two nuclear dilemmas. The first is that, despite being one of the safest and cleanest forms of energy, nuclear was perceived as one of the most dangerous and dirtiest.

    The second is that upgrading the power grid requires large-scale investments, yet money had been funnelled into small, distributed sources like solar and wind, or dirty ones like coal and natural gas.

    Now tech companies are making hundred-billion-dollar strategic bets that they can solve both nuclear dilemmas. They are betting that nuclear can offer the kind of steady, clean power their AI ambitions require.

    This could be an unexpected positive consequence of AI: the revitalization of one of the safest and cleanest energy sources available to humankind.

    Michael Tadrous, an undergraduate student and research assistant at the DeGroote School of Business at McMaster University, co-authored this article.

    Goran Calic does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. AI is consuming more power than the grid can handle — nuclear might be the answer – https://theconversation.com/ai-is-consuming-more-power-than-the-grid-can-handle-nuclear-might-be-the-answer-258677

    MIL OSI – Global Reports

  • MIL-OSI Global: Which African countries are flourishing? Scientists have a new way of measuring well-being

    Source: The Conversation – Africa – By Victor Counted, Associate Professor of Psychology, Regent University

    What does it mean to live a good life? Psychologists and social scientists have been focusing on a new idea called flourishing – a sense of well-being that goes beyond just happiness or success. It’s about your whole life being good, including how you interact with other people and your community. So then, how do Africans fare when it comes to flourishing?

    Victor Counted is a psychological scientist whose research across 40 African countries offers a data-rich rethinking of flourishing on the continent. His findings challenge the dominant narrative that Africa is “lagging behind” in development by showing a more nuanced picture of what it means to live a good life. We asked him more.


    What is flourishing?

    Flourishing is more than economic growth or individual happiness. It’s a multidimensional state of being that reflects how people feel about their lives and how well their lives are actually going. So it also measures people’s values within their community.

    The idea of well-being often carries a Eurocentric emphasis on the individual – personal satisfaction, autonomy, achievement. Flourishing accounts for how whole a person is in relation to their environment.

    It includes the social, spiritual and ecological contexts in which one lives. So, it’s not just about how one feels, but how one lives – fully, meaningfully and in a satisfying relationship with the world around us.

    What’s the Global Flourishing Study?

    The Global Flourishing Study tries to measure global patterns of human flourishing. It’s an ongoing five-year longitudinal study of over 200,000 participants across 22 countries.

    I was part of a team of global scholars brought together to examine trends in what it means to live well across cultures and life circumstances.




    Read more:
    What makes people flourish? A new survey of more than 200,000 people across 22 countries looks for global patterns and local differences


    The study identifies six key dimensions of flourishing:

    • Happiness and life satisfaction
    • Mental and physical health
    • Meaning and purpose
    • Character and virtue
    • Close social relationships
    • Financial and material stability

    Participants rate how they’re doing in each of these areas on a scale from 0 to 10. Further questions capture experiences related to trust, loneliness, hope, resilience, and other related well-being variables.
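    To make the scoring concrete, here is a hypothetical sketch of how a composite score might be aggregated from the six 0–10 domain ratings described above, including a variant that sets financial status aside, as in the rankings discussed below. The domain keys, example ratings, and simple-mean aggregation are illustrative assumptions; the Global Flourishing Study’s actual scoring procedure may differ.

    ```python
    # Illustrative aggregation of the six 0-10 flourishing domains.
    # Domain names, example ratings, and the simple mean are assumptions
    # for demonstration; they are not the study's official methodology.
    DOMAINS = [
        "happiness_life_satisfaction",
        "mental_physical_health",
        "meaning_purpose",
        "character_virtue",
        "close_social_relationships",
        "financial_material_stability",
    ]

    def flourish_score(ratings, include_financial=True):
        """Mean of the 0-10 domain ratings; optionally drop the financial
        domain, mirroring rankings computed apart from financial status."""
        keys = DOMAINS if include_financial else DOMAINS[:-1]
        return sum(ratings[k] for k in keys) / len(keys)

    # A hypothetical respondent: strong relationships and character,
    # weaker financial stability.
    example = {
        "happiness_life_satisfaction": 8,
        "mental_physical_health": 7,
        "meaning_purpose": 9,
        "character_virtue": 9,
        "close_social_relationships": 8,
        "financial_material_stability": 4,
    }

    print(flourish_score(example))                           # 7.5
    print(flourish_score(example, include_financial=False))  # 8.2
    ```

    Dropping the financial domain raises this hypothetical respondent’s score, which is the kind of effect behind countries ranking higher once well-being is considered apart from financial status.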




    Of the 22 nations, five were African: Nigeria, Kenya, South Africa, Tanzania and Egypt.

    While these countries didn’t top the global rankings (Indonesia and Mexico did), Nigeria, Kenya and Egypt all reported relatively high flourishing scores, especially when well-being was considered apart from financial status.



    Nigeria, for example, ranked 5th globally in flourishing scores that excluded financial indicators – ahead of many wealthier nations. Nigerians indicated strengths in social relationships, character and virtues (like forgiveness or helping others). But potential areas of growth included financial well-being, housing, ethnic discrimination and education.

    Overall, this suggests that while material resources matter, they’re not the only thing that determines well-being. Kenya ranked 7th, Egypt 10th, Tanzania 11th and South Africa 13th. Each showed unique strengths in areas like meaning, social connection or mental health.

    You did a separate study on flourishing in Africa. What did you find?

    In a 2024 study we analysed data from the Gallup World Poll (2020–2022) to explore 38 indicators of well-being across 40 African countries.

    This study offered a more detailed and culture-sensitive picture of how Africans experience and prioritise flourishing. The dimensions explored were derived from both local and universal sources, allowing for regionally relevant insights.

    We found that African populations often score high in meaning, character and social relationships – despite economic hardship. This offers an important corrective to western assumptions about well-being.

    Some of our key findings were:

    ● There is significant diversity between and within African countries. Mauritius consistently ranked highest in life evaluations (overall satisfaction with their lives), while countries like Sierra Leone and Zimbabwe scored lowest.

    ● East African countries such as Rwanda and Ethiopia showed strong performance in social well-being indicators (like feeling respected or learning new things daily) even when economic indicators were low.

    ● Countries in West Africa, such as Senegal and Ghana, scored high in emotional well-being, with many people reporting positive daily emotions like enjoyment and laughter.

    ● Southern African nations, despite challenges like income inequality, displayed resilience through strong community ties and cultural practices rooted in the philosophy of ubuntu.

    The results reinforced that flourishing in Africa cannot be reduced to gross domestic product (GDP) per capita (a measure of the average economic output per person in a country), nor to western norms of success.

    What can African countries focus on to flourish?

    In my view, the path to greater flourishing lies in embracing local knowledge and investing in culturally relevant development priorities. Instead of following western pathways – centred on individual advancement – Africa can model alternative flourishing pathways that reflect what matters most to African people.

    1. Prioritise local knowledge systems

    African ideas about a connected society – like ubuntu (southern Africa), ujamaa (east Africa), teranga or wazobia (west Africa), and al-musawat wal tarahum (north Africa) – teach people to care for each other and live in peace. These values help people live meaningful lives and can inform leadership and legislation.

    2. Redefine development metrics

    Western development models focus on individual achievement, economic output and material consumption. GDP per capita fails to capture the everyday realities and aspirations of African communities. We should also measure things like how happy people are, how hopeful they feel about the future, how strong and resilient their communities are, and how clean, safe and dignifying their living environments are.

    This is not a new idea – for years development scholars have called for a shift away from narrow economic indicators toward a focus on human dignity, agency, and the real opportunities people have to pursue the lives they value. What’s new is the growing availability of data and the momentum to take these alternative metrics seriously in shaping national policies and priorities.

    3. Invest in education for character development

    Quality education is essential to unlocking the continent’s potential to flourish. But Africa needs more than just academic skills and workforce readiness – it needs a strategy for intentional development of values and habits that shape how a person thinks, feels, and acts with integrity.

    Part of the problem lies in how the humanities – fields like history, literature, philosophy, and religious studies – are often undervalued or underfunded in education systems. But it is precisely these disciplines that nurture moral imagination, critical reflection, and civic responsibility. We need educational models that form not just workers, but whole persons – people who can think ethically, act responsibly, and lead with character in their communities.




    Read more:
    What makes a person seem wise? Global study finds that cultures do differ – but not as much as you’d think


    What does Africa offer the world in terms of flourishing?

    Africa is not waiting to be saved. Across the continent, people are building communities of care, cultivating joy amid hardship, and passing on values of unity, faith, and compassion. This is what development looks like when rooted in human dignity.

    African flourishing goals offer an alternative vision for development – one that starts with what Africa already has, not what it lacks. These are locally emic aspirations for well-being. They are shaped by Africa’s indigenous knowledge systems, cultural values, and religious/spiritual traditions. Pursuing these goals means prioritising wholeness over wealth, community over consumption, and resilience over rescue.

    The continent has so much to offer the world: wisdom, strong community values, and ways of staying resilient and living fully even in hard times. But many of these local insights are missing in the global science of well-being.

    Victor Counted consults for the Africa Flourishing Initiative.

    ref. Which African countries are flourishing? Scientists have a new way of measuring well-being – https://theconversation.com/which-african-countries-are-flourishing-scientists-have-a-new-way-of-measuring-well-being-257458

    MIL OSI – Global Reports

  • MIL-OSI Russia: GUU at the competition “My country – my Russia”: the rector presented awards, and a graduate became the winner

    Translation. Region: Russian Federal

    Source: State University of Management – Official website of the State –

    On June 21, 2025, as part of the Youth Day of the St. Petersburg International Economic Forum (SPIEF), a solemn awards ceremony was held for the winners of the XXII season of the All-Russian competition “My Country – My Russia”.

    Rector of the State University of Management Vladimir Stroyev presented awards to the winners in the nomination “Transport. Communication routes of my country”. The coordinator of experts in this area, including teachers of our university, was the director of the Institute of Economics and Finance of the State University of Management Galina Sorokina.

    “For many years now I have taken part in the award ceremony and in the competition as a whole. Our teachers and students also participate actively. Every year, during the selection and evaluation of entries, we are all inspired by the participants’ ideas. Each time I am convinced that there are many passionate, talented and good people in our country,” said Vladimir Stroyev.

    In addition, Victoria Kostikova, who graduated this year from the International Management program at the Institute of Economics and Finance of the State University of Management, won the competition in the nomination “My Hospitable Russia” with her International Educational Project “Teleport”.

    “The project provides an opportunity for foreign students to become researchers of Russian culture, tell their stories, and share them with the world. Behind this project is friendship, which is stronger than borders and prejudices. We study the past through cultural heritage, explore the present through travel and dialogue, and together we shape a multipolar future where Russia is perceived not as an abstraction, but as a country of people to which one wants to return,” Victoria said about her project.

    In all, 183 participants aged 18 and over reached the final stage of the “My Country – My Russia” competition, and prizes went to the 39 authors of the best projects aimed at the socio-economic development of Russian territories.

    Please note: This information is raw content directly from the source of the information. It is exactly what the source states and does not reflect the position of MIL-OSI or its clients.

    MIL OSI Russia News

  • MIL-OSI USA: Stansbury Fights for Expanded Access to Healthcare, More Providers

    Source: United States House of Representatives – Representative Melanie Stansbury (N.M.-01)

     WASHINGTON, D.C. – Congresswoman Melanie Stansbury (NM-01) fought for expanded access to healthcare in rural and Indigenous communities during an Indian and Insular Affairs Subcommittee hearing.  

    Her bill, the IHS Provider Expansion Act, was reintroduced earlier in the month, and testimony about the legislation was heard during the subcommittee hearing.  

    Watch video of the hearing.  

    The legislation would establish an Office of Graduate Medical Education Programs within the Indian Health Service (IHS). It would expand the existing IHS Residency Program, building from the Shiprock-University of New Mexico (SUNM) Family Medicine Residency, the first of its kind in the nation.  

    “Access to healthcare should not be determined by history or geography,” said Rep. Melanie Stansbury (NM-01). “The IHS Provider Expansion Act is a vital step towards ensuring that Native and Indigenous communities can access healthcare and grow the number of medical professionals serving Native communities. By investing in medical education within the Indian Health Service, we can help expand healthcare and bridge the gap in healthcare disparities that have persisted for far too long.” 

    Testifying about the importance of the legislation was Dr. Adriann Begay from the Navajo Nation HEAL Initiative. Dr. Begay is Tábaahi (Edge of the Water clan) and born for Bít’ahnii (Folded Arms People clan). Her maternal grandparents are Ta’néészahnii (Badlands People clan) and paternal grandparents are Tl’aashchí’í (Red Cheek People clan).  

    She completed her undergraduate studies at the University of Arizona and received a medical degree from the University of North Dakota School of Medicine through the Indians into Medicine program. She completed her residency in Family Medicine at the University of Arizona and is a Diplomate of the American Board of Family Practice. Dr. Begay worked for the Indian Health Service for 21 years: first for four years as a primary care provider at Salt River Clinic under Phoenix Indian Medical Center, then for 17 years at Gallup Indian Medical Center as an urgent care physician and administrator. 

    Watch video of Dr. Begay’s testimony.  

    More about the bill and its impact:  

    In New Mexico, which is home to 23 Tribal Nations and a population that is nearly 12% Native, access to healthcare services is a pressing issue. Currently, IHS provides services in 37 states to about 2.2 million out of 3.7 million Indigenous people in the country.  

    This bill is projected to directly impact millions of people across the country served by the IHS to improve access to healthcare and medical professionals who understand the unique health challenges faced by Tribal communities.  

    By expanding access through IHS, this bill will also help to address the significant deficit of rural primary healthcare providers across the country. Recent data from the U.S. Department of Health and Human Services shows rural areas across the country face a significant deficit in primary care providers, with more than 80 million Americans living in Health Professional Shortage Areas (HPSAs).   

    By expanding graduate medical education opportunities through IHS, we can expect an increase in the number of physicians willing to practice in these underserved regions.  

    Key Provisions of the Legislation:  

    • Establishment of the Office: The Secretary of Health and Human Services makes permanent the Office of Graduate Medical Education Programs to oversee residency and fellowship initiatives within the IHS. 
    • Creating a Pipeline: The Office will facilitate opportunities for future healthcare professionals, paraprofessionals, and other health-related workers to engage in residency and fellowship programs. 
    • Oversight of Residency Programs: The Office will oversee existing residency and fellowship programs at IHS facilities and support the creation of additional programs aimed at recruiting and retaining healthcare professionals. 
    • Coordination with Academic Institutions: The Office will work in collaboration with academic institutions to strengthen educational ties and enhance training opportunities. 
    • Interagency Working Group: An interagency working group, involving various federal agencies, will assist in the implementation and sustainability of the Office, ensuring ongoing support and resources.  

    Read the bill here.  

    ###

    MIL OSI USA News

  • MIL-OSI USA: Governor Ivey Announces Appointment of Grace Jeter to Covington County Circuit Judgeship

    Source: US State of Alabama

    MONTGOMERY – Governor Kay Ivey on Monday announced the appointment of Grace Jeter as Covington County Circuit Court Judge.

    “Grace Jeter comes to the bench with a strong background as a prosecutor with extensive courtroom experience,” said Governor Ivey.  “In addition to serving for nearly two decades as an assistant district attorney, her legal career also includes work as a staff attorney in state appellate court. She is well versed in the law and will serve the people of Covington County with distinction.”

    “I am grateful for Governor Ivey’s appointment,” said Jeter. “Having worked for the people of Covington County for 20 years, I am humbled by the opportunity to continue serving them as Circuit Judge.”

    Jeter will succeed former 22nd Judicial Circuit Judge Ben Bowden, who was appointed to serve on the Alabama Court of Civil Appeals by Governor Ivey on May 21, 2025.

    Jeter’s legal experience includes 19 years of service as Assistant and Chief Assistant District Attorney in the 22nd Judicial Circuit District Attorney’s Office in Andalusia, where she tried more than 100 jury trials; four years’ service as Staff Attorney for the Alabama Court of Criminal Appeals; and two years as an attorney with Merrell & Bryan, LLC in Andalusia.

    A resident of Red Level, Alabama, Jeter and her husband, Jeff, have two children. She is a 1996 graduate of Huntingdon College in Montgomery, and she received her Juris Doctor in 1999 from Samford University’s Cumberland School of Law in Birmingham. Jeter is the first female Circuit Judge to serve in Covington County.

    Jeter’s appointment is effective immediately.

    Jeter’s official photo is attached.

    ###

    MIL OSI USA News

  • MIL-OSI Russia: Development of cooperation between Russia and China in the field of antimonopoly policy was discussed at the National Research University Higher School of Economics

    Translation. Region: Russian Federation

    Source: State University Higher School of Economics

    © Higher School of Economics

    The HSE hosted a roundtable discussion entitled “New Challenges for Antitrust Regulation: The Chinese Perspective.” The event was organized by the BRICS International Centre for Competition Law and Policy (BRICS Centre). Special guests were Chinese colleagues from the Competition Policy and Assessment Research Centre (CPAC) of the State Administration for Market Regulation of the People’s Republic of China (SAMR). Last year, the BRICS Centre and CPAC SAMR signed a strategic cooperation agreement.

    The meeting was also attended by representatives of the FAS Russia, the Eurasian Economic Commission, and employees of the BRICS Centre and the Faculty of Law. The discussion was moderated by Alexey Ivanov, Director of the BRICS Centre and Professor of the Faculty of Law at the National Research University Higher School of Economics.

    He recalled that last year the BRICS Centre developed a draft international fair competition platform, which was supported by the antimonopoly authorities of the association. The initiative was approved by Vladimir Putin at the Kazan summit last October, and it is now a priority task for the BRICS Centre in the context of multilateral cooperation on competition. Alexey Ivanov noted: “We expect that the Chinese Centre for Competition Policy and Expertise will become a key partner in the development of this platform.”

    The platform is intended to become a basis for the convergence of state policies and law enforcement practices to protect competition. The first stage of the project will be the creation of a unified system of interstate information exchange on economic concentration transactions and on the most pressing problems of socially significant markets. At the same time, the digitalization of cooperation within the BRICS is the key to the success of this “new architecture of international economic life.”

    Deputy Head of the FAS Russia Andrey Tsyganov addressed the participants with a welcoming speech. He covered the history of interaction between the agencies of the two countries, which began in 1996 with the signing of an agreement between the governments of the Russian Federation and China on cooperation in the field of antimonopoly policy and the fight against unfair competition. The current areas of partnership were detailed, including the exchange of best practices, coordination in border markets and joint work within the BRICS framework. “Our countries are the driving force behind cooperation in the BRICS format. Many important projects begin with our initiatives. This cooperation is focused on the so-called socially significant markets: food, pharmaceuticals, digital economy,” the speaker said. Further emphasizing the importance of digitalization, Andrey Tsyganov noted that Russia is carefully studying the experience of China in regulating digital markets, as well as new approaches and solutions of Chinese regulators.

    Deputy Director of CPAC Jie Fang spoke about the structure and activities of the center, as well as the work results of China’s antitrust regulator in 2024. During his speech, he also proposed three areas for further cooperation between the BRICS Center and CPAC. First, improving the cooperation mechanism by developing a clear direction and a shared understanding of common goals, including enhancing the role of CPAC in BRICS with the assistance of Russian colleagues. Second, focusing on issues of mutual interest, including antitrust supervision and enforcement in vital areas of the economy, developing mechanisms for monitoring the activities of Internet platforms, combating unfair competition in the digital environment, and protecting commercial secrets. Third, developing new methods of cooperation, involving mutual provision of professional advice and assistance on compliance management for companies operating in Russia and China, as well as sharing the latest research results and enhancing the effectiveness of mutual learning.

    In his speech, the head of the HR department of the CPAC, Changqing Wang, drew attention to the key role of human resources in antitrust research, emphasizing the need for educational work and training highly qualified specialists in this field. According to him, since its establishment, the center has paid special attention to supporting young personnel and improving their professional level.

    Liwei Xie, Director of the CPAC Institute of Platform Economy, spoke about the development and regulation of the platform economy in China. She began her report with the latest data on the development of the country’s digital sector, according to which the number of monthly active mobile Internet users in China has reached 1.26 billion. The volume of annual online retail sales exceeds 15 trillion yuan, which has allowed the Chinese online retail market to maintain its leading position in the world for 12 years in a row. At the same time, the platform economy has directly or indirectly provided employment for more than 200 million people.

    According to the speaker, China’s platform economy is a multi-layered and multi-faceted system, where e-commerce platforms such as Alibaba, JD.com and Pinduoduo together form a complete matrix and integrate multiple models, including B2C, C2C, B2B. In turn, short video entertainment platforms such as Douyin and Kuaishou have formed a complete industrial chain, from content creation to intellectual property incubation.

    In recent years, Chinese authorities have been aggressively cracking down on violations such as abuse of dominance, false advertising, counterfeit goods, and price scams. The regulator has conducted a number of high-profile antitrust investigations into Alibaba, Meituan, and CNKI (China National Knowledge Infrastructure). It has also tightened controls over mergers between companies in the platform economy and is clamping down on the placement of false advertising online. According to the regulator, these measures have already yielded results: major players have become more strict in complying with the rules, and the industry has entered a phase of “stable supervision.”

    The platform economy is supervised according to the principle that “whoever is responsible for the offline sector also supervises the online sector.” SAMR’s area of responsibility includes comprehensive market supervision, covering online trade in goods and services, antitrust activities, and combating unfair competition in the digital environment. The legal basis for this is the Law on Electronic Commerce, the Rules for Supervision of Online Commerce, as well as laws on combating unfair competition, on the protection of personal data, and intellectual property. In 2024, SAMR stepped up the fight against violations in live commerce, including the sale of counterfeit goods and price manipulation. Work is underway to revise laws on pricing and unfair competition, and new regulations are being prepared for streaming services and platforms.

    The Russian experience of regulating digital markets was presented by Irina Nikolaicheva, Head of the Department for Regulation of Communications and Information Technology of the FAS Russia. She reported that the agency is currently developing systemic approaches to the analysis and regulation of digital markets, studying such phenomena as network effects. The basis for this work was the amendments to the Law on Protection of Competition adopted in 2023, known as the fifth antimonopoly package. Before the amendments to the law, the service actively used soft law tools, in particular the “Principles of Good Conduct for Platforms” signed by the largest Russian marketplaces. Experience has shown that an integrated approach combining legislative measures and self-regulation is most effective. As part of the current regulation, the Government of the Russian Federation instructed the Ministry of Economic Development, together with the FAS Russia, to develop a separate bill on platform employment, designed to establish clear and non-discriminatory rules for access to the largest digital platforms, including marketplaces and taxi aggregators, to ensure a balance of interests of operators, market participants and consumers.

    Olga Korolkova, Assistant to the Member of the Board (Minister) for Competition and Antimonopoly Regulation of the Eurasian Economic Commission (EEC), shared her experience of supranational regulation. She recalled that the EAEU, which celebrated its 11th anniversary in May 2025, is an international organization of regional economic integration whose task is to ensure the free movement of goods, services, capital and labor. The EEC Competition Block, in turn, ensures this freedom in cross-border markets. As part of the strategic development directions until 2025, the Commission has prepared a draft agreement on e-commerce within the EAEU, establishing requirements for professional market participants, including requirements for platforms and advertising messages, and also touching upon issues of consumer protection, technical regulation, security and customs clearance of digital goods. In addition, the EEC Antimonopoly Block has already amended the methodology for assessing the state of competition, including criteria for analyzing digital markets, such as network effects.

    Summing up the meeting, Alexey Ivanov focused on the unique role of the antimonopoly regulator, which is called upon to act as a mediator and facilitator, taking a neutral and objective position. The regulator’s task is not to protect the interests of one of the parties, such as platform owners or their employees, but to promote the development of competition. The key goal of its activities is to ensure balanced and sustainable development of the market, when the growth and dominance of some participants to the detriment of others is not allowed.

    Speaking about the role of BRICS, Alexey Ivanov emphasized that the association is a “network of networks,” a superstructure over regional associations that performs the function of coordination between various regional structures, and, among other things, helps countries build a synchronized antimonopoly policy.

    Please note: This information is raw content directly from the source of the information. It is exactly what the source states and does not reflect the position of MIL-OSI or its clients.

    MIL OSI Russia News

  • MIL-OSI: Mizuho Americas Hires Yaron Kinar as Managing Director and Senior Equity Research Analyst Covering the Insurance Sector

    Source: GlobeNewswire (MIL-OSI)

    NEW YORK, June 23, 2025 (GLOBE NEWSWIRE) — Mizuho Americas today announced the hiring of Yaron Kinar as Managing Director and Senior Equity Research Analyst covering the Insurance sector. Based in Chicago, Kinar reports to the Head of Equity Research, Bill Featherston.

    Kinar has two decades of equity research experience in the insurance and financial sectors. He joins Mizuho from Jefferies, where he was lead Equity Research Analyst for North America P&C Insurance and Insurtech and was named runner-up in the 2023 and 2024 Institutional Investor (now Extel) All-America Research Team surveys.

    “Yaron’s reputation as an insightful and influential insurance industry equity analyst is a great addition to our team,” said Featherston. “His extensive experience will greatly benefit our clients and Mizuho as a whole as we build out our coverage of the Financials sector.”

    Prior to Jefferies, he held lead analyst roles at Goldman Sachs and Deutsche Bank, where he was recognized as an All-America Research Team survey Rising Star.

    Kinar began his career in underwriting at AIG and holds an MBA from Columbia Business School and an LL.B. from Hebrew University of Jerusalem.

    About Mizuho Americas
    Mizuho Financial Group, Inc. is one of the largest financial institutions in the world as measured by total assets of ~$2 trillion, according to S&P Global 2024. Mizuho’s 65,000 employees worldwide offer comprehensive financial services to clients in 36 countries and 850 offices throughout the Americas, EMEA, and Asia.

    Mizuho Americas is a leading Corporate and Investment Bank (CIB) that provides a full spectrum of client-driven solutions across strategic advisory, capital markets, corporate banking, and fixed income and equities sales & trading to corporate, government, and institutional clients in the US, Canada, and Latin America. Through its acquisition of Greenhill, Mizuho enhanced its M&A, restructuring, and private capital advisory capabilities across the Americas, Europe, and Asia. Mizuho Americas employs approximately 4,000 professionals. For more information visit www.mizuhoamericas.com.

    For inquiries, please contact:
    Jim Gorman
    Executive Director, Media Relations, Mizuho Americas
    +1-212-282-3867
    jim.gorman@mizuhogroup.com

    The MIL Network

  • MIL-OSI Global: Iran is considering closing the strait of Hormuz – why this would be a major escalation

    Source: The Conversation – UK – By Basil Germond, Professor of International Security, Department of Politics, Philosophy and Religion, Lancaster University

    Faced with the prospect of continuing Israeli airstrikes and further American involvement, Iran’s parliament has reportedly approved plans to close the strait of Hormuz.

    This is potentially a very dangerous moment. The strait of Hormuz is an important shipping lane through which 20% of the world’s oil transits – about 20 million barrels each day.

    The waterway connects the Persian Gulf and the Gulf of Oman. Iran can either disrupt maritime traffic or attempt to “close” the strait altogether. These are distinctly different approaches with different risks and outcomes.


    The first option is to try to disrupt maritime traffic, as Yemen’s Houthi rebels have been doing in the Red Sea since winter 2024. This can be done by attacking passing ships with rockets and drones.

    There are already reports that Iran has started to jam GPS signals in the strait, which has the potential to severely interfere with passing ships, according to US-based maritime analyst Windward.

    Disruption of this kind is likely to deter shipping companies from using this route for fear of casualties and loss of cargo. Shipping companies that want to avoid the Red Sea can always use alternative shipping lanes, such as the Cape of Good Hope route. As inconvenient as that is, there is no such option in the case of the Gulf.

    As we’ve seen with the Houthis’ attacks, such disruptions have an impact on oil prices, but also ripple effects on stock markets and inflation. Although the US and its western allies can absorb these economic effects – certainly for a while – disrupting the strait would still demonstrate that Tehran has some leverage.

    The credibility factor

    The second option – “closing” the strait – would involve interdicting all maritime traffic. This is akin to a blockade. And for it to work, as we have seen in the Black Sea with Russia’s failed attempt at blockading Ukraine, a blockade must be credible enough to deter all traffic.

    Iran has a number of ways to block the strait. It could deploy mines in the waters around the choke point and sink vessels to create obstacles. Iran would also likely use its navy, including submarines, to engage those attempting to break the blockade; use electronic and cyber attacks to disrupt navigation; and threaten civilian traffic and regional ports and oil infrastructure with drones and rockets.

    It’s worth noting that Iran still has plenty of short-range rockets. Israel claims to have destroyed much of its longer-range ballistic missile capability, but it is understood that the country still has a stockpile of short-range missiles that could be effective in targeting ships and infrastructure in the Gulf, as well as US bases in the region.

    Recent events have shown up Iran as a bit of a paper tiger. It has made bold claims about its plan to retaliate and the military strength it has to do so. Yet with almost no air power capabilities (apart from drones and missiles) and limited naval power – and with its proxies either defeated or on the back foot – Iran is no longer in a position to project power in the region.

    Iran’s response to the current Israeli attacks has not managed to inflict any major damage or achieve any strategic or political objectives. It’s hard to see a change on the battlefield as things stand.

    Vital waterway: 20% of the world’s oil transits through the Strait of Hormuz.
    w:en:Kleptosquirrel/Wikimedia Commons, CC BY-SA

    For this reason, Tehran’s best option is to target the strait of Hormuz, which has the potential to cause a significant spike in oil prices, leading to a major disruption of the global economy.

    Short of being able to rival the US or Israel on the battlefield, Iran might decide to use asymmetrical means of disruption (in particular missile and drone attacks on civilian shipping) to affect the global economy. Closing or disrupting the strait would be an effective way of doing that.

    A blockade, even a partial one, would offer Tehran some options on the diplomatic scene. For instance, it has been reported that the US asked China to convince Iran not to close the strait. This demonstrates that Tehran can use the threat of a blockade to its advantage on the diplomatic front. But for this to work, the blockade needs to be effective and thus sustained.

    What would be the effect of blocking the strait?

    Disrupting traffic in the strait could drag Gulf states – Iraq, Kuwait, Saudi Arabia, UAE, Bahrain and Qatar – into the conflict, since their interests will be directly affected. It’s important to consider how they might respond and whether this will drive them closer to the US – and even Israel, as was already happening with the Abraham Accords and the tentative, but shaky, rapprochement between Saudi Arabia and Israel.




    Read more:
    US joins Israel in attack on Iran and ushers in a new era of impunity


    These are all things Iran would have factored into its calculations a year ago, when Israel was targeting its proxies, including Hezbollah, Hamas and the various Shia militias it funds in Iraq and elsewhere. But now, given that it has suffered an enormous military setback, which has hurt the regime’s prestige and credibility – including, importantly, at home – Tehran is more likely to downplay these risks. I would expect it to proceed with its blockade plans.

    Even if China voices concerns, like it did regarding the Houthis’ attacks, this is unlikely to change the decision. The regime is cornered. If the leaders believe they could be toppled, they are likely to consider the risks worth taking, particularly if they feel it could give them diplomatic leverage.

    The US has enough naval and air power to disrupt such a blockade. It can preemptively destroy Iran’s mine-laying forces. It can also target missile launch sites inland and respond to threats as and when they arise.

    This is likely to prevent Iran from completely closing the strait. But it won’t prevent the Islamic republic from disrupting maritime trade enough to have serious effects on the world economy. This might well be one of the last cards the regime has to play, both on the battlefield and in the diplomatic arena.

    Basil Germond does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Iran is considering closing the strait of Hormuz – why this would be a major escalation – https://theconversation.com/iran-is-considering-closing-the-strait-of-hormuz-why-this-would-be-a-major-escalation-259562

    MIL OSI – Global Reports

  • MIL-OSI USA: Bowman, Unintended Policy Shifts and Unexpected Consequences

    Source: US State of New York Federal Reserve

    Thank you for the invitation to join you today.1 As the Federal Reserve’s Vice Chair for Supervision, I am responsible for, among other things, leading the Board’s Division of Supervision and Regulation in its work to promote the safe and sound operation of the U.S. banking system. While this includes the specific activities of bank supervision and regulation, the financial system reaches far beyond the banking system. Regulators must also monitor the effects of activities that extend outside this perimeter, for example activities that have migrated from banks to non-banks, or when there are broader market implications of regulatory actions and their potential effects on financial stability. Regulations should not be created in a static world of “set it and forget it.”
    Today, my remarks will focus specifically on how the passage of time—with underlying changes in the composition of the economy and the financial system, interest rate shifts, and patterns and preferences of banking and financial activity—can lead to unintended policy application and unexpected consequences. Regulators should consider these broader evolving dynamics as they craft regulations to endure beyond today’s circumstances.
    Typically, these effects are not contemplated in the scope of the usual cost-benefit analysis, as shifts occur over time after a new rule or regulation is implemented or enacted. But shifts can, in effect, become new policy choices with consequences that can pose significant issues.
    One shift in particular is that of the supplementary leverage ratio increasingly becoming the binding capital constraint for the largest banks in the United States. The U.S. banking system includes two basic types of capital requirements: risk-based requirements that impose a capital “charge” based on the underlying risk of a particular activity, and leverage-based requirements that do not differentiate based on the risk characteristics of underlying assets. And while leverage-based capital requirements are generally intended to operate as a backstop to risk-based requirements, changes in the financial system and the broader economy can alter this relationship between capital requirements. This shift in the nature of leverage-based capital requirements, from backstop to binding constraint, was not driven by a deliberate policymaking process, but rather by the maintenance of a high level of reserves in the banking system, as well as the introduction of liquidity requirements that compelled banks to replace loans with high-quality liquid assets.2
    Monetary Policy and Economic Outlook

    Before turning to the main theme of my remarks, I would like to give a brief update on my outlook for the economy and monetary policy.
    At the Federal Open Market Committee (FOMC) meeting last week, the Committee voted to maintain the target range for the federal funds rate at 4-1/4 to 4‑1/2 percent and to continue to reduce the Federal Reserve’s securities holdings. I supported this decision because the data shows a solid labor market and I would like to see further confirmation that inflation is close to our 2 percent target on a sustained basis.
    If inflation remains near its current level or continues to move closer to our target, or if the data show signs of weakening in labor market conditions, it would be appropriate to consider lowering the policy rate, moving it closer to a neutral setting.
    At this point, we have not seen significant economic impacts from trade developments or other factors, and the U.S. economy has continued to be resilient despite some slowing in economic growth. Private domestic final purchases (PDFP) growth slowed to a moderate pace in the first quarter, even as activity was partly boosted by a pull-forward of spending on motor vehicles and high-tech equipment ahead of the implementation of tariffs. Although the pull-forward of spending appears to be unwinding, retail and motor vehicle sales through May provide further evidence that PDFP has softened so far this year.
    The labor market appears to remain solid, with payroll employment rising about 140,000 per month, on average, in April and May, only slightly below the average monthly gains over the past two quarters. This pace of job gains appears consistent with the unemployment rate remaining at a low 4.2 percent through May, which is roughly unchanged since the middle of last year.
    The labor market appears to be stable near estimates of full employment, with layoffs remaining low. The number of job openings relative to job seekers has moved roughly sideways since the middle of last year at, or a touch below, the pre-pandemic level. And the labor market no longer appears to be especially tight or a significant source of inflation pressures, as most wage growth measures have slowed closer to a pace consistent with 2 percent inflation.
    Turning to inflation, we have seen a welcome return to further moderation of personal consumption expenditures (PCE) inflation over the past three months. The May consumer and producer price reports suggest that 12-month core PCE inflation stood at 2.6 percent in May, down meaningfully from its elevated reading of 2.9 percent at the end of last year. Similar to the past two years, elevated monthly inflation readings in January and February have been followed by low readings as we move into the spring.
    On a 12-month basis, core PCE goods inflation has picked up somewhat since last December, but this has been more than offset by a considerable slowing in core PCE services inflation. It appears that any upward pressure from higher tariffs on goods prices is being offset by other factors and that the underlying trend in core PCE inflation is moving much closer to our 2 percent target than is currently apparent in the data. With housing services inflation on a sustained downward trajectory, and other core services inflation already consistent with 2 percent inflation, only core goods inflation remains somewhat elevated, likely reflecting limited passthrough from tariffs.
    With economic growth slowing, it is possible that recent softness in aggregate demand could be starting to translate into weaker labor market conditions. While still strong, the labor market appears to be less dynamic, with modest hiring rates, layoffs edging up from low levels, and job gains concentrated in just a few industries. With inflation on a sustained trajectory toward 2 percent, softness in aggregate demand, and signs of fragility in the labor market, I think that we should put more weight on downside risks to our employment mandate going forward.
    Despite progress on lowering inflation, there are potential upside risks if negotiations result in higher tariffs or if firms raise goods prices independent of any tariff pass-through. Although we have not seen evidence of disruptive impacts on supply chains, changes in global trade patterns could lead to an increase in prices for goods and services. The current conflict in the Middle East or other geopolitical tensions could also lead to higher commodity prices.
    I am certainly attentive to these inflation risks, but I am not yet seeing a major concern, as some retailers seem unwilling to raise prices for essentials due to high price sensitivity among low-income consumers and as supply chains appear to be largely unaffected so far.
    Measures of policy and economic uncertainty have receded from recent highs, and measures of consumer and business sentiment have also improved in recent weeks after having dropped considerably. These developments reinforce my view that concerns will subside as more clarity emerges on trade policy. Businesses appear to be resuming investment and hiring decisions, as they feel increasingly confident that less favorable trade outcomes are unlikely to occur.
    I remain focused on how new policies evolve and whether future data releases will provide perspective about their economic impacts. On trade policy, I expect that negotiations will ultimately result in lower tariff rates than are currently in place, consistent with the resumption of financial market optimism. Further, should we see effects on inflation this year, I expect that increased slack in the economy will limit this to a small, one-off impact.
    Small and one-off price increases this year should translate only into a small drag on real activity. I also expect that less restrictive regulations, lower business taxes, and a more friendly business environment will likely boost supply and largely offset any negative effects on economic activity and prices.
    In considering the risks to achieving our dual mandate, I fully supported the revised characterization of uncertainty and the balance of risks in our most recent monetary policy statement, pointing to the diminished uncertainty and removing the emphasis on risks to both sides of our mandate. In my view, it was appropriate to recognize that the balance of risks has shifted. In fact, the data have not shown clear signs of material impacts from tariffs and other policies. I think it is likely that the impact of tariffs on inflation may take longer to materialize and have a smaller effect than initially expected, especially because many firms front-loaded their stocks of inventories. And, all considered, ongoing progress on trade and tariff negotiations has led to an economic environment that is now demonstrably less risky. The change in our monetary policy statement appropriately incorporates this shift in the balance of risks as well as the rapid improvement in many measures of uncertainty.
    As we think about the path forward, it is time to consider adjusting the policy rate. As inflation has declined or come in below expectations over the past few months, we should recognize that inflation appears to be on a sustained path toward 2 percent and that there will likely be only minimal impacts on overall core PCE inflation from changes to trade policy. We should also recognize that downside risks to our employment mandate could soon become more salient, given recent softness in spending and signs of fragility in the labor market.
    Before our next meeting in July, we will have received one additional month of employment and inflation data. If upcoming data show inflation continuing to evolve favorably, with upward pressures remaining limited to goods prices, or if we see signs that softer spending is spilling over into weaker labor market conditions, such developments should be addressed in our policy discussions and reflected in our deliberations. Should inflation pressures remain contained, I would support lowering the policy rate as soon as our next meeting in order to bring it closer to its neutral setting and to sustain a healthy labor market. In the meantime, I will continue to carefully monitor economic conditions as the Administration’s policies, the economy, and financial markets continue to evolve.
    It is important to note that monetary policy is not on a preset course. At each FOMC meeting, my colleagues and I will make our decisions based on the incoming data and the implications for and risks to the outlook, guided by the Fed’s dual-mandate goals of maximum employment and stable prices. I will also continue to meet with a broad range of contacts as I assess the appropriateness of our monetary policy stance.
    Bringing inflation in line with our price-stability goal is essential for sustaining a healthy labor market and fostering an economy that works for everyone in the longer run.
    Policy Shifts and Unintended Consequences

    In my responsibilities over bank regulation and supervision at the Federal Reserve, I intend to apply a pragmatic approach. We will review data and evidence, identify problems that need to be resolved, and develop efficient solutions to address those identified issues.3 While the regulatory authority of the Federal Reserve is primarily related to the banking system, the consequences of banking regulation and supervisory efforts are not limited to the banking system. Bank regulation and supervision affect how financial activities are conducted, the cost and availability of credit and financial services, and even what types of entities provide those services. While it is important to consider the consequences of regulatory actions as they evolve over time, in cases where regulation may create or exacerbate financial stability risks, we must examine whether those risks are justified by the safety and soundness benefits of the regulation.
    Bank-affiliated broker-dealers play a critical role in U.S. capital markets, including in Treasury market intermediation activities. Today I will discuss the lessons we have learned about how bank regulatory requirements, specifically leverage ratios in the United States, can have unintended consequences. Leverage ratio impacts on bank-affiliated broker-dealers can have broader impacts, including market impacts like those observed in Treasury market intermediation activities. Once we’ve identified “emerging” unintended consequences—issues that were not contemplated during the development of a regulatory approach—we must consider how to revisit earlier regulatory and policy decisions.
    As I will discuss in greater detail shortly, regulators must act quickly to address the growing problems with increasingly binding leverage ratios. In 2021, in connection with the expiration of temporary, emergency changes to the supplementary leverage ratio (SLR), the Federal Reserve committed to “soon” inviting public comment on potential modifications.4 Over four years later, a proposal has not been issued, and problems with Treasury market intermediation continue to emerge. The time has come for the federal banking agencies to revisit leverage ratios and their impacts on the Treasury markets.
    Looking at the Data: Treasury Market Functioning

    As a first step in this pragmatic approach, it is important to look at what the data say about Treasury market functioning. This is necessary before we determine whether there are issues or problems that can be addressed through adjustments to bank regulatory requirements.
    A review of Treasury market data provides a history of growing issues with Treasury market functioning. In recent years, U.S. policy debates have highlighted the need for preventative measures to ensure smooth market functioning. One persistent issue is low Treasury market liquidity, as the Board’s semiannual Financial Stability Report noted.5 In addition, some dealers experienced balance sheet pressure in intermediating record volumes of Treasury market transactions in the spring, at a time when reports from market participants also indicated reduced demand from other Treasury investors.6
    A survey of market participants from the Fed’s most recent Financial Stability Report noted that more than a quarter of respondents cited Treasury market functioning as a risk to the U.S. financial system and the broader global economy. This was an increase from the same survey conducted last fall when 17 percent of those surveyed cited Treasury market functioning as a risk.7
    The Securities and Exchange Commission’s central clearing requirement for U.S. Treasuries was adopted to improve Treasury market functioning and, once fully implemented, may do so. The Federal Reserve’s Standing Repo Facility may also help to promote smooth functioning in the Treasury market. But it is unclear how ongoing increases in the volume of Treasury issuance, the volume of Treasury securities outstanding, and changes to the Fed’s balance sheet over time may also affect market liquidity.
    Treasury markets have experienced stress events as recently as the September 2019 repo market stress, and the so-called “dash for cash” in March of 2020. In early April, we also saw strains in Treasury cash markets. Although markets continued to function, there were unexpected moves in Treasury yields, with an initial drop in yields followed by a sharp increase that seems to have been driven in part by the unwinding of the swap spread trade by leveraged investors in response to declining swap spreads.
    We do not know exactly what circumstances may lead to a future stress event or how it will manifest, and continuing to impose unwarranted limits on dealers’ intermediation capacity could exacerbate a future stress event in this critical market. But we do know that these events have raised concerns about the resilience of U.S. Treasury markets. Therefore, we should continue to actively monitor indicators of market functioning. Recent trends in both market liquidity indicators and survey responses suggest that this problem has persisted and may be becoming more severe. Low liquidity can create more volatility in prices, exacerbate the effects of market shocks, and threaten market functioning.
    Identifying the Problem: Looking Beyond Treasury Market Intermediation

    Large bank-affiliated primary dealers play a vital role in the intermediation of U.S. Treasury markets. These dealers are subject to, not insulated from, the effect of banking regulation. While many factors can affect market liquidity, including the growing volume of Treasury issuance, Treasury market saturation, and interest rate volatility, we must consider whether some of the pressure is a byproduct of bank regulation. Due to the role of large banks in the intermediation of Treasury markets, there is a direct link between banking regulation and Treasury market liquidity, particularly when it comes to the growth of “safe” assets in the banking system and the increase in leverage-based capital requirements becoming the binding capital constraint on some large banks. In 2018, the Federal Reserve along with the Office of the Comptroller of the Currency (OCC) proposed significant changes to the enhanced supplementary leverage ratio (eSLR) that applies to the largest banks.8 These revisions were never finalized, but the intent behind them was to return the eSLR to its traditional role as a backstop capital requirement instead of what has become a substantial balance sheet constraint.
    The proposed change was designed to promote resilience in the banking system and to protect financial stability, while also maximizing credit availability and economic growth throughout the credit cycle.9 During the COVID-19 pandemic, the Federal Reserve addressed constraints on the ability of U.S. banks to support efficient Treasury market functioning by temporarily excluding Fed reserves and Treasuries from the denominator of the SLR.10
    The central role of bank-affiliated broker-dealers in Treasury market intermediation has led us to take a close look at bank regulatory requirements to clarify how these requirements, particularly their calibration, may impact Treasury market functioning. Because leverage ratios do not distinguish low-risk activities like Treasury market intermediation from riskier ones, they have become increasingly binding as a bank capital constraint as market conditions change.
    While issues around the use of leverage ratios require close examination, a solid capital foundation in the banking system is critical to support safety and soundness and financial stability. Revisiting the calibration of leverage ratios to ensure that they remain backstops instead of creating binding constraints, especially in times of stress, should not be interpreted as a critique of the role of capital in a robust regulatory and supervisory framework.
    But to be clear, the consequences of an overly restrictive leverage ratio go well beyond just Treasury market intermediation, and impact a wide range of low-risk activities. Leverage capital requirements do not differentiate between the risk of different asset classes or exposures.
    However, in periods when bank balance sheets are expanding—like the significant deposit inflows during COVID-19—leverage capital requirements can unintentionally become the binding constraint on both banks and their affiliates. Required capital then grows with the balance sheet, regardless of the underlying risk. When constrained in this way, bank-affiliated primary dealers may pull back from intermediating low-risk assets like U.S. Treasuries. A binding leverage capital requirement can also create perverse incentives for banks to shift their balance sheets into higher-risk assets, since doing so could generate larger returns without requiring additional capital. These are direct consequences of overly restrictive leverage capital requirements.
    That leverage ratios are becoming increasingly binding is evident in simple metrics like the ratio of risk-weighted assets to total leverage exposure. These are, respectively, the denominators of risk-based capital ratios and the SLR. Shortly after the SLR was adopted in the U.S. in the mid-2010s, this ratio stood at 48 percent in the aggregate for the eight largest U.S. banks, the global systemically important banks (G-SIBs). Since then, the ratio has declined and currently stands at 40 percent, primarily due to higher reserves and other high-quality liquid assets on bank balance sheets. This downward trend makes the SLR increasingly the binding constraint: most of these assets carry a risk weight of zero under risk-based capital ratios but a 100 percent weighting under leverage capital ratios.
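    The arithmetic behind this shift can be sketched in a few lines. The requirement percentages below are illustrative assumptions, not the actual calibrations of any bank’s requirements:

```python
def binding_constraint(rwa, tle, risk_based_req=0.12, slr_req=0.05):
    """Return which requirement demands more capital for a bank with
    risk-weighted assets `rwa` and total leverage exposure `tle`.
    The requirement percentages are illustrative only."""
    risk_based_capital = risk_based_req * rwa  # risk-based minimum
    leverage_capital = slr_req * tle           # leverage (SLR) minimum
    return "SLR" if leverage_capital > risk_based_capital else "risk-based"

# Mid-2010s: risk-weighted assets stood at ~48 percent of leverage exposure.
print(binding_constraint(rwa=48, tle=100))  # -> risk-based
# Today: the ratio has fallen to ~40 percent as zero-risk-weight assets grew.
print(binding_constraint(rwa=40, tle=100))  # -> SLR
```

    Under these assumed calibrations, the leverage ratio becomes binding once risk-weighted assets fall below slr_req / risk_based_req, roughly 42 percent of total leverage exposure, consistent with the decline from 48 to 40 percent described above.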
    Efficient Solutions

    One example of the SLR’s unintended consequences is the erosion of liquidity in U.S. Treasury markets, driven in part by leverage ratio requirements increasingly becoming the binding constraint on the largest U.S. banks. This example also illustrates the necessity of evaluating tradeoffs in regulation and speaks to a larger issue with the calibration of leverage requirements.
    The banking regulators are uniquely positioned to both analyze and remediate components of the bank regulatory framework that may disrupt banks’ participation in low-risk, but economically critical activities. This includes the exacerbation of Treasury market illiquidity. Treasury markets play a critical role in the U.S. and global financial systems, and we should be proactive in addressing the unintended consequences of bank regulation, while ensuring the framework continues to promote safety, soundness, and financial stability.11 We should start by addressing potential constraints on Treasury market functioning before issues arise, lessening impacts from stress, and mitigating the need to intervene in future market events.
    On Wednesday, the Board is scheduled to consider specific amendments to the eSLR, which is the requirement that applies at both the holding company and bank levels of the largest U.S. banks. While I do not want to front-run the proposal, I will note that the proposal’s goal is to address a long-identified—and growing—problem with the calibration of this leverage requirement. The proposal would solicit public comment on the impacts of this miscalibration, potential fixes, and work to develop an appropriate and effective solution. This proposal takes a first step toward what I view as long overdue follow-up to review and reform what have become distorted capital requirements. This proposal, while meaningful, addresses only one element of the capital framework. More work on capital requirements remains, especially to consider how they have evolved and whether changes in market conditions have revealed issues that should be addressed.
    In a few weeks, on July 22, the Federal Reserve will host a conference to bring together a wide range of thought leaders to discuss the U.S. bank capital framework, including the design and calibration of leverage ratios. Fixing the design and calibration of leverage capital requirements will not resolve every issue with U.S. Treasury market functioning. But, simple reforms to return leverage ratio requirements to their traditional role as a capital backstop could improve Treasury market functioning by building resilience in advance of future stress events. And this could reduce the chances that we would need to intervene in Treasury markets should a future stress event arise. While we know well the issues created by the eSLR, there are many potential improvements that could address other issues within the capital framework.
    As I have noted previously, a broader set of reforms could include amending not only the leverage capital ratio, but also G-SIB surcharge requirements. We should also reconsider capital requirements for a wider range of banks, including the SLR’s application to banks with more than $250 billion in assets, Tier 1 leverage requirements, and the calibration of the community bank leverage ratio.
    The eSLR’s unintended shift over time toward being a binding capital constraint demonstrates that we need to think about regulatory policies in a dynamic way, based on the evolution of the banking and financial systems and the broader economy.
    Other examples of regulations that must take into account the impact of economic growth and inflation include elements of the G-SIB surcharge, as well as regulatory thresholds that define the broader categories of banks. Thresholds like the $10 billion definition of a “community bank” and the $700 billion in total assets and $75 billion for cross-jurisdictional activity separating Category II and III banks determine which regulatory requirements apply to each group.
    One way to prevent the original calibration from becoming divorced from the foundational policy decisions over time is to index the relevant G-SIB surcharge coefficients and regulatory thresholds to nominal gross domestic product. While approaches like indexing thresholds and requirements can make our regulations more robust and durable over time, we should also acknowledge the essential role of supervision as a tool to promote safety and soundness, and financial stability. Just as our capital requirements are intended to operate in a complementary manner, so do regulation and supervision act in a complementary way.
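    Indexing a threshold to nominal GDP, as suggested above, is a simple proportional rescaling; a minimal sketch with hypothetical figures:

```python
def indexed_threshold(base_threshold, base_gdp, current_gdp):
    """Scale a dollar threshold by nominal GDP growth since it was set."""
    return base_threshold * (current_gdp / base_gdp)

# Hypothetical example: a $10 billion threshold set when nominal GDP was
# $20 trillion, re-evaluated after nominal GDP grows to $28 trillion,
# yields a threshold of about $14 billion.
print(indexed_threshold(10e9, 20e12, 28e12))
```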
    These are only a handful of relevant examples, but they are representative of an effective approach to regulatory reform. Regulations should not be created in a static world of “set it and forget it.” The economy evolves over time, as do the banking and financial systems and the needs of businesses and consumers.
    Increasingly, regulators are expected to conduct a more thorough and detailed analysis as part of the ordinary rulemaking process, including a proposal’s costs and benefits. Yet, over time, we tend to devote fewer resources to maintaining our regulations. Maintenance of the regulatory system should include reviewing the basis for earlier policy decisions, considering whether the policies embedded in regulations have been distorted over time by market developments, and examining whether emerging issues in the market should prompt further review and revision.
    Closing Thoughts

    Thank you for the opportunity to join you today and to provide my views on the U.S. economic outlook and current regulatory proposals. In the United States, regulatory policy objectives are prescribed by law, and bank regulators focus primarily on promoting the safe and sound operation of U.S. banks, and financial stability. Despite this limited purpose, we must understand the consequences of regulations, which can extend well beyond the banking system. Recent trends—including providing more fact-based and analytical support for proposals—are a positive step in achieving responsible regulation.
    But we need a broad commitment to follow the approach I have just described. We must consider relevant data and information, identify the source of any problems or opportunity for greater efficiency, and then develop targeted and effective policy solutions and approaches.

    1. The views expressed here are my own and are not necessarily those of my colleagues on the Federal Reserve Board or the Federal Open Market Committee. Return to text
    2. See 12 CFR 249.3; 249.20 (defining categories of high-quality liquid assets based on asset characteristics). Return to text
    3. See Michelle W. Bowman, “Taking a Fresh Look at Supervision and Regulation (PDF),” (speech at the Georgetown University McDonough School of Business, Psaros Center for Financial Markets Policy, Washington, D.C., June 6, 2025). Return to text
    4. Board of Governors of the Federal Reserve System, “Federal Reserve Board Announces that the Temporary Change to its Supplementary Leverage Ratio (SLR) for Bank Holding Companies Will Expire as Scheduled on March 31,” press release, March 19, 2021, (“To ensure that the SLR—which was established in 2014 as an additional capital requirement—remains effective in an environment of higher reserves, the Board will soon be inviting public comment on several potential SLR modifications. The proposal and comments will contribute to ongoing discussions with the Department of the Treasury and other regulators on future work to ensure the resiliency of the Treasury market.”). Return to text
    5. See Board of Governors of the Federal Reserve System, Financial Stability Report (PDF) (Washington, D.C., April 2025), 10–11. Return to text
    6. Board of Governors, Financial Stability Report, at 32. Return to text
    7. See Board of Governors, Financial Stability Report, at 3. Return to text
    8. See Office of the Comptroller of the Currency and Federal Reserve System (2018), “Regulatory Capital Rules: Regulatory Capital, Enhanced Supplementary Leverage Ratio Standards for U.S. Global Systemically Important Bank Holding Companies and Certain of Their Subsidiary Insured Depository Institutions; Total Loss-Absorbing Capacity Requirements for U.S. Global Systemically Important Bank Holding Companies,” Federal Register, vol. 83 (April 19), pp. 17317–27. Return to text
    9. See Office of the Comptroller of the Currency and Federal Reserve System (2018), “II. Revisions to the Enhanced Supplementary Leverage Ratio Standards,” Federal Register, vol. 83 (April 19), p. 17319, paragraph 3: “Leverage capital requirements should generally act as a backstop to the risk-based requirements. If a leverage ratio is calibrated at a level that makes it generally a binding constraint through the economic and credit cycle, it can create incentives for firms to reduce participation in or increase costs for low-risk, low-return businesses.” Return to text
    10. See, for example, Federal Reserve System (2020), “Temporary Exclusion of U.S. Treasury Securities and Deposits at Federal Reserve Banks from the Supplementary Leverage Ratio (PDF),” Federal Register, vol. 85, (April 14), pp. 20578–79. Return to text
    11. For more information, see the press release in note 4 indicating that the Board would seek comment on changes to the SLR. Return to text

    MIL OSI USA News

  • MIL-OSI USA: NIST Releases Extensive Video Update on Champlain Towers South Investigation

    Source: US Government research organizations

    NCST Champlain Tower South Collapse Investigation | Technical Update (June 2025)

    The National Institute of Standards and Technology’s (NIST’s) National Construction Safety Team (NCST) has released an extensive video update on its investigation into the June 2021 partial collapse of the Champlain Towers South building in Surfside, Florida. The update reviews the investigation’s history and progress, shares preliminary findings, and highlights potential impacts that this complex investigation could have on building codes and standards.

    In the video, investigative lead Judith Mitrani-Reiser and co-lead Glenn Bell explain how the team has determined that some of the hypotheses they are considering for how the failure occurred have a higher likelihood than others. The team has reviewed two dozen hypotheses, relying on extensive physical evidence, imagery, historical records, witness interviews, remote sensing data, laboratory testing, computer modeling and more.  

    “As we have shared in previous updates, there were many design and construction problems that weakened the building from the start,” said Mitrani-Reiser. “These deficiencies posed many potential failure initiation possibilities both in the pool deck and the tower, and each is being carefully considered so that we can narrow our focus to the most likely ones and seek to rule out others.”

    The two experts describe the extensive planning and coordination that helped the team systematically work through analyses, testing and modeling to arrive at its preliminary findings. They note that from NIST’s initial deployment of a preliminary reconnaissance team in the first 48 hours after the collapse, this investigation has relied on collaboration with local authorities and expertise from across the federal government, private industry and academia.

    Researchers used a saw to cut into a steel-reinforced concrete slab following a slab-column connection test at the University of Washington. The cut reveals shear cracking and failure at the surface.

    Credit: NIST

    Higher-Likelihood Collapse Hypotheses

    Bell walks viewers through three hypotheses with higher likelihood, beginning with the failure of one of the typical slab-column connections in the pool deck. He describes factors that contributed to low margins of safety in the pool deck, including understrength of the building’s original structural design relative to the requirements of the building code. Additionally, he notes that steel reinforcement was not placed where it should have been, leading to significantly diminished strength of the pool deck slab and slab-column connections. He also points to heavy planters that were not in the original design, as well as a rehabilitation of the pool deck decades earlier that added sand and pavers, increasing the load on a system that was already functionally and structurally inadequate. The team also found corrosion of the steel reinforcement in the pool deck concrete, which can weaken the slabs and slab-column connections.  

    “While there is strong evidence that the collapse initiated in the pool deck, we have not ruled out a failure initiation in the tower,” said Bell. “The fact that the pool deck collapsed before the tower does not preclude the possibility that there was some initiating event in the tower that set off the collapse of the very vulnerable pool deck.”

    Some of the design, construction and degradation issues found in the pool deck are also evident in the building tower and present other plausible hypotheses that the team continues to pursue. In addition to the misplacement of steel reinforcement within slabs and columns, some basement columns had prolonged exposure to water due to ponding and flooding in the garage. This can cause corrosion of the steel reinforcement and deteriorate the concrete. The team therefore also considers it a higher likelihood that the collapse was initiated by either the diminished strength of the columns in the tower or the failure of a slab-beam-column joint in the southernmost column line of the east part of the tower, close to where the tower joined the pool deck.

    Replicas of Champlain Towers South building components were tested until failure at the University of Minnesota. This image shows a failed connection between the pool deck slab-beam and the slab-drop-beam.

    Credit: NIST

    Lower-Likelihood Collapse Hypotheses

    The investigation team determined that there is a lower likelihood that the partial collapse was initiated by two potential problems beneath the building: voids in the limestone known as “karst,” or failure of the foundation piles. Mitrani-Reiser explains how satellite data was used to look for gradual settling or sinking of the ground in the general area of Champlain Towers South. None was seen in the area in the five years before the partial collapse, nor was localized sinking observed near the building in the days leading up to the tragedy.

    The team found no evidence of karst in the limestone on which the foundation sits, and careful studies of the limestone showed it has features that actually inhibit the formation of karst. Team members calculated that the foundation pile capacity shown on the design drawings was sufficient to carry the building loads and laboratory and nondestructive testing of pile concrete showed adequate material strength. Finally, the basement slab did not show any distress or trauma that would indicate karst formation or pile failure, such as cracking or sinking.

    Bell also notes as a lower likelihood scenario the separation of the pool deck/street-level slab from the south basement wall.  

    Preliminary Findings Rely on Broad Range of Evidence  

    In the past few months, the team has updated the collapse timeline based on interviews and records, modeling results, and new analyses of audio and digital evidence.  

    Although there is very little video from the night of the collapse, every image was meticulously analyzed to determine its precise perspective and to identify clues that could inform the timeline, such as changes in the reflections of light on building surfaces like walls.

    Mitrani-Reiser describes how team members made a breakthrough by using a novel approach to analyzing videos. They compared the soundwaves of the audio recorded by two videos from different parts of the building to find and correlate patterns of sounds in each video. This helped pinpoint when the videos overlapped in time and provided insight into what was happening in the building by comparing the building’s movement at the same time on two different floors. All audiovisual evidence in NIST’s possession has now been timestamped.
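    The alignment idea described here, matching sound patterns across two recordings to find their relative timing, can be illustrated with a standard cross-correlation. This sketch is a generic illustration of the technique, not NIST’s actual tooling:

```python
import numpy as np

def estimate_offset_seconds(audio_a, audio_b, sample_rate):
    """Estimate the lag (in seconds) of a shared sound between two
    recordings by locating the peak of their cross-correlation.
    A negative result means the sound occurs later in audio_b."""
    corr = np.correlate(audio_a, audio_b, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(audio_b) - 1)
    return lag_samples / sample_rate

# Synthetic demo: the same 0.2-second tone appears 0.5 seconds later
# in recording B than in recording A.
sr = 1000  # samples per second
tone = np.sin(2 * np.pi * 50 * np.arange(200) / sr)
a = np.zeros(2 * sr); a[300:500] = tone
b = np.zeros(2 * sr); b[800:1000] = tone
print(estimate_offset_seconds(a, b, sr))  # -> -0.5
```

    In practice the recordings would first be resampled to a common rate and filtered, but the peak-of-correlation idea is the same.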

    Mitrani-Reiser also notes the importance of social science research to develop carefully crafted interviews that have helped to elicit important memories not reported elsewhere. Information gained in these interviews has helped confirm the collapse timeline, in tandem with the video evidence.  

    A NIST NCST investigator examines the underside of a test specimen following a slab-beam-column test at the University of Minnesota. 

    Credit: NIST

    Implications for the Future

    “Two clear questions coming out of this investigation are why the design and construction problems were not discovered when Champlain Towers South was built, and how do we evaluate the structural safety of existing buildings?” said Bell.

    While the video presentation does not offer recommendations for changes to codes or practice, it does highlight some areas that industry experts could consider. These include how special inspections that are mandated for safety might impact construction quality control by giving builders a false sense of security that someone else will catch their errors later.

    Mitrani-Reiser also shares that the investigation found no records from the original construction of the building, and few from its early life, and notes the importance of records retention going beyond initial drawings to include “quality assurance records and, particularly, peer review reports where they exist.”

    Finally, Mitrani-Reiser calls on the engineering and construction professions to take seriously the apparent lack of quality control and quality assurance found in the case of Champlain Towers South. She noted: “This tragic event has revealed flaws in our systems, and quality is at the heart of it.”

    The team is finalizing its analysis and has begun drafting its investigation report, which is expected to be completed in 2026. 

    MIL OSI USA News

  • MIL-OSI Russia: “Ahead of Time Together”: Winners and Prize-Winners of “Highest Standard” Awarded in Moscow

    Translation. Region: Russian Federation

    Source: State University Higher School of Economics

    On June 13, the HSE Cultural Center hosted a ceremony honoring the winners of the All-Russian School Olympiad “Highest Standard” (Vysshaya Proba). Of the more than 4,500 winners and prize-winners, about 700 schoolchildren from 67 regions of Russia took part. The best of the best were recognized in special nominations established by the Olympiad’s organizing committee. For the second year in a row, the “Highest Standard” Olympiad has been held with the support of Sber.

    Before the ceremony, a festival program was organized in the HSE atrium, which for an hour and a half became the main city square of HSE City with street activities and artists, a lounge area and elegant pavilions, flags and garlands.

    Here you could get a consultation from a neuro-fortune teller, play table football and hockey, solve puzzles, dance and take part in creating living paintings. In the chill-out zone of Sber, which is supporting the Olympiad for the second season, schoolchildren played computer games, ate ice cream and got answers to questions about building a dream career, while in the VR greenhouse of the ROST Group of Companies, the Olympiad’s partner in biology, they picked tomatoes, drank smoothies and tried snacks with tomato and cucumber flavours.

    In the Photo Mosaic zone, participants were invited to help create an “HSE” inscription from hundreds of photographs of Olympiad diploma winners. Those who wished could take part in a quest introducing HSE or in the game “What? Where? When?”, or continue to build their intellectual potential at the master class “What Can Be Learned from Social (and Not Only) Network Analysis?” or the training “Creative Worlds: How Ideas Turn into Collaborations”.

    The guests then moved to the Center of Cultures. The participants of the ceremony honoring the diploma holders (similar events were previously held in Saint Petersburg, Perm and Nizhny Novgorod) were welcomed by the First Vice-Rector of the National Research University Higher School of Economics, Vadim Radaev.

    “The ‘Highest Standard’ Olympiad will soon turn 30, and every year it becomes better and better. It already includes 30 profiles, including two new ones – ‘Industrial Programming’ together with Yandex and ‘History of Art’ together with the Pushkin Museum. And of course, the competition is growing. This year, more than 50 thousand people took part, which makes your victory all the more significant. There are more than 4.5 thousand winners and prize-winners, and even more diplomas, because some of you managed to win the Olympiad in several profiles,” said Vadim Radaev.

    The First Vice-Rector also thanked the partners and the team of organizers, “who run the ‘Highest Standard’ at the highest level.”

    Olga Tsukanova, Managing Director and Head of the Academic Partnerships Directorate at Sber, joined in the congratulations. She emphasized that the Higher School of Economics offers a wide range of disciplines, and that Olympiad winners go on to work in a variety of fields.

    “We will be glad to see you among our employees, clients and partners, and we are ready to support those who see the future, who are moving towards it and who are ready to lead others. Invitations to internships at Sber go not only to students but also to schoolchildren, who can try their hand in our product teams, tinkering with the products we release to the market. And students, especially after two years of study, once they have a solid base, do great projects at Sber,” said Olga Tsukanova.

    The organizing committee of the Olympiad established special nominations in which the best of the best were recognized: “Everest of Science” (diplomas in five or more profiles), “Conquering Olympus” (scores of 90 points or higher in a profile), “Victory Marathon” (prize places for four or more years running), “Ahead of Time” (completing tasks set two grades above one’s own, and 7th-grade tasks in the case of sixth-graders) and “HSE Olympiads” (winning several intellectual competitions of the National Research University Higher School of Economics). The laureates in these nominations, as well as two Olympiad diploma winners who celebrated their birthday on June 13, were presented with diplomas, medals and gifts on stage.

    Danil Fedorov, Deputy Vice-Rector and Head of the Directorate for the Development of Intellectual Competitions at HSE University, congratulated the winners in the “Everest of Science” nomination and urged them to apply to a university where it is difficult to study, reminding them that the Higher School of Economics is exactly such a university.

    Olga Tsukanova invited the winners in the Conquering Olympus nomination to become students of the AI360: Artificial Intelligence Engineering track of the bachelor’s program Applied Mathematics and Computer Science, which is being implemented at HSE jointly with Sber and Yandex.

    Ekaterina Kolesnikova, Chair of the Methodological Commission for the Foreign Languages profile and Head of the HSE School of Foreign Languages, compared preparing for the Olympiad to training in sports. “The winners in the ‘Victory Marathon’ nomination know very well that those who do not stop when things get difficult, who act at the limit of their capabilities, win,” she noted.

    The winners in the “Ahead of Time” nomination were announced by Anna Korovko, Senior Director for Main Educational Programs at the National Research University Higher School of Economics, and Denis Stukal, Chair of the Methodological Commission for the Political Science profile and Dean of the Faculty of Social Sciences. Anna Korovko promised that by the time they finish 11th grade, studying at HSE will have become even more challenging, and Denis Stukal, himself a former Olympiad participant, called them true leaders who not only challenged those a year or two older than them, but also succeeded in doing so.

    “You have a great future ahead of you, and I hope that at some point it will become inextricably linked with our university, because HSE is a university that is also ahead of its time. Let’s get ahead of it together and move only forward,” Denis Stukal concluded.

    Daria Tabashnikova, Chair of the Methodological Commission for the Economics profile, announced the winner in the “HSE Olympiads” brand nomination, Anastasia Usenko, who won the “Highest Standard” Olympiad, the “In Your Own Words” essay championship and the “Highest Aerobatics” competition. “Collecting awards, diplomas and admission preferences is great, but it’s even cooler when a person tries different things and succeeds at them,” Daria Tabashnikova emphasized.

    The results of the event were summed up by the Director for Work with Gifted Students at the National Research University Higher School of Economics, Tamara Protasevich.

    “The ‘Highest Standard’ season now ending is the fifteenth – an anniversary one for the team responsible for running it. The year 2025 is rich in anniversaries overall: 5 years of the All-Russian Case Championship, 10 years of ‘Highest Aerobatics’. And ‘Highest Standard’ is our largest project: registration began last August, and diplomas are being awarded only now, in June. The Olympiad is constantly in our focus, and we are constantly improving it,” said Tamara Protasevich.

    She gave examples of feedback from Olympiad participants, which those present in the hall agreed with, raising glowing hearts: “The level of tasks is decent, difficult, but interesting,” “The atmosphere is pleasant, comfortable, not overwhelming, allows you to enjoy completing the Olympiad tasks,” “Organization – everything is clear and well thought out, prompt responses to questions, caring, friendly volunteers.”

    Tamara Protasevich also announced one more nomination – “Recognition of the Organizers” – whose winners were the best volunteers: HSE University students who over the past three years helped run “Highest Standard” and other intellectual competitions of the university. “Without these guys, not a single project of our directorate would have happened. They are the best!” she concluded.

    The ceremony of honoring the diploma winners ended with a collective performance of the student anthem “Gaudeamus”, after which all its participants were awarded the Olympiad diplomas and medals in the lobby of the Center of Cultures. Some of them shared their impressions with the news service “Vyshka.Glavnoe”.

    “The Highest Standard” is a combination of all the best that can be found at the Olympiad, says Erland Glukhov, a 10th-grader at the AMTEK General Education Lyceum in Cherepovets. “I participated in the in-person stage in Moscow, my friends in St. Petersburg and Nizhny Novgorod, and everyone was happy with the organization of the process and the support of the participants. I especially like the tasks: they are designed in an unconventional way, they include interesting elements, and they are really interesting to solve.”

    According to Erland, behind every Olympiad victory there is, first and foremost, hard work – not only your own, but also that of your mentors – as well as the support of your parents.

    “When I was doing assignments in the Law profile, I had the feeling that I was in some other universe the whole time, that I fell asleep in the first minute and woke up in the last minute, when everything was already done,” said Alexander Gimpelson, a 10th-grade student at School No. 7 “Russian Classical School” in Ryazan. “The assignments required a creative approach, and it was always necessary not only to reproduce the provisions of the laws, but also to understand them, evaluate them from different angles, and show how they can be applied in practice.”

    In preparation for the Olympiad, Alexander mastered scientific literature, thanks to which “these complex adverbial participial phrases, thirty subordinate clauses in one sentence of the law became lively and understandable.” In a year, he plans to enroll in the Faculty of Law at the National Research University Higher School of Economics and subsequently specialize in the field of private law.

    Polina Platonova, an 11th-grader from the Vladimir region, has been participating in Olympiads since the 4th grade. This year she went to Nizhny Novgorod for the “Highest Standard” finals, and she associates the in-person round with both a celebration and a tense competition. She is considering applying to HSE University – Nizhny Novgorod and also links her future professional development with jurisprudence.

    Albina Markaryan, an 11th-grader from Voronezh, participated in the final round in her hometown and will be applying to the HSE for a bachelor’s degree in International Relations this year. Before the awards ceremony, she walked around the atrium (“everything was organized wonderfully, lots of competitions and entertainment”), she liked everything in the university building, and she has no doubt that if she is accepted, these feelings will not only remain, but will also intensify.

    Please note: This information is raw content directly from the source of the information. It is exactly what the source states and does not reflect the position of MIL-OSI or its clients.

    MIL OSI Russia News

  • MIL-OSI Global: Embarrassed? Why this feeling might actually be good for you

    Source: The Conversation – UK – By Laura Elin Pigott, Senior Lecturer in Neurosciences and Neurorehabilitation, Course Leader in the College of Health and Life Sciences, London South Bank University

    Embarrassment is generated by a network of different brain regions working together. Kues/Shutterstock

    Picture this: it’s your first day at a new job. You’re about to introduce yourself to a large group of people you’ll be working with – and promptly fall flat on your face. Not exactly the entrance you had in mind.

    We’ve all cringed at moments like these — whether they happen to us or to others. That instant, full-body wince, and the shared, silent relief that it didn’t happen to you.

    Embarrassment is a universal, visceral and oddly contagious emotion. It’s what psychologists call a self-conscious emotion. This means it hinges on our awareness of ourselves through others’ eyes.

    Unlike shame or guilt, embarrassment isn’t usually moral — it’s about looking awkward or inept. Context matters too. We feel more embarrassed in front of people whose opinions we value or who hold power.

    Yet while embarrassment may feel uncomfortable, it actually has surprising social and psychological benefits.


    Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.


    Empathy and social connection

    Evolutionary psychologists believe embarrassment developed as a social corrective – a way to acknowledge mistakes, signal remorse and reduce conflict within groups.
    This instinct probably helped our ancestors stay in the group, which was critical for survival. People who showed embarrassment were seen as more trustworthy and cooperative.

    In this way, embarrassment can invite empathy and forgiveness, strengthening relationships. It signals that we care what others think, promoting approachability and emotional closeness. So, while it’s uncomfortable in the moment, embarrassment probably evolved to keep communities cohesive.

    Embarrassment is also contagious. Most of us have cringed on someone else’s behalf. This shows how deeply tuned our social brains are. We empathise with others’ awkwardness, often rushing to reassure them. This empathy helps preserve harmony and can also help us build connection with others.

    Embarrassment signals remorse and can invite empathy from others. fizkes/Shutterstock

    Trust and virtue

    Visible signs of embarrassment – such as blushing or stumbling over words – are often seen as signs of honesty and generosity. One study found that people who show embarrassment are judged to be more trustworthy and sociable.

    Blushing may have evolved precisely as a visible, honest signal of humility that others instinctively trust. Experiments even show we’re more likely to forgive someone who looks embarrassed than someone who acts indifferent.

    Learning social norms

    Forgetting you’re not on mute in a Zoom meeting, sending a message to the wrong group chat or realising your shirt’s inside out after an important meeting. These moments may be minor, but our brains still process them as social threats – albeit small ones.

    In this way, embarrassment helps us adhere to social norms and expectations – many of which are unwritten and only discovered once we’ve flubbed them by mistake. Embarrassment acts as an internal guide, helping us remember social missteps and encouraging us to conform to shared expectations – not out of shame, but because it feels right. It also nudges us whenever we stray near the edges of what’s socially comfortable, helping us course-correct swiftly.

    The way we react to an embarrassing situation is also important in helping us learn from our experiences. Many of us laugh nervously when embarrassed. This effectively reframes the incident from threatening to harmlessly amusing in our minds.

    Humility and authenticity

    Embarrassment keeps egos in check, signals emotional intelligence and makes us more relatable. In a curated world, an awkward moment can humanise us and build credibility.

    However, while moderate embarrassment is healthy and constructive, excessive fear of it can become harmful – crossing into social anxiety.

    Your brain on embarrassment

    Embarrassment isn’t generated by a single “embarrassment centre” in the brain. Rather, it’s generated by a network of different brain regions working together.

    The medial prefrontal cortex (mPFC) is a region in the front of the brain that’s active during self-reflection and when thinking about how others perceive us. It’s also involved in storing social memories – which is why an embarrassing memory, even from years ago, can still make you cringe when it pops into your head.

    The anterior cingulate cortex (ACC) is the reason you blush, your heart pounds and you feel sweaty when you’re deeply embarrassed. The ACC activates your “fight or flight” reaction. When the ACC fires up, it also helps us adjust our behaviour – aiding in impulse control and helping us learn from the mistake so we don’t do it again.

    The amygdala is the brain’s emotional alarm bell. When we get embarrassed, the amygdala registers the emotional intensity of the situation – especially the fear of being seen negatively.

    People with social anxiety show an imbalance between the mPFC and amygdala. Their mPFC is underactive (so they’re less able to rationalise others’ perspectives), while their amygdala is overactive (causing excessive fear signals). This combination makes it hard for them to accurately gauge social situations, often interpreting them as more threatening and embarrassing than they really are.

    Finally, the insula, a region located deep in the brain, helps us tune into our emotions and bodily states. This creates that gut-level discomfort we feel during embarrassing moments. All these regions work in concert during an embarrassing moment.

    Embarrassment is uncomfortable, yes – but it’s also a reminder that we care about others and want to belong. It’s part of what makes us human. So the next time you experience an embarrassing moment, try to laugh it off and remember that the moment is helping us to learn and connect.

    Laura Elin Pigott does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Embarrassed? Why this feeling might actually be good for you – https://theconversation.com/embarrassed-why-this-feeling-might-actually-be-good-for-you-259094

    MIL OSI – Global Reports

  • MIL-OSI Global: Why social media injury recovery videos could do more harm than help

    Source: The Conversation – UK – By Craig Gwynne, Senior Lecturer in Podiatry, Cardiff Metropolitan University

    Studio Romantic/Shutterstock

    When Kim Kardashian glided into the launch party of her NYC SKIMS boutique on a knee scooter, a mobility aid for people with lower leg injuries – stiletto on one foot, designer cast on the other – she wasn’t just managing an injury. She was creating content.

    And she’s far from alone.

    In 2024, rapper Kid Cudi turned his own broken foot into a viral storyline, posting updates of himself on crutches and in a surgical boot after a mishap at the Coachella festival in California. These high profile injuries don’t just invite sympathy; they generate style points, followers and millions of views.

    But as injury recovery morphs into online entertainment, it raises an important question: is this trend helping people heal or encouraging risky behaviour that can delay recovery?



    Open any social media feed and you’ll likely stumble across videos of people hobbling through supermarkets, dancing on crutches, or sweating through workouts in a medical boot. Hashtags like #BrokenFootClub and #InjuryRecovery have spawned thriving online communities where users share advice, frustrations and recovery milestones. For many, rehab has become a public performance, complete with triumphant comeback narratives.

    And it’s not just celebrities. All sorts of people are turning their injuries, from hiking sprains to post-surgery recoveries, into digital diaries. Some offer helpful tips or emotional support, while others focus on fast-tracked progress, sometimes glossing over the slower, necessary steps that true healing demands.

    A broken foot used to mean rest. Now it can mean millions of views.

    Watching others navigate recovery can be deeply reassuring. Seeing someone joke about wobbling to the bathroom or demonstrate how to climb stairs with crutches can ease the loneliness that often comes with injury.

    And some creators are genuinely getting it right. Increasing numbers of healthcare professionals, from orthopaedic surgeons to physiotherapists and podiatrists, now use social media platforms such as TikTok and Instagram to share safe exercises, realistic timelines and expert tips on navigating recovery. For people who struggle to access in-person care, this clinically sound content can be a lifeline.

    But not all content is created equal – and some can do more harm than good.

    When rest gets rebranded

    But on social media, rest isn’t always part of the narrative. The most viewed recovery videos often aren’t posted by healthcare professionals but by influencers eager to showcase rapid progress. Some discard crutches too soon, hop unaided, or attempt high-impact exercises while their bodies are still vulnerable – all for the sake of engagement.

    What’s often missing is the unglamorous reality: swelling, setbacks, rest and the slow, sometimes frustrating, pace of real healing. Bones, tendons and ligaments aren’t impressed by likes or follower counts. Healing requires time and carefully structured loading: a gradual, deliberate increase in weight bearing and movement to rebuild strength without risking re-injury.

    Ignoring this process can lead to delayed healing, chronic pain, re-injury, or even long term joint and muscle complications that can affect the knees, hips, or back.

    And this isn’t just speculation. A 2025 study examining TikTok content on acute knee injuries found that most videos were produced by non-experts and often contained incomplete or inaccurate information. Researchers warned that this misinformation may not only distort patient expectations but also lead to decisions that hinder proper recovery. Similar trends were found in anterior cruciate ligament knee injury videos, where dangerous, non-evidence based practices were widely promoted to millions of viewers.

    Healthcare professionals are now seeing the ripple effects firsthand. Many physiotherapists and podiatrists report a growing number of patients arriving with unrealistic expectations shaped by social media, rather than medical advice. Some patients feel frustrated when their recovery doesn’t match the rapid progress they see online. Others attempt risky exercises before their bodies are ready, setting themselves back.

    The World Health Organization has also flagged the dangers of online health misinformation. When social media shortcuts replace professional care, patients risk not only slower recovery but potentially more complex medical problems, while clinicians are left managing the aftermath.

    Recovery isn’t a race

    While supportive online communities can be a valuable source of comfort, the pressure to “bounce back” quickly can be dangerous. Viral videos and celebrity recoveries can create a toxic sense of comparison, tempting people to rush their own healing process.

    Research shows that the psychological drive to return to activity, particularly among younger adults, can reduce rehab compliance and sharply increase the risk of re-injury. True recovery isn’t governed by trending hashtags; it follows a personal, biologically determined timeline that requires patience, rest, and carefully structured rehabilitation.

    Seeing stars like Kim Kardashian with a designer cast might make injury look fashionable. But for most people, a broken foot is not glamorous; it’s weeks of awkward movement, discomfort, adaptation and quiet, steady healing.

    Mobility content can inspire, motivate, and connect – but it’s not a road map for your own recovery. If you’re injured, approach online content with curiosity, not comparison. Learn from others, but listen to your body. Healing is personal. Your recovery won’t be dictated by views, likes, or viral trends – it will unfold on your body’s own timetable.

    Craig Gwynne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why social media injury recovery videos could do more harm than help – https://theconversation.com/why-social-media-injury-recovery-videos-could-do-more-harm-than-help-258533

    MIL OSI – Global Reports

  • MIL-OSI Global: Where did the wonder go – and can AI help us find it?

    Source: The Conversation – UK – By Lucy Gill-Simmen, Vice Dean for Education & Student Experience, Royal Holloway University of London

    French philosopher René Descartes crowned human reason in 1637 as the foundation of existence: Cogito, ergo sum – I think, therefore I am. For centuries, our capacity to doubt, question and think has been both our compass and our identity. But what does that mean in an age when machines can “think”, generate ideas, write novels, compose symphonies and, increasingly, make decisions?

    Artificial intelligence (AI) has brought a new kind of certainty – quick, data-driven, at times frighteningly precise and at times alarmingly wrong. From Google’s Gemini to OpenAI’s ChatGPT, we live in a world where answers can arrive before the question is even finished. AI has the potential to change not just how we work, but how we think. As our digital tools become more capable, we may well be justified in asking: where did the wonder go?

    We have become increasingly accustomed to optimisation. From using apps to schedule our days to improving how companies hire staff through AI-powered recruitment tools, technology has delivered on its promise of speed and efficiency.


    This article is part of our State of the Arts series. These articles tackle the challenges of the arts and heritage industry – and celebrate the wins, too.


    In education, students increasingly use AI to summarise readings and generate essay outlines; in healthcare, diagnostic models match human doctors in detecting disease.

    But in our pursuit of optimisation, we may have left something essential behind. In her book The Power of Wonder (2023), author Monica Parker describes wonder as a journey, a destination, a verb and a noun, a process and an outcome.

    Lamenting how “modern life is conditioning wonder-proneness out of us”, the author suggests we have “traded wonder for the pale facsimile of electronic novelty-seeking”. And there’s the paradox: AI gives us knowledge at scale, but may rob us of the humility and openness that spark genuine curiosity.

    AI as the antidote?

    But what if AI isn’t the killer of wonder, but its catalyst? The same technologies that predict our shopping habits or generate marketing content can also create surreal art, compose jazz music and tell stories in different ways.

    Tools like DALL·E, Udio.ai and Runway don’t just mimic human creativity – they expand our creative capacity by translating abstract ideas into visual or audio outputs instantly, opening new forms of self-expression and speculative thinking to anyone.

    The same power that enables AI to open imaginative possibilities can also blur the line between fact and fiction, which is especially risky in education where critical thinking and truth-seeking are paramount. That’s why it’s essential that we teach students not just to use these tools, but to question them. Teaching people to wonder isn’t about uncritical amazement – it’s about cultivating curiosity alongside discernment.

    Educators experimenting with AI in the classroom are starting to see this potential, as my recent work in the area has shown. Rather than using AI merely to automate learning, we are using it to provoke questions and to promote creativity.

    When students ask ChatGPT to write a poem in the voice of Virginia Woolf about climate change, they learn how to combine literary style with contemporary issues. They explore how AI mimics voice and meaning, then reflect on what works and what doesn’t.

    When they use AI tools to build brand storytelling campaigns, they practise turning ideas into images, sounds and messages and learn how to shape stories that connect with audiences. Students are not just using AI, they’re learning to think critically and creatively with it.

    This echoes Brazilian philosopher Paulo Freire’s critique of the “banking” concept of education: rather than merely depositing facts into students, educators should spark critical reflection. AI, when used creatively, can act as a dialogue partner – one that reflects back our assumptions, challenges our ideas and invites deeper inquiry.

    The research is mixed, and much depends on how AI is used. Left unchecked, tools like ChatGPT can encourage shortcut thinking. When used purposefully as a dialogue partner – prompting reflection, testing ideas and supporting creative inquiry – studies show it can foster deeper engagement and critical thinking. The challenge is designing learning experiences that make the most of this potential.

    A new kind of curiosity

    Wonder isn’t driven by novelty alone, it’s about questioning the familiar. Philosopher Martha Nussbaum describes wonder as “taking us out of ourselves and toward the other”. In this way, AI’s outputs have the potential to jolt people out of cognitive ruts and into new realms of thought, causing them to experience wonder.

    It could be argued that AI becomes both mirror and muse. It holds up a reflection of our culture, biases and blind spots while nudging us toward the imaginative unknown at the same time. Much like the ancient role of the fool in King Lear’s court, it disrupts and delights, offering insights precisely because it doesn’t think like humans do.

    This repositions AI not as a rival to human intelligence, but as a co-creator of wonder, a thought partner in the truest sense.

    Descartes saw doubt as the path to certainty. Today, however, we crave certainty and often avoid doubt. In a world overwhelmed by information and polarisation, there is comfort in clean answers and predictive models. But perhaps what we need most is the courage to ask questions, to really wonder about things.

    The German poet Rainer Maria Rilke once advised: “Be patient toward all that is unsolved in your heart and try to love the questions themselves.”

    AI can generate perspectives, juxtapositions and “what if” scenarios that challenge students’ habitual ways of thinking. The point isn’t to replace critical thinking, but to spark it in new directions. When artists co-create with algorithms, what new aesthetics emerge that we’ve yet to imagine?

    And when policymakers engage with AI trained on other perspectives from around the world, how might their understanding and decisions be transformed? As AI reshapes how we access, interpret and generate knowledge, this encourages rethinking not just what we learn, but why and how we value knowledge at all.

    Educational philosophers such as John Dewey and Maxine Greene championed education that cultivates imagination, wonder and critical consciousness. Greene spoke of “wide-awakeness”, a state of being in the world.

    Deployed thoughtfully, AI can be a tool for wide-awakeness. In practical terms, it means designing learning experiences where AI prompts curiosity, not shortcuts; where it’s used to question assumptions, explore alternatives, and deepen understanding.

    When used in this way, I believe it can help students tell better stories, explore alternate futures and think across disciplines. This demands not only ethical design and critical digital literacy, but also an openness to the unknown. It also demands that we, as humans, reclaim our appetite for awe.

    In the end, the most human thing about AI might be the questions it forces us to ask. Not “What’s the answer?” but “What if …?” And in that space, somewhere between certainty and curiosity, wonder returns. The machines we built to do our thinking for us might just help us rediscover it.

    Lucy Gill-Simmen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Where did the wonder go – and can AI help us find it? – https://theconversation.com/where-did-the-wonder-go-and-can-ai-help-us-find-it-258490

    MIL OSI – Global Reports

  • MIL-OSI Global: Society needs a systems update to cope with climate crisis – my new film explains why

    Source: The Conversation – UK – By James Dyke, Associate Professor in Earth System Science, University of Exeter

    The climate and ecological crisis is one of the greatest challenges humanity has ever faced. If the world fails to address it, and over the rest of this century we continue to burn fossil fuels and pump even more carbon dioxide into the atmosphere, we’ll face catastrophe. On this much, almost all governments agree (with some notable exceptions such as the US).

    Even the world’s largest oil and gas companies now acknowledge that their products are behind the alarming increase in global temperatures and that we will have to transition to alternative fuels. Eventually.

    In some oil and gas firms’ net zero policies you will often see the word “eventually” or its equivalent used. Yes, they accept that the age of fossil fuels will be over, but they don’t give any end date. In fact, with continued expansion of new oil and gas fields they appear to give every indication of continuing to be fossil fuel companies for the foreseeable future.

    Will such firms actually phase out coal, oil and gas at the rate required to avoid dangerous climate change? How quickly does that now have to happen? Immediately.

    At current rates of emissions, the window to have a 50:50 chance of limiting warming to 1.5°C will close in as little as six years. Given that global emissions are not stabilising but in fact going up, we are in the process of overshooting 1.5°C and heading deep into dangerous climate change territory.

    Does that mean it’s game over, that the climate catastrophes we fear will come to pass? Thinking about these sorts of systemic risks forms the basis of much of my current research. This includes some pretty alarming analysis on how societies can react to challenges such as climate change in ways that make the situation much worse.

    But herein lies a potentially powerful source of hope for the future because what we do as individuals and members of communities and countries will make all the difference. That’s what was on my mind when I started working on a new climate change documentary with filmmaker Paul Maple.

    Radical reductions

    Our new film System Update: Rebooting Our Future argues that, while we may have run out of time to avoid dangerous climate change, we are now only beginning to see how we can not just avoid further environmental damage but make a much better world for all of humanity. To do that, we must go beyond the incremental and timid policies of today. We need to be radical and dig into the drivers of climate change.

    Take economic growth, for example. You will not find a political party in power in any industrialised nation that does not have continued economic growth as one of its core objectives. Economic performance is often the main way politicians are judged. That’s why threats of a recession lead news reports.

    In System Update, I ask what this economic growth is for, if it continues to expand energy and material consumption and drive us further towards climate and ecological collapse.

    If our economic and political systems cannot deliver radical emissions reductions in a sustainable and fair way, then they need to be rebooted. Rather than policies being orientated towards maximising economic growth, we can instead question how the current goods and services an economy produces are used.

    How can local communities be empowered to make themselves more resilient to climate change while reducing their emissions? Where can citizen assemblies strengthen our democracies and help foster wider support for ambitious climate action? These assemblies work by recruiting a representative cross-section of society who hear from a range of climate experts, and then work together to provide policy recommendations.

    I put such questions to an amazing group of activists, academics and policymakers. We quickly discovered from economic anthropologist Jason Hickel that there is no end of new thinking about economics.

    Lawyer and key architect of the Paris agreement Farhana Yamin recounted the epic battle that she and others have been waging with politicians to get them to understand and act on some of the fundamental truths of climate change. Researcher and strategist Laurie Laybourn spoke of the need for leaders to understand how this gathering storm of climate change demands new mindsets.

    Climate change adaptation expert Kathryn Brown made the case for a rapid increase in efforts to protect communities from environmental change, while climate historian Alice Bell put today’s debates into the wider context. Climate campaigner Max Wakefield and climate justice activist Dylan Hamilton connected the big-picture elements of the climate crisis to everyday actions, like what you buy and how you travel, as well as to deeper engagement with politics.

    It’s easy to feel overwhelmed by the scale of climate change. There is a constant stream of bad news about rising temperatures and extreme weather. What I hope System Update shows is that there is no end of ideas for how the worst outcomes could be averted, and how you could put them into practice.

    We will win. The age of fossil fuels is ending. The question now is, how fast do you want to make that happen?

    James Dyke does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Society needs a systems update to cope with climate crisis – my new film explains why – https://theconversation.com/society-needs-a-systems-update-to-cope-with-climate-crisis-my-new-film-explains-why-257503

    MIL OSI – Global Reports

  • MIL-OSI Global: Appeals court ruling grants Donald Trump broad powers to deploy troops to American cities

    Source: The Conversation – Canada – By Jack L. Rozdilsky, Associate Professor of Disaster and Emergency Management, York University, Canada

    Residents of Los Angeles will need to get used to federally controlled National Guard troops operating on their streets. Due to a ruling from an appeals court on June 19, United States President Donald Trump now has broad authority to deploy military forces in American cities.

    This is a troubling development. All presidents have held extraordinary powers to deploy military troops domestically. But Trump stands apart in his apparent keen interest in manufacturing false emergencies to exploit extraordinary power.

    An 1878 law called the Posse Comitatus Act restricts using the military for domestic law enforcement. The broader principle being challenged by Trump’s actions in L.A. is the norm of the military not being allowed to interfere in the affairs of civilian governance.

    Injunctions and appeals

    Five months into Trump’s presidency, L.A. has been targeted for aggressive immigration enforcement. In a pluralistic city where dozens of languages and nationalities peacefully co-exist, some Angelenos believe they are experiencing an attack on the city’s most essential social fabric.

    On June 7, Trump acted under United States Code Title 10 provisions to take over command and control of California’s National Guard. Federalized military forces were deployed.

    The objective was to counter what Trump argued was a form of rebellion against the authority of the government of the United States. In fact, these “rebellions” were largely peaceful protests in downtown L.A.

    On June 9, the U.S. District Court for the Northern District of California granted an injunction restraining the president’s use of military force in L.A. The court order supported Gov. Gavin Newsom’s contention that Trump overstepped his authority.

    On June 19, a decision from a panel of judges at the U.S. Court of Appeals for the Ninth Circuit overturned the injunction.

    What this means at the moment is that Trump does not have to return control of the troops to Newsom. California has options to continue litigation, by asking the federal appeals court to rehear the matter or perhaps by directly asking the U.S. Supreme Court to intervene.

    Moving toward authoritarianism

    Trump’s June 7 memorandum facilitating his move to overrule Newsom’s authority and seize control of 2,000 National Guard troops was based on the president defining his own so-called emergency.

    He claimed incidents of violence and disorder following aggressive immigration enforcement amounted to a form of rebellion against the U.S.

    As Trump flexes his emergency-power might, his second term has been called the “911 presidency.” He has used extraordinary emergency powers at a pace well beyond his predecessors, pressing their limits in response to what his administration portrays as serious perils overtaking the nation.

    Issues arise when the level of actual danger locally is not at all representative of what the president suggests is a full-scale national emergency. For example, demonstrations over immigration raids occupied only a tiny parcel of real estate in L.A.’s huge metropolitan area. A Los Angeles-based rebellion against the U.S. was not occurring.

    As dissent over aggressive immigration enforcement actions grew, localized clashes with law enforcement did occur. Mutual aid surged into Los Angeles, where neighbouring California law enforcement agencies acted to assist one another. The law enforcement challenges never rose to the level of the governor of California requesting additional federal support.

    Shortly after the federal government took over the California National Guard, Newsom said the move was purposefully inflammatory.

    In addition to declaring dubious emergencies to amass power, stoking violence is a characteristic of authoritarian rulers. Creating fear, division and feelings of insecurity can lead to community crises. Trump did not need to wait for a crisis; it seems he simply invented one.

    No guardrails

    The expression “out of kilter” comes to mind as Trump inches closer to invoking the Insurrection Act of 1807. If he does, the situation will look quite similar in practice to what is happening now in Los Angeles.

    Five years ago, Trump flirted with invoking the Insurrection Act during Black Lives Matter unrest in Washington, D.C., in and around Lafayette Park.

    As recent L.A. protests intensified, Trump stated: “We’re going to have troops everywhere.”

    Currently, there are few guardrails in place to prevent a rogue president from misusing the military in domestic civilian affairs. Trump has been coy about whether he would tap into the greater powers available to him under the Insurrection Act.

    Real emergencies presenting existential threats to America do persist. Nuclear proliferation, climate change and pandemics need serious leaders. But politically exploiting last-resort emergency laws designed to provide options to deal with genuine existential threats — not to weaponize them against protesters demonstrating against public policy — is absurd.

    Jack L. Rozdilsky receives support for research communication and public scholarship from York University. He also has received research support from the Canadian Institutes of Health Research.

    ref. Appeals court ruling grants Donald Trump broad powers to deploy troops to American cities – https://theconversation.com/appeals-court-ruling-grants-donald-trump-broad-powers-to-deploy-troops-to-american-cities-258894

    MIL OSI – Global Reports

  • MIL-OSI Global: To spur the construction of affordable, resilient homes, the future is concrete

    Source: The Conversation – USA – By Pablo Moyano Fernández, Assistant Professor of Architecture, Washington University in St. Louis

    A modular, precast system of concrete ‘rings’ can be connected in different ways to build a range of models of energy-efficient homes. Pablo Moyano Fernández, CC BY-SA

    Wood is, by far, the most common material used in the U.S. for single-family home construction.

    But wood construction isn’t engineered for long-term durability, and it often underperforms, particularly in the face of increasingly common extreme weather events.

    In response to these challenges, I believe mass-produced concrete homes can offer affordable, resilient housing in the U.S. By leveraging the latest innovations of the precast concrete industry, this type of homebuilding can meet the needs of a changing world.

    Wood’s rise to power

    Over 90% of the new homes built in the U.S. rely on wood framing.

    Wood has deep historical roots as a building material in the U.S., dating back to the earliest European settlers who constructed shelters using the abundant native timber. One of the most recognizable typologies was the log cabin, built from large tree trunks notched at the corners for structural stability.

    Log cabins were popular in the U.S. during the 18th and 19th centuries.
    Heritage Art/Heritage Images via Getty Images

    In the 1830s, wood construction underwent a significant shift with the introduction of balloon framing. This system used standardized, sawed lumber and mass-produced nails, allowing much smaller wood components to replace the earlier heavy timber frames. It could be assembled by unskilled labor using simple tools, making it both accessible and economical.

    In the early 20th century, balloon framing evolved into platform framing, which became the dominant method. By using shorter lumber lengths, platform framing allowed each floor to be built as a separate working platform, simplifying construction and improving its efficiency.

    The proliferation and evolution of wood construction helped shape the architectural and cultural identity of the nation. For centuries, wood-framed houses have defined the American idea of home – so much so that, even today, when Americans imagine a house, they typically envision one built of wood.

    A suburban housing development from the 1950s being built with platform framing.
    H. Armstrong Roberts/ClassicStock via Getty Images

    Today, light-frame wood construction dominates the U.S. residential market.

    Wood is relatively affordable and readily available, offering a cost-effective solution for homebuilding. Contractors are familiar with wood construction techniques. In addition, building codes and regulations have long been tailored to wood-frame systems, further reinforcing their prevalence in the housing industry.

    Despite its advantages, wood light-frame construction presents several important limitations. Wood is vulnerable to fire. And in hurricane- and tornado-prone regions, wood-framed homes can be damaged or destroyed.

    Wood is also highly susceptible to water-related issues, such as swelling, warping and structural deterioration caused by leaks or flooding. Vulnerability to termites, mold, rot and mildew further compromises the longevity and safety of wood-framed structures, especially in humid or poorly ventilated environments.

    The case for concrete

    Meanwhile, concrete has revolutionized architecture and engineering over the past century. In my academic work, I’ve studied, written and taught about the material’s many advantages.

    The material offers unmatched strength and durability, while also allowing design flexibility and versatility. It’s low-cost and low-maintenance, and it has high thermal mass properties, which refers to the material’s ability to absorb and store heat during the day, and slowly release it during the cooler nights. This can lower heating and cooling costs.

    Properly designed concrete enclosures offer exceptional performance against a wide range of hazards. Concrete can withstand fire, flooding, mold, insect infestation, earthquakes, hail, hurricanes and tornadoes.

    It’s commonly used for home construction in many parts of the world, such as Europe, Japan, Mexico, Brazil and Argentina, as well as India and other parts of Southeast Asia.

    However, despite their multiple benefits, concrete single-family homes are rare in the U.S.

    That’s because most concrete structures are built using a process called cast-in-place. In this technique, the concrete is formed and poured directly at the construction site. The method relies on built-in-place molds. After the concrete is cast and cured over several days, the formwork is removed.

    This process is labor-intensive and time-consuming, and it often produces considerable waste. This is particularly an issue in the U.S., where labor is more expensive than in other parts of the world. The material and labor cost can be as high as 35% to 60% of the total construction cost.

    Portland cement, the binding agent in concrete, requires significant energy to produce, resulting in considerable carbon dioxide emissions. However, this environmental cost is often offset by concrete’s durability and long service life.

    Concrete’s design flexibility and structural integrity make it particularly effective for large-scale structures. So in the U.S., you’ll see it used for large commercial buildings, skyscrapers and most highways, bridges, dams and other critical infrastructure projects.

    But when it comes to single-family homes, cast-in-place concrete poses challenges to contractors. There are the higher initial construction costs, along with a lack of subcontractor expertise. For these reasons, most builders and contractors stick with what they know: the wood frame.

    A new model for home construction

    Precast concrete, however, offers a promising alternative.

    Unlike cast-in-place concrete, precast systems allow for off-site manufacturing under controlled conditions. This improves the quality of the structure, while also reducing waste and labor.

    The CRETE House, a prototype I worked on in 2017 alongside a team at Washington University in St. Louis, showed the advantages of a precast home construction.

    To build the precast concrete home, we used ultra-high-performance concrete, one of the latest advances in the concrete industry. Compared with conventional concrete, it’s about six times stronger, virtually impermeable and more resistant to freeze-thaw cycles. Ultra-high-performance concrete can last several hundred years.

    The strength of the CRETE House was tested by shooting a piece of wood at 120 mph (193 kph) to simulate flying debris from an F5 tornado. The projectile was unable to breach the wall, which was only 2 inches (5.1 centimeters) thick.

    The wall of the CRETE House was able to withstand a piece of wood fired at 120 mph (193 kph).

    Building on the success of the CRETE House, I designed the Compact House as a solution for affordable, resilient housing. The house consists of a modular, precast concrete system of “rings” that can be connected to form the entire structure – floors, walls and roofs – creating airtight, energy-efficient homes. A series of different rings can be chosen from a catalog to deliver different models that can range in size from 270 to 990 square feet (25 to 84 square meters).

    The precast rings can be transported on flatbed trailers and assembled into a unit in a single day, drastically reducing on-site labor, time and cost.

    Since the rings are cast in durable, reusable molds, the homes can be easily mass-produced. When precast concrete homes are mass-produced, the cost can be competitive with traditional wood-framed homes. Furthermore, the homes are designed to last far beyond 100 years – much longer than typical wood structures – while significantly lowering utility bills, maintenance expenses and insurance premiums.

    The project is also envisioned as an open-source design. This means that the molds – which are expensive – are available for any precast producer to use and modify.

    The Compact House is made using ultra-high-performance concrete.
    Pablo Moyano Fernández, CC BY-SA

    Leveraging a network that’s already in place

    Two key limitations of precast concrete construction are the size and weight of the components and the distance to the project site.

    Precast elements must comply with standard transportation regulations, which impose restrictions on both size and weight in order to pass under bridges and prevent road damage. As a result, components are typically limited to dimensions that can be safely and legally transported by truck. Each of the Compact House’s pieces is small enough to be transported on a standard trailer.

    Additionally, transportation costs become a major factor beyond a certain range. In general, the practical delivery radius from a precast plant to a construction site is 500 miles (805 kilometers). Anything beyond that becomes economically unfeasible.

    However, the infrastructure to build precast concrete homes is already largely in place. Since precast concrete is often used for office buildings, schools, parking complexes and large apartment buildings, there’s already an extensive national network of manufacturing plants capable of producing and delivering components within that 500-mile radius.

    There are other approaches to building homes with concrete: Homes can use concrete masonry units, which are similar to cinder blocks. This is a common technique around the world. Insulated concrete forms involve rigid foam blocks that are stacked like Lego bricks and then filled with poured concrete, creating a structure with built-in insulation. And there’s even 3D-printed concrete, a rapidly evolving technology that is in its early stages of development.

    However, none of these approaches uses precast concrete modules – the rings in my prototypes – and all therefore require substantially longer on-site time and labor.

    To me, precast concrete homes offer a compelling vision for the future of affordable housing. They signal a generational shift away from short-term construction and toward long-term value – redefining what it means to build for resilience, efficiency and equity in housing.

    An image of North St. Louis, taken from Google Earth, showing how vacant land can be repurposed using precast concrete homes.
    Pablo Moyano Fernández, CC BY-SA

    This article is part of a series centered on envisioning ways to deal with the housing crisis.

    Pablo Moyano Fernández does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. To spur the construction of affordable, resilient homes, the future is concrete – https://theconversation.com/to-spur-the-construction-of-affordable-resilient-homes-the-future-is-concrete-254561

    MIL OSI – Global Reports

  • MIL-OSI Global: No country for old business owners: Economic shifts create a growing challenge for America’s aging entrepreneurs

    Source: The Conversation – USA – By Nancy Forster-Holt, Clinical Associate Professor of Innovation and Entrepreneurship, University of Rhode Island

    Americans love small businesses. We dedicate a week each year to applauding them, and spend Small Business Saturday shopping locally. Yet hiding in plain sight is an enormous challenge facing small business owners as they age: retiring with dignity and foresight. The current economic climate is making this even more difficult.

    As a professor who studies aging and business, I’ve long viewed small business owners’ retirement challenges as a looming crisis. The issue is now front and center for millions of entrepreneurs approaching retirement. Small enterprises make up more than half of all privately held U.S. companies, and for many of their owners, the business is their retirement plan.

    But while owners often hope to finance their golden years by selling their companies, only 20% of small businesses are ready for sale even in good times, according to the Exit Planning Institute. And right now, conditions are far from ideal. An economic stew of inflation, supply chain instability and high borrowing costs means that interest from potential buyers is cooling.

    For many business owners, retirement isn’t a distant concern. In the U.S., baby boomers – who are currently 61 to 79 years old – own about 2.3 million businesses. Altogether, they generate about US$5 trillion in revenue and employ almost 25 million people. These entrepreneurs have spent decades building businesses that often are deeply rooted in their communities. They don’t have time to ride out economic chaos, and their optimism is at a 50-year low.

    New policies, new challenges

    You can’t blame them for being gloomy. Recent policy shifts have only made life harder for business owners nearing retirement. Trade instability, whipsawing tariff announcements and disrupted supply chains have eroded already thin margins. Some businesses – generally larger ones with more negotiating power – are absorbing extra costs rather than passing them on to shoppers. Others have no choice but to raise prices, to customers’ dismay. Inflation has further squeezed profits.

    At the same time, with a few notable exceptions, buyers and capital have grown scarce. Acquirers and liquidity have dried up across many sectors. The secondary market – a barometer of broader investor appetite – now sees more sellers than buyers. These are textbook symptoms of a “flight to safety,” a market shift that drags out sale timelines and depresses valuations – all while Main Street business owners age out. These entrepreneurs typically have one shot at retirement – if any.

    Adding to these woes, many small businesses are part of what economists call regional “clusters,” providing services to nearby universities, hospitals and local governments. When those anchor institutions face budget cuts – as is happening now – small business vendors are often the first to feel the impact.

    Research shows that many aging owners actually double down in weak economic times, sinking ever more time and money into their businesses in a psychological pattern known as “escalating commitment.” The result is a troubling phenomenon scholars refer to as “benign entrapment.” Aging entrepreneurs can remain attached to their businesses not because they want to, but because they see no viable exit.

    This growing crisis isn’t about bad personal planning — it’s a systemic failure.

    Rewriting the playbook on small business policy

    A key mistake that policymakers make is to lump all small business owners together into one group. That causes them to overlook important differences. After all, a 68-year-old carpenter trying to retire doesn’t have much in common with a 28-year-old tech founder pitching a startup. Policymakers may cheer for high-growth “unicorns,” but they often overlook the “cows and horses” that keep local economies running.

    Even among older business owners, circumstances vary based on local conditions. Two retiring carpenters in different towns may face vastly different prospects based on the strength of their local economies. No business, and no business owner, exists in a vacuum.

    A small business owner in Rochester, Vt., discusses the challenges of retirement in a news segment from WCAX-TV.

    Relatedly, when small businesses fail to transition, it can have consequences for the local economy. Without a buyer, many enterprises will simply shut down. And while closures can be long-planned and thoughtful, when a business closes suddenly, it’s not just the owner who loses. Employees are left scrambling for work. Suppliers lose contracts. Communities lose essential services.

    Four ways to help aging entrepreneurs

    That’s why I think policymakers should reimagine how they support small businesses, especially owners nearing the end of their careers.

    First, small business policy should be tailored to age. A retirement-ready business shouldn’t be judged solely by its growth potential. Rather, policies should recognize stability and community value as markers of success. The U.S. Small Business Administration and regional agencies can provide resources specifically for retirement planning that starts early in a business’s life, including how to increase the value of the business and how to attract acquirers in its later stages.

    Second, exit infrastructure should be built into local entrepreneurial ecosystems. Entrepreneurial ecosystems are built to support business entry – think incubators and accelerators – but not exit. Just as there are accelerators for launching businesses, there should be programs to support winding them down. These could include confidential peer forums, retirement-readiness clinics, succession matchmaking platforms and flexible financing options for acquisition.

    Third, chaos isn’t good for anybody. Fluctuations in capital gains taxes, estate tax thresholds and tariffs make planning difficult and reduce business value in the eyes of potential buyers. Stability encourages confidence on both sides of a transaction.

    And finally, policymakers should include ripple-effect analysis in budget decisions. When universities, hospitals or governments cut spending, small business vendors often absorb much of the shock. Policymakers should account for these downstream impacts when shaping local and federal budgets.

    If we want to truly support small businesses and their owners, it’s important to honor the lifetime arc of entrepreneurship – not just the launch and growth, but the retirement, too.

    Nancy Forster-Holt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. No country for old business owners: Economic shifts create a growing challenge for America’s aging entrepreneurs – https://theconversation.com/no-country-for-old-business-owners-economic-shifts-create-a-growing-challenge-for-americas-aging-entrepreneurs-254537

    MIL OSI – Global Reports

  • MIL-OSI Global: How the end of carbon capture could spark a new industrial revolution

    Source: The Conversation – USA – By Andres Clarens, Professor of Civil and Environmental Engineering, University of Virginia

    Steelmaking uses a lot of energy, making it one of the highest greenhouse gas-emitting industries.
    David McNew/Getty Images

    The U.S. Department of Energy’s decision to claw back US$3.7 billion in grants from industrial demonstration projects may create an unexpected opening for American manufacturing.

    Many of the grant recipients were deploying carbon capture and storage – technologies that are designed to prevent industrial carbon pollution from entering the atmosphere by capturing it and injecting it deep underground. The approach has long been considered critical for reducing the contributions chemicals, cement production and other heavy industries make to climate change.

    However, the U.S. policy reversal could paradoxically accelerate emissions cuts from the industrial sector.

    An emissions reality check

    Heavy industry is widely viewed as the toughest part of the economy to clean up.

    The U.S. power sector has made progress, cutting emissions 35% since 2005 as coal-fired power plants were replaced with cheaper natural gas, solar and wind energy. More than 93% of new grid capacity installed in the U.S. in 2025 was forecast to be solar, wind and batteries. In transportation, electric vehicles are the fastest-growing segment of the U.S. automotive market and will lead to meaningful reductions in pollution.

    But U.S. industrial emissions have been mostly unchanged, in part because of the massive amount of coal, gas and oil required to make steel, concrete, aluminum, glass and chemicals. Together these materials account for about 22% of U.S. greenhouse gas emissions.

    The global industrial landscape is changing, though, and U.S. industries cannot, in isolation, expect that yesterday’s means of production will be able to compete in a global marketplace.

    Even without domestic mandates to reduce their emissions, U.S. industries face powerful economic pressures. The EU’s new Carbon Border Adjustment Mechanism imposes a tax on the emissions associated with imported steel, chemicals, cement and aluminum entering European markets. Similar policies are being considered by Canada, Japan, Singapore, South Korea and the United Kingdom, and were even floated in the United States.

    The false promise of carbon capture

    The appeal of carbon capture and storage, in theory, was that it could be bolted on to an existing factory with minimal changes to the core process and the carbon pollution would go away.

    Government incentives for carbon capture allow producers to keep using polluting technologies and prop up gas-powered chemical production or coal-powered concrete production.

    The Trump administration’s pullback of carbon capture and storage grants now removes some of these artificial supports.

    Without the expectation that carbon capture will help them meet regulations, companies may have more room to focus on materials breakthroughs that could revolutionize manufacturing while solving industries’ emissions problems.

    The materials innovation opportunity

    So, what might emissions-lowering innovation look like for industries such as cement, steel and chemicals? As a civil and environmental engineer who has worked on federal industrial policy, I study the ways these industries intersect with U.S. economic competitiveness and our built environment.

    There are many examples of U.S. innovation to be excited about. Consider just a few industries:

    Cement: Cement is one of the most widely used materials on Earth, but the technology has changed little over the past 150 years. Today, its production generates roughly 8% of total global carbon pollution. If cement production were a country, it would be the world’s third-largest emitter, after China and the United States.

    Researchers are looking at ways to make concrete that can shed heat or be lighter in weight to significantly reduce the cost of building and cooling a home. Sublime Systems developed a way to produce cement with electricity instead of coal or gas. The company lost its federal Industrial Demonstrations Program grant in May 2025, but it has a new agreement with Microsoft.

    Making concrete do more could accelerate the transition. Researchers at Stanford and separately at MIT are developing concrete that can act as a capacitor and store over 10 kilowatt-hours of energy per cubic meter. Such materials could potentially store electricity from your solar roof or allow for roadways that can charge cars in motion.

    How concrete could be used as a capacitor. MIT.

    Technologies like these could give U.S. companies a competitive advantage while lowering emissions. Heat-shedding concrete cuts air conditioning demand, lighter formulations require less material per structure, and energy-storing concrete could potentially replace carbon-intensive battery manufacturing.

    Steel and iron: Steel and iron production generate about 7% of global emissions with centuries-old blast furnace processes that use intense heat to melt iron ore and burn off impurities. A hydrogen-based steelmaking alternative exists today that emits only water vapor, but it requires new supply chains, infrastructure and production techniques.

    U.S. Steel has been developing techniques to create stronger microstructures within steel, allowing structures to be built with 50% less material yet greater strength than conventional designs. When a skyscraper needs that much less steel to achieve the same structural integrity, that eliminates millions of tons of iron ore mining, coal-fired blast furnace operations and transportation emissions.

    Chemicals: Chemical manufacturing has created simultaneous crises over the past 50 years: PFAS “forever chemicals” and microplastics have been showing up in human blood and across ecosystems, and the industry generates a large share of U.S. industrial emissions.

    Companies are developing ways to produce chemicals using engineered enzymes instead of traditional petrochemical processes, achieving 90% lower emissions in a way that could reduce production costs. These bio-based chemicals can naturally biodegrade, and the chemical processes operate at room temperature instead of requiring high heat that uses a lot of energy.

    Is there a silver bullet without carbon capture?

    While carbon capture and storage might not be the silver bullet for reducing emissions that many people thought it would be, new technologies for managing industrial heat might turn out to be the closest thing to one.

    Most industrial processes require temperatures between 300 and 1,830 degrees Fahrenheit (150 and 1,000 degrees Celsius) for everything from food processing to steel production. Currently, industries burn fossil fuels directly to generate this heat, creating emissions that electric alternatives cannot easily replace. Heat batteries may offer a breakthrough solution by storing renewable electricity as thermal energy, then releasing that heat on demand for industrial processes.

    How thermal batteries work. CNBC.

    Companies such as Rondo Energy are developing systems that store wind and solar power in bricklike materials heated to extreme temperatures. Essentially, they convert electricity into heat during times when electricity is abundant, usually at night. A manufacturing facility can later use that heat, which allows it to reduce energy costs and improve grid reliability by not drawing power at the busiest times. The Trump administration cut funding for projects working with Rondo’s technology, but the company’s products are being tested in other countries.
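    The principle is ordinary sensible-heat storage: the energy banked in a hot mass is E = m · c · ΔT. A back-of-envelope sketch, using illustrative assumed numbers rather than Rondo’s actual specifications:

    ```python
    # Rough physics of a heat battery: energy stored as sensible heat.
    # All figures below are assumptions for illustration only.
    mass_kg = 1000            # one metric ton of brick-like storage material
    c_j_per_kg_k = 840        # approximate specific heat of brick, J/(kg*K)
    t_charged_c = 1000        # assumed fully charged temperature, deg C
    t_ambient_c = 20          # starting (ambient) temperature, deg C

    energy_j = mass_kg * c_j_per_kg_k * (t_charged_c - t_ambient_c)
    energy_kwh = energy_j / 3.6e6   # joules per kilowatt-hour

    print(f"~{energy_kwh:.0f} kWh of heat per ton")   # → "~229 kWh of heat per ton"
    ```

    Even with these rough figures, a single ton of brick holds a few hundred kilowatt-hours of dispatchable heat, which is why facilities stack many tons of it.
    
    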

    Industrial heat pumps provide another pathway by amplifying waste heat to reach the high temperatures manufacturing requires, without using as much fossil fuel.

    The path forward

    The Department of Energy’s decision forces industrial America into a defining moment. One path leads backward toward pollution-intensive business as usual, propping up obsolete processes. The other path drives forward through innovation.

    Carbon capture offered an expensive Band-Aid on old technology. Investing in materials innovation and new techniques for making them promises fundamental transformation for the future.

    Andres Clarens receives funding from the National Science Foundation and the Alfred P. Sloan Foundation.

    ref. How the end of carbon capture could spark a new industrial revolution – https://theconversation.com/how-the-end-of-carbon-capture-could-spark-a-new-industrial-revolution-257894


  • MIL-OSI Global: I’m an expert in crafting public health messages: Here are 3 marketing strategies I use to make Philadelphia healthier

    Source: The Conversation – USA – By Sarah Bauerle Bass, Professor of Social and Behavioral Sciences, Temple University

    A comic book produced for Black transgender women in Philadelphia explains the benefits of using PrEP to prevent HIV infection. Wriply Bennet for the Risk Communication Laboratory, Temple University

    In Philadelphia, the leading causes of death are heart disease, cancer and unintentional drug overdose. While some of these deaths are caused by things out of our control – like genetics – many are largely preventable.

    Preventable deaths are the result of a series of decisions. Whether a person decides to smoke, eat lots of fried foods or be a couch potato, their decisions – sometimes unconsciously – can affect their health.

    I’m a health communication expert and public health researcher at Temple University in North Philadelphia. I began working in public health in the late 1980s at the beginning of the HIV/AIDS epidemic, and before that I worked in marketing and public relations. I have spent my career thinking about how health decisions are like many of the decisions consumers make each day around which products to buy.

    One key difference with health decisions is the inherent risks involved. There isn’t much risk in trying a new brand of cereal, but there is risk in riding a motorcycle without a helmet.

    Many people have a “that won’t happen to me” attitude when making a decision that involves risk. This element of “risk perception” has guided my interest in health decisions and how to use commercial marketing techniques – the same ones companies use to sell products – to encourage people to get vaccinated, get a colonoscopy or get treated for a medical condition.

    Temple students involved in the RapidVax project talk to Kensington residents about COVID-19 vaccinations during the pandemic.
    Temple University College of Public Health

    Breaking demographics into psychographics

    One strategy I use is segmentation analysis.

    Segmentation analysis is the process of looking at groups of people who may look like they are all similar on the surface – such as Black women from North Philadelphia – and then breaking them into smaller groups based on differences in their attitudes, beliefs or behaviors.

    Looking at these “psychographics” instead of demographics like age or sex can help public health communication researchers better understand how to communicate effectively.

    For example, I led a study in 2021 that looked at how connected transgender women living in Philadelphia and the San Francisco Bay Area felt to other members of the trans community. We wanted to see if messaging about PrEP, or pre-exposure prophylaxis, the medication used to prevent HIV infection, would need to be different depending on how connected they felt.

    We found that participants who were more engaged with the trans community were not only more knowledgeable about PrEP, but they were also more likely to see the benefits of using it compared with those who were less engaged.

    This indicates that strategies to reach those not as connected may need to include, for example, providing more basic information about what PrEP is and how it works.

    An example of perceptual mapping that shows different attitudes and beliefs around the HIV prevention medication PrEP.
    Temple University College of Public Health

    Mathematical models and 3D maps

    Another powerful marketing tool that I use is a process known as perceptual mapping and vector message modeling.

    Using simple survey answers, we can mathematically model how people are thinking about a health decision and present it in a three-dimensional map.

    Similar to how someone might think about the relationship between where cities or countries are in relation to each other – such as where Philadelphia is in relation to New York or Chicago – we can take answers from a survey and convert them into distances. We ask people to agree or disagree to statements about the benefits or barriers to a decision and enter their responses into a computer program to create the map.
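    The mapping step can be sketched with classical multidimensional scaling, the standard way to turn pairwise distances into map coordinates. The survey items and ratings below are invented for illustration, not data from the studies described here:

    ```python
    import numpy as np

    # A minimal perceptual-mapping sketch via classical MDS.
    # Rows: belief statements; columns: five respondents' 1-5 agreement ratings.
    # (Hypothetical items and scores, for illustration only.)
    ratings = np.array([
        [5, 4, 5, 4, 5],   # "PrEP prevents HIV"
        [4, 5, 4, 5, 4],   # "PrEP is safe"
        [1, 2, 1, 2, 1],   # "PrEP is hard to get"
        [2, 1, 2, 1, 2],   # "I worry about side effects"
    ], dtype=float)

    # Convert agreement profiles into pairwise distances between beliefs.
    diff = ratings[:, None, :] - ratings[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=2))

    # Classical MDS: double-centre the squared distances, then take the
    # top eigenvectors as map coordinates.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]
    coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))

    # Beliefs with similar agreement patterns land close together on the map.
    print(np.round(coords, 2))
    ```

    Beliefs that respondents rate similarly end up near each other on the resulting map, which is what lets researchers see clusters of attitudes at a glance.
    
    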

    We can then do vector message modeling, which shows how to move the group toward the desired decision.

    Think back to high school physics when you may have learned about the amount of force, or pushing and pulling, needed to move one object toward another. Vector message modeling helps us figure out which beliefs to push or pull against to get the group to move toward a particular decision, and it helps us create the most persuasive messages for that group.

    When we use vector modeling along with segmentation analysis, we can also compare how messaging may need to be similar or different for different groups.

    For example, I used segmentation analysis and then perceptual mapping and vector message modeling to understand how medical mistrust might affect the decision to get vaccinated for COVID-19 among a group of Philadelphians who had not yet been vaccinated.

    Education materials created after using commercial marketing techniques to identify persuasive messages about COVID-19 booster shots.
    Temple University College of Public Health

    Our team then looked at perceptual maps and vector message modeling by levels of mistrust. The vectors showed that those with high levels of medical mistrust would be more likely to respond to messages that addressed concerns about the pandemic being a hoax, or the worry that minorities wouldn’t get the same treatment as others.

    This allowed us to think about how to build in messages around those issues in public media campaigns or other communication strategies that encourage vaccination.

    Decision-making tools

    I have used these methods to create and test a number of different communication strategies to influence health decisions.

    For example, I’ve developed web-based tools that have been used in hospitals and clinics in Philadelphia to encourage methadone patients with hepatitis C to receive antiviral treatment for their infection, Black cancer patients to take part in a clinical trial or to get genetic testing, and patients with low literacy and higher risk of colorectal cancer to have a colonoscopy.

    Staff members from the Risk Communication Laboratory organize materials to educate North Philadelphia residents about COVID-19 booster shots.
    Temple University College of Public Health

    My colleagues and I have also developed posters, booklets and social media posts that encourage low-income and vaccine-hesitant Philadelphians in Kensington to get COVID-19 booster shots; educational slides for low-literacy Philadelphia adults on dirty bombs and how the radioactive weapons might be used in a terror attack; and a comic book for trans women to learn about the benefits of PrEP use.

    Getting people to make better decisions about their health can be an uphill battle. We all have our reasons for not doing things that are good for us. For example, what did you eat for lunch today? Was it healthy? If not, why did you eat it?

    My job is to figure out what makes people do what they do, and then help them make decisions that keep them healthy.

    Read more of our stories about Philadelphia.

    Sarah Bauerle Bass has received funding from a number of organizations, including the National Institutes of Health, the American Cancer Society, Pennsylvania and Philadelphia Departments of Health, and independent pharma research grants from Gilead and Merck.

    ref. I’m an expert in crafting public health messages: Here are 3 marketing strategies I use to make Philadelphia healthier – https://theconversation.com/im-an-expert-in-crafting-public-health-messages-here-are-3-marketing-strategies-i-use-to-make-philadelphia-healthier-254905


  • MIL-OSI Global: How do atoms form? A physicist explains where the atoms that make up everything around come from

    Source: The Conversation – USA – By Stephen L. Levy, Associate Professor of Physics and Applied Physics and Astronomy, Binghamton University, State University of New York

    Many heavy atoms form from a supernova explosion, the remnants of which are shown in this image. NASA/ESA/Hubble Heritage Team

    Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


    How do atoms form? – Joshua, age 7, Shoreview, Minnesota


    Richard Feynman, a famous theoretical physicist who won the Nobel Prize, said that if he could pass on only one piece of scientific information to future generations, it would be that all things are made of atoms.

    Understanding how atoms form is a fundamental and important question, since they make up everything with mass.

    The question of where atoms come from requires a lot of physics to be answered completely – and even then, physicists like me only have good guesses to explain how some atoms are formed.

    What is an atom?

    An atom consists of a heavy center, called the nucleus, made of particles called protons and neutrons. An atom has lighter particles called electrons that you can think of as orbiting around the nucleus.

    The electrons each carry one unit of negative charge, the protons each carry one unit of positive charge, and the neutrons have no charge. An atom has the same number of protons as electrons, so it is neutral − it has no overall charge.

    An atom consists of positively charged protons, neutrally charged neutrons and negatively charged electrons.
    AG Caesar/Wikimedia Commons, CC BY-SA

    Now, most of the atoms in the universe are the two simplest kinds: hydrogen, which has one proton, zero neutrons and one electron; and helium, which has two protons, two neutrons and two electrons. Of course, on Earth there are lots of atoms besides these that are just as common, such as carbon and oxygen, but I’ll talk about those soon.

    An element is what scientists call a group of atoms that are all the same, because they all have the same number of protons.

    When did the first atoms form?

    Most of the universe’s hydrogen and helium atoms formed around 400,000 years after the Big Bang, which is the name for when scientists think the universe began, about 14 billion years ago.

    Why did they form at that time? Astronomers know from observing distant exploding stars that the size of the universe has been getting bigger since the Big Bang. When the hydrogen and helium atoms first formed, the universe was about 1,000 times smaller than it is now.

    And based on their understanding of physics, scientists believe that the universe was much hotter when it was smaller.

    Before this time, the electrons had too much energy to settle into orbits around the hydrogen and helium nuclei. So, the hydrogen and helium atoms could form only once the universe cooled down to something like 5,000 degrees Fahrenheit (2,760 degrees Celsius). For historical reasons, this process is misleadingly called recombination − combination would be more descriptive.

    The helium and deuterium − a heavier form of hydrogen − nuclei formed even earlier, just a few minutes after the Big Bang, when the temperature was above 1 billion F (556 million C). Protons and neutrons can collide and form nuclei like these only at very high temperatures.

    Scientists believe that almost all the ordinary matter in the universe is made of about 90% hydrogen atoms and 8% helium atoms.

    How do more massive atoms form?

    So, the hydrogen and helium atoms formed during recombination, when the cooler temperature allowed electrons to fall into orbits. But you, I and almost everything on Earth is made of many more massive atoms than just hydrogen and helium. How were these atoms made?

    The surprising answer is that more massive atoms are made in stars. To make atoms with several protons and neutrons stuck together in the nucleus requires the type of high-energy collisions that occur in very hot places. The energy needed to form a heavier nucleus needs to be large enough to overcome the repulsive electric force that positive charges, like two protons, feel with each other.

    The immense heat and pressure in stars can form atoms through a process called fusion.
    NASA/SDO

    Protons and neutrons also have another property – kind of like a different type of charge – that is strong enough to bind them together once they are able to get very close together. This property is called the strong force, and the process that sticks these particles together is called fusion.

    Scientists believe that most of the elements from carbon up to iron are fused in stars heavier than our Sun, where the temperature can exceed 1 billion F (556 million C) – the same temperature that the universe was when it was just a few minutes old.

    This periodic table shows which astronomical processes scientists believe are responsible for forming each of the elements.
    Cmglee/Wikimedia Commons (image) and Jennifer Johnson/OSU (data), CC BY-SA

    But even in hot stars, elements heavier than iron and nickel won’t form. These require extra energy, because the heavier elements can more easily break into pieces.

    In a dramatic event called a supernova, the inner core of a heavy star suddenly collapses after it runs out of fuel to burn. During the powerful explosion this collapse triggers, elements that are heavier than iron can form and get ejected out into the universe.

    Astronomers are still figuring out the details of other fantastic stellar events that form larger atoms. For example, colliding neutron stars can release enormous amounts of energy – and elements such as gold – on their way to forming black holes.

    Understanding how atoms are made just requires learning a little general relativity, plus some nuclear, particle and atomic physics. But to complicate matters, there is other stuff in the universe that doesn’t appear to be made from normal atoms at all, called dark matter. Scientists are investigating what dark matter is and how it might form.


    Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

    And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

    Stephen L. Levy receives funding from the National Science Foundation and the National Institutes of Health. He is affiliated with CyteQuest, Inc.

    ref. How do atoms form? A physicist explains where the atoms that make up everything around come from – https://theconversation.com/how-do-atoms-form-a-physicist-explains-where-the-atoms-that-make-up-everything-around-come-from-256172


  • MIL-OSI Global: Astronomy has a major data problem – simulating realistic images of the sky can help train algorithms

    Source: The Conversation – USA – By John Peterson, Associate Professor of Physics and Astronomy, Purdue University

    A simulation of a set of synthetic galaxies. Photons are sampled from these galaxies and have been simulated through the Earth’s atmosphere, a telescope and a sensor using a code called PhoSim. John Peterson/Purdue

    Professional astronomers don’t make discoveries by looking through an eyepiece like you might with a backyard telescope. Instead, they collect digital images in massive cameras attached to large telescopes.

    Just as you might have an endless library of digital photos stored in your cellphone, many astronomers collect more photos than they would ever have the time to look at. So astronomers like me look at some of the images, build algorithms from them, and later use computers to combine and analyze the rest.

    But how can we know that the algorithms we write will work, when we don’t even have time to look at all the images? We can practice on some of the images, but one new way to build the best algorithms is to simulate some fake images as accurately as possible.

    With fake images, we can customize the exact properties of the objects in the image. That way, we can see if the algorithms we’re training can uncover those properties correctly.

    My research group and collaborators have found that the best way to create fake but realistic astronomical images is to painstakingly simulate light and its interaction with everything it encounters. Light is composed of particles called photons, and we can simulate each photon. We wrote a publicly available code to do this called the photon simulator, or PhoSim.

    The goal of the PhoSim project is to create realistic fake images that help us understand where distortions in images from real telescopes come from. The fake images help us train programs that sort through images from real telescopes. And the results from studies using PhoSim can also help astronomers correct distortions and defects in their real telescope images.

    The data deluge

    But first, why is there so much astronomy data in the first place? This is primarily due to the rise of dedicated survey telescopes. A survey telescope maps out a region on the sky rather than just pointing at specific objects.

    These observatories all have a large collecting area, a large field of view and a dedicated survey mode to collect as much light over a period of time as possible. Major surveys from the past two decades include the SDSS, Kepler, Blanco-DECam, Subaru HSC, TESS, ZTF and Euclid.

    The Vera Rubin Observatory in Chile has recently finished construction and will soon join those. Its survey begins soon after its official “first look” event on June 23, 2025. It will have a particularly strong set of survey capabilities.

    The Rubin observatory can look at a region of the sky all at once that is several times larger than the full Moon, and it can survey the entire southern celestial hemisphere every few nights.

    The Vera Rubin Observatory will take in lots of light to construct maps of the sky.
    Rubin Observatory/NSF/AURA/B. Quint, CC BY-SA

    A survey can shed light on practically every topic in astronomy.

    Some of the ambitious research questions include: making measurements about dark matter and dark energy, mapping the Milky Way’s distribution of stars, finding asteroids in the solar system, building a three-dimensional map of galaxies in the universe, finding new planets outside the solar system and tracking millions of objects that change over time, including supernovas.

    All of these surveys create a massive data deluge. They generate tens of terabytes every night – that’s millions to billions of pixels collected in seconds. In the extreme case of the Rubin observatory, if you spent all day long looking at images equivalent to the size of a 4K television screen for about one second each, you’d still be viewing them 25 times too slowly to keep up.
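    That arithmetic can be checked with round numbers. The 20 TB nightly volume and 2-byte pixels below are illustrative assumptions, not the observatory’s exact figures (the factor of 25 quoted above comes from more precise ones), but they give the same order of magnitude:

    ```python
    # Back-of-envelope check of the survey data deluge.
    # All inputs are assumed round numbers for illustration.
    TB = 10**12                      # bytes per terabyte
    nightly_bytes = 20 * TB          # assume ~20 TB collected per night
    bytes_per_pixel = 2              # assume 16-bit pixels
    pixels_per_night = nightly_bytes // bytes_per_pixel

    k4_pixels = 3840 * 2160          # pixels on a 4K television screen
    screens = pixels_per_night // k4_pixels

    seconds_per_day = 24 * 60 * 60
    # Viewing one 4K screen's worth of pixels per second, all day long:
    days_to_view_one_night = screens / seconds_per_day
    print(f"{screens:,} 4K screens per night; "
          f"~{days_to_view_one_night:.0f} days to view at one per second")
    ```

    Even under these generous assumptions, a single night of data takes roughly two weeks of nonstop viewing, so each new night buries you deeper.
    
    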

    At this rate, no individual human could ever look at all the images. But automated programs can process the data.

    Astronomers don’t just survey an astronomical object like a planet, galaxy or supernova once, either. Often we measure the same object’s size, shape, brightness and position in many different ways under many different conditions.

    But more measurements do come with more complications. For example, measurements taken under certain weather conditions or on one part of the camera may disagree with others at different locations or under different conditions. Astronomers can correct these errors – called systematics – with careful calibration or algorithms, but only if we understand the reason for the inconsistency between different measurements. That’s where PhoSim comes in. Once corrected, we can use all the images and make more detailed measurements.

    Simulations: One photon at a time

    To understand the origin of these systematics, we built PhoSim, which can simulate the propagation of light particles – photons – through the Earth’s atmosphere and then into the telescope and camera.

    A simulation of photons traveling from a single star to the Vera Rubin Observatory, made using PhoSim. The layers of turbulence in the atmosphere move according to wind patterns (top middle), and the mirrors deform (top right) depending on the temperature and forces exerted on them. The photons with different wavelengths (colors) are sampled from a star, refract through the atmosphere and then interact with the telescope’s mirrors, filter and lenses. Finally, the photons eject electrons in the sensor (bottom middle) that are counted in pixels to make an image (bottom right). John Peterson/Purdue

    PhoSim simulates the atmosphere, including air turbulence, as well as distortions from the shape of the telescope’s mirrors and the electrical properties of the sensors. The photons are propagated using a variety of physics that predict what photons do when they encounter the air and the telescope’s mirrors and lenses.

    The simulation ends by collecting electrons that have been ejected by photons into a grid of pixels, to make an image.

    Representing the light as trillions of photons is computationally efficient and an application of the Monte Carlo method, which uses random sampling. Researchers used PhoSim to verify some aspects of the Rubin observatory’s design and estimate how its images would look.
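    In that spirit, here is a toy Monte Carlo sketch of the idea – an illustration of the method, not PhoSim’s actual physics. Photons from a single star are randomly jittered by a Gaussian atmospheric “seeing” blur and binned into pixels:

    ```python
    import random
    from collections import Counter

    # Toy photon-by-photon Monte Carlo: each photon from a point source is
    # displaced by a random atmospheric blur, then counted in a pixel grid.
    # The blur width and photon count are assumed values for illustration.
    random.seed(42)

    N_PHOTONS = 100_000
    SEEING_SIGMA = 1.5      # blur width in pixels (assumed)
    GRID = 11               # 11x11 pixel cutout centred on the star

    image = Counter()
    for _ in range(N_PHOTONS):
        # Atmospheric turbulence shifts each photon by a Gaussian offset.
        x = round(GRID // 2 + random.gauss(0, SEEING_SIGMA))
        y = round(GRID // 2 + random.gauss(0, SEEING_SIGMA))
        if 0 <= x < GRID and 0 <= y < GRID:
            image[(x, y)] += 1   # photon ejects an electron, counted per pixel

    # The brightest pixel sits at the centre of the blurred star image.
    peak = max(image, key=image.get)
    print("peak pixel:", peak, "counts:", image[peak])
    ```

    Random sampling like this converges on the star’s blurred profile without ever computing the full light distribution analytically, which is what makes the photon-by-photon approach efficient.
    
    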

    A simulation of a series of exposures of stars, galaxies and background light through the Rubin observatory using PhoSim. Photons are sampled from the objects and then interact with the Earth’s atmosphere and Rubin’s telescope and camera.
    John Peterson/Purdue

    The results are complex, but so far we’ve connected the variation in temperature across telescope mirrors directly to astigmatism – angular blurring – in the images. We’ve also studied how high-altitude turbulence in the atmosphere that can disturb light on its way to the telescope shifts the positions of stars and galaxies in the image and causes blurring patterns that correlate with the wind. We’ve demonstrated how the electric fields in telescope sensors – which are intended to be vertical – can get distorted and warp the images.

    Researchers can use these new results to correct their measurements and better take advantage of all the data that telescopes collect.

    Traditionally, astronomical analyses haven’t worried about this level of detail, but the meticulous measurements with the current and future surveys will have to. Astronomers can make the most out of this deluge of data by using simulations to achieve a deeper level of understanding.

    John Peterson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Astronomy has a major data problem – simulating realistic images of the sky can help train algorithms – https://theconversation.com/astronomy-has-a-major-data-problem-simulating-realistic-images-of-the-sky-can-help-train-algorithms-258786


  • MIL-OSI Global: Neuropathic pain has no immediate cause – research on a brain receptor may help stop this hard-to-treat condition

    Source: The Conversation – USA – By Pooja Shree Chettiar, Ph.D. Candidate in Medical Sciences, Texas A&M University

    Neuropathic pain is experienced both physically and emotionally. Salim Hanzaz/iStock via Getty Images

    Pain is easy to understand until it isn’t. A stubbed toe or sprained ankle hurts, but it makes sense because the cause is clear and the pain fades as you heal.

    But what if the pain didn’t go away? What if even a breeze felt like fire, or your leg burned for no reason at all? When pain lingers without a clear cause, that’s neuropathic pain.

    We are neuroscientists who study how pain circuits in the brain and spinal cord change over time. Our work focuses on the molecules that quietly reshape how pain is felt and remembered.

    We didn’t fully grasp how different neuropathic pain was from injury-related pain until we began working in a lab studying it. Patients spoke of a phantom pain that haunted them daily – unseen, unexplained and life-altering.

    These conversations shifted our focus from symptoms to mechanisms. What causes this ghost pain to persist, and how can we intervene at the molecular level to change it?

    More than just physical pain

    Neuropathic pain stems from damage to or dysfunction in the nervous system itself. The system that was meant to detect pain becomes the source of it, like a fire alarm going off without a fire. Even a soft touch or breeze can feel unbearable.

    Neuropathic pain doesn’t just affect the body – it also alters the brain. Chronic pain of this nature often leads to depression, anxiety, social isolation and a deep sense of helplessness. It can make even the most routine tasks feel unbearable.

    About 10% of the U.S. population – tens of millions of people – experience neuropathic pain, and cases are rising as the population ages. Complications from diabetes, cancer treatments or spinal cord injuries can lead to this condition. Despite its prevalence, doctors often overlook neuropathic pain because its underlying biology is poorly understood.

    Neuropathic pain can be debilitating.
    Kate Wieser/Moment via Getty Images

    There’s also an economic cost to neuropathic pain. This condition contributes to billions of dollars in health care spending, missed workdays and lost productivity. In the search for relief, many turn to opioids, a path that, as seen from the opioid epidemic, can carry its own devastating consequences through addiction.

    GluD1: A quiet but crucial player

    Finding treatments for neuropathic pain requires answering several questions. Why does the nervous system misfire in this way? What exactly causes it to rewire in ways that increase pain sensitivity or create phantom sensations? And most urgently: Is there a way to reset the system?

    This is where our lab’s work and the story of a receptor called GluD1 comes in. Short for glutamate delta-1 receptor, this protein doesn’t usually make headlines. Scientists have long considered GluD1 a biochemical curiosity, part of the glutamate receptor family, but not known to function like its relatives that typically transmit electrical signals in the brain.

    Instead, GluD1 plays a different role. It helps organize synapses, the junctions where neurons connect. Think of it as a construction foreman: It doesn’t send messages itself, but directs where connections form and how strong they become.

    This organizing role is critical in shaping the way neural circuits develop and adapt, especially in regions involved in pain and emotion. Our lab’s research suggests that GluD1 acts as a molecular architect of pain circuits, particularly in conditions like neuropathic pain where those circuits misfire or rewire abnormally. In parts of the nervous system crucial for pain processing like the spinal cord and amygdala, GluD1 may shape how people experience pain physically and emotionally.

    Fixing the misfire

    Across our work, we found that disruptions to GluD1 activity are linked to persistent pain. Restoring GluD1 activity can reduce pain. The question is, how exactly does GluD1 reshape the pain experience?

    In our first study, we discovered that GluD1 doesn’t operate solo. It teams up with a protein called cerebellin-1 to form a structure that maintains constant communication between brain cells. This structure, called a trans-synaptic bridge, can be compared to a strong handshake between two neurons. It makes sure that pain signals are appropriately processed and filtered.

    But in chronic pain, the bridge between these proteins becomes unstable and starts to fall apart. The result is chaotic. Like a group chat where everyone is talking at once and nobody can be heard clearly, neurons start to misfire and overreact. This synaptic noise turns up the brain’s pain sensitivity, both physically and emotionally. It suggests that GluD1 isn’t just managing pain signals, but also may be shaping how those signals feel.

    What if we could restore that broken connection?

    This image highlights the presence of GluD1, in green and yellow, in a neuron of the central amygdala, in red.
    Pooja Shree Chettiar and Siddhesh Sabnis/Dravid Lab at Texas A&M University, CC BY-SA

    In our second study, we injected mice with cerebellin-1 and saw that it reactivated GluD1 activity, easing their chronic pain without producing any side effects. It helped the pain processing system work again without the sedative effects or disruptions to other nerve signals that are common with opioids. Rather than just numbing the body, reactivating GluD1 activity recalibrated how the brain processes pain.

    Of course, this research is still in the early stages, far from clinical trials. But the implications are exciting: GluD1 may offer a way to repair the pain processing network itself, with fewer side effects and less risk of addiction than current treatments.

    For millions living with chronic pain, this small, peculiar receptor may open the door to a new kind of relief: one that heals the system, not just masks its symptoms.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Neuropathic pain has no immediate cause – research on a brain receptor may help stop this hard-to-treat condition – https://theconversation.com/neuropathic-pain-has-no-immediate-cause-research-on-a-brain-receptor-may-help-stop-this-hard-to-treat-condition-256982

    MIL OSI – Global Reports

  • MIL-OSI Russia: SHU and Shandong Institute of Technology and Business agreed on cooperation

    Translation. Region: Russian Federal

    Source: State University of Management – Official website of the State –

    On June 23, a delegation from Shandong Institute of Technology and Business (SIITB) visited the State University of Management to sign a cooperation agreement.

    Rector of the State University of Management Vladimir Stroev, vice-rectors Maria Karelina and Dmitry Bryukhanov and director of the Institute of Marketing Gennady Azoev introduced the guests to the history of the university and the main areas in which cooperation is possible.

    “Our university has been training management personnel for various areas of the economy for over 100 years. We have both a humanitarian and a technical component of training. In addition, many students independently study Chinese, as they see more prospects in it than in English. GUU is actively developing cooperation with the People’s Republic of China: our university has a center for social, political and economic research in China, and last year we conducted an internship for 50 graduates of the presidential program for training management personnel in China,” Vladimir Stroev noted.

    Rector of SIITB Tao Hu spoke about the history and capabilities of his university, noting the presence of similar positions and interests:

    “Thank you for the invitation, you have a very beautiful university. We are pleased that the interaction between our countries and our universities is developing. Since 1985, the Shandong Institute has been training personnel, primarily in the field of economics. And we really value international cooperation. I am sure that we will be able to work well on joint projects.”

    The parties discussed the possibility of admitting GUU graduates to master’s programs at SIITB: “Business Management and Entrepreneurship”, “Applied Economics”, “Computer Science”, as well as admitting SIITB graduates to the GUU master’s program “International Marketing and Brand Management”.

    Another area of cooperation will be the exchange of teachers for teaching language and special courses and the implementation of scientific cooperation programs.

    At the end of the meeting, a ceremonial signing of a cooperation agreement on the issues outlined took place.

    Please note: This information is raw content directly from the source of the information. It is exactly what the source states and does not reflect the position of MIL-OSI or its clients.

    MIL OSI Russia News

  • MIL-OSI United Kingdom: Student entrepreneurs are flourishing at ARU

    Source: Anglia Ruskin University

    The Helmore building at ARU’s East Road campus in Cambridge

    Anglia Ruskin University (ARU) is one of the leading institutions for student start-up companies in the country, according to new data from the Higher Education Statistics Agency (HESA).

    A total of 123 ventures were formed by ARU students in the latest reporting period of 2023/24, placing Anglia Ruskin seventh in the UK and top across all universities in the East of England.

    ARU’s Anglia Ruskin Enterprise Academy helps entrepreneurial students and recent graduates through a diverse range of support programmes, activities, opportunities, and events.

    Last year, ARU became the first UK university to receive the prestigious Entrepreneurial University Award from the National Centre for Entrepreneurship in Education (NCEE).

    “At ARU we make every effort to help all our students discover and explore entrepreneurship, regardless of their background or what or where they might be studying. We aim to help them develop the mindset and skills to get them started on their own personal entrepreneurial journeys and career paths.

    “Starting your own business can seem daunting, but we are fortunate to have students full of ideas and ambitions. In return, we offer them the support and guidance they need to help turn their dreams into reality and make a difference.”

    Professor Gary Packham, Pro Vice Chancellor for Student Enterprise and Entrepreneurship at ARU

    Among the recent start-ups is The Community Classroom CIC, founded by Nirvana Yarger, a graduate from the Distance Learning MA Education with Montessori course. The social enterprise offers accessible and inclusive educational opportunities for home-educated children, helping families who need an alternative to mainstream education.

    “While teaching in a mainstream primary school, I always felt that the National Curriculum and mainstream school approach did not provide the best outcomes for many children.

    “I never lost my desire to be an educator. While completing my MA at ARU, I gained a deeper understanding of home education and the reasons families choose to deregister their children from school.

    “I was fortunate to be chosen for the ARU Social Value Fund and I learned the fundamentals of business planning, including forecasting and market research. I was eventually awarded a £5,000 grant to launch The Community Classroom. We would not be where we are today without ARU’s support.”

    Nirvana Yarger, who is a former teacher

    Cosmin Diaconu, based in Cambridge, founded sustainable fashion company RetroGusto after graduating from ARU, and has built a collaborative network, involving ARU graduates from various disciplines, including graphic design, interior design, and marketing, all united by their passion for sustainability and independent businesses.

    Cosmin’s participation in ARU’s ThinkBigARU pitching competition last year helped him secure valuable partnerships, and his work has since featured in publications such as Varsity, Velvet Magazine, and GAY45, reflecting his commitment to diverse representation in fashion.

    “The Anglia Ruskin Enterprise Academy gave me the support and tools to grow my business with more clarity and confidence.

    “The feedback from the pitch competition was invaluable, and their seminars offered practical insights from successful entrepreneurs that continue to shape how I develop my brand and practice.”

    BA (Hons) Fashion Design graduate Cosmin Diaconu

    MIL OSI United Kingdom

  • MIL-OSI Russia: Students of SPbGASU took part in the festival “T-Dvor”

    Translation. Region: Russian Federal

    Source: Saint Petersburg State University of Architecture and Civil Engineering – Participants from SPbGASU

    Students of the Faculty of Forensic Science and Law in Construction and Transport together with representatives of the Center for Student Entrepreneurship and Career of SPbGASU visited the youth festival “T-Dvor” organized by T-Bank on June 20. The event took place in the cultural space “Nikolskie Ryady” and was dedicated to career and educational opportunities for young people.

    The goal of the festival is to create an open platform for communication between students, young professionals and employers, where they can learn about labor market trends, new formats of training and personal growth.

    During the panel discussion, participants considered what modern education should look like and concluded that its main requirements are flexibility, accessibility and a practical orientation. In their view, successful career growth depends on being able to develop professionally without taking time away from work, which calls for expanding distance learning in master’s programs and other digital educational platforms.

    The lecture “Professions of the Future: Where Are You in a World That Has Not Been Built Yet” attracted great interest. The speakers talked about combining technical thinking and a humanitarian approach – the ability to work with data, understand technology and at the same time think critically and creatively. According to experts, it is precisely these specialists who will be especially in demand in the coming years.

    At the session “University vs. Work: How to Do It All,” participants learned how to effectively combine studies, part-time work, and personal life. Students especially remembered three pieces of advice from experts: it is necessary to plan not only tasks, but also rest; do not be afraid to ask for help – this is also part of professional growth; discipline is the basis of sustainable development, it can be “pumped up” just like muscles.

    “The T-Dvor festival has become an excellent opportunity for our students not only to get acquainted with new educational formats, but also to think about their professional future and the path to it,” noted Margarita Sapozhnikova, Deputy Dean of the Faculty of Forensic Expertise and Law in Construction and Transport for Career Guidance.

    Please note: This information is raw content directly from the source of the information. It is exactly what the source states and does not reflect the position of MIL-OSI or its clients.

    MIL OSI Russia News

  • MIL-OSI Russia: Presentation of Russian-language documentary prose “Chinese Seeds” held in China

    Translation. Region: Russian Federal

    Source: People’s Republic of China in Russian –

    Source: People’s Republic of China – State Council News

    BEIJING, June 23 (Xinhua) — The presentation of the Russian-language documentary prose “Chinese Seeds or How I Grew Wheat in Kazakhstan” took place in Beijing last week.

    The event was held as part of the 31st Beijing International Book Fair, which ended on Sunday in the Chinese capital, the Keji Ribao/Science and Technology Daily newspaper reported.

    The authors of the new book are Jin Min, chief correspondent of the Nongye Kejibao (Agricultural Science and Technology Newspaper), and Zhang Zhengmao, a leading researcher at the Northwest University of Agriculture and Forestry.

    The documentary prose “Chinese Seeds” details the cultivation of high-quality wheat varieties and the results of cooperation between researchers from the two countries, a vivid example of the shared aspirations of their peoples within the framework of the joint construction of the “Belt and Road”.

    “Chinese Seeds or How I Grew Wheat in Kazakhstan” was published in Chinese in March 2023. According to the plan, this book will also be published in English, Spanish, Vietnamese and Korean.

    The author of the book, Zhang Zhengmao, who was in Astana, presented to the participants of the presentation via video link the development of the Chinese-Kazakhstani project of the Research Center for Analysis and Testing of Grain Quality.

    The new book was published by Guangxi Kexuejishu Chubanshe (Guangxi Science and Technology) Publishing House. Its director, Cen Gang, said the publication of the book will further promote exchanges between China and Kazakhstan. -0-

    MIL OSI Russia News

  • MIL-OSI Russia: Exclusive: China and Kazakhstan open a new chapter in cooperation in the field of sustainable development technologies – President of the NAS of the Republic of Kazakhstan

    Translation. Region: Russian Federal

    Source: People’s Republic of China in Russian –

    Source: People’s Republic of China – State Council News

    Astana, June 23 (Xinhua) — China and Kazakhstan are opening a new chapter in cooperation in the field of sustainable development technologies, Akhilbek Kurishbayev, President of the National Academy of Sciences of the Republic of Kazakhstan (NAS RK) and Rector of the Kazakh National Agrarian Research University (KazNAIU), said in an interview with Xinhua.

    The Kazakhstan-China Center for Science and Technology Transfer, established in February 2025 at the National Academy of Sciences of the Republic of Kazakhstan jointly with the Zhejiang University of Technology and leading Chinese high-tech companies, opens a new page in the development of innovative partnership. Within its structure, the International Joint Laboratory of Spatio-Temporal Artificial Intelligence (AI) and Sustainable Development is being formed, which has already outlined priority areas at the launch stage.

    “A stable platform will be formed on the basis of the center, on which scientists from Kazakhstan, China and other countries of the Central Asian region will work according to a single program, with clearly defined goals and objectives, concentrating resources on conducting research and obtaining effective results, including adapting Chinese technologies to national conditions,” noted A. Kurishbayev.

    According to him, organizational and technical preparatory work is in full swing, and the laboratory will begin full-scale operations in the near future.

    “We have high hopes for the work of this center and its laboratory. I am sure that these hopes will be justified,” shared A. Kurishbayev. “The basis for this is our common desire for cooperation and the concentration of common scientific potential to solve a single problem,” he added.

    Speaking about his own contribution to the development of bilateral scientific cooperation, A. Kurishbayev recalled that since 2007, as Vice Minister of Agriculture of Kazakhstan, he took the most active part in establishing and developing mutually beneficial cooperation with China. The first steps in developing cooperation in the field of science and trade in agriculture were agreements on phytosanitary and veterinary safety.

    According to him, a lot of work has been done since then: joint laboratories have been created, internships have been organized, and the Alliance for Agricultural Education, Science, and Innovation in the Field of Great Silk Road Technologies has been formed.

    “I have been to China many times, visited leading research institutes and universities,” he shared. “The scale of development of artificial intelligence, smart cities, green technologies, genetics, as well as approaches to modeling natural disasters are impressive.”

    Kazakhstan, according to him, has prospects in such areas as digitalization of the agricultural sector, water technologies, natural resource management and sustainable development of rural areas – it is in these areas that deep and practice-oriented cooperation with Chinese scientific schools is possible.

    He also emphasized the importance of environmental partnership: “Our countries are located in a single ecosystem of the Central Asian region, and we are doomed not only to live here together, but also to bear responsibility for its preservation and improvement. Therefore, it is extremely important for us to search for new environmentally friendly technologies that allow us to move away from “dirty” production and take the path of “green” development and, on this basis, create conditions for a more comfortable life not only for the present, but also for future generations. This is our sacred duty, and we have no other way. We all understand this very well.”

    A. Kurishbayev also noted the deteriorating environmental situation in the world. According to him, the negative consequences will be felt especially strongly by the fragile ecosystem of Central Asia. “This process can only be stopped by joint efforts, based on the results of research by our scientific organizations. All this is in our hands. This requires not only our joint desires, but also our determination to implement them in practice,” concluded A. Kurishbayev. –0–

    MIL OSI Russia News

  • MIL-OSI USA: Nguyen’s Injectable Piezoelectric Gel Could Treat Osteoarthritis without Surgery

    Source: US State of Connecticut

    Millions of Americans suffer from osteoarthritis, a painful joint disease that wears down cartilage and can severely impact mobility. Pain medications only mask symptoms, and surgical options carry risks of infection and immune rejection.

    Thanh Nguyen examines a sample of piezoelectric nanofibers that will be used for the injectable hydrogel for cartilage regeneration. (Contributed photo)

    At the University of Connecticut, a research team led by Thanh Nguyen, associate professor of mechanical engineering and biomedical engineering, believes the future of joint repair might lie in a tiny electrical spark—and a simple injection.

    Backed by a $2.3M grant from the National Institutes of Health (NIH) and National Institute of Biomedical Imaging and Bioengineering (NIBIB), Nguyen and his team are developing an injectable hydrogel designed to stimulate cartilage regeneration in large animal models.

    “With current treatments, we’re managing the pain, not healing the tissue,” says Nguyen. “We’re hoping that the body’s own mechanical movements—like walking—can generate tiny electrical signals that encourage cartilage to grow back.”

    The innovation harnesses the body’s natural bioelectric signals to promote healing. The injectable gel contains a piezoelectric scaffold—a composite made from biodegradable poly-L-lactic acid (PLLA) nanofibers and magnesium oxide nanoparticles. When subjected to mechanical stress—such as joint movement or ultrasound—this scaffold generates small electrical charges.

    “By delivering [electrical] signals directly to damaged areas, the scaffold can stimulate cell activity and encourage the regeneration of strong, durable cartilage, particularly in high-load joints like the knees and hips.” — Thanh Nguyen, College of Engineering

    These mimic the body’s natural electrical cues that guide tissue development and repair.

    “By delivering these signals directly to damaged areas, the scaffold can stimulate cell activity and encourage the regeneration of strong, durable cartilage, particularly in high-load joints like the knees and hips,” Nguyen says. “This method also is cell-free and drug-free, a major advantage over traditional regenerative therapies that often require lab-grown stem cells.”

    The new grant-funded study, titled “Injectable Cell-Free Piezoelectric Scaffold to Treat Osteoarthritis in Large Animal Models,” will run through 2029. It’s based on two previous studies by Nguyen, his former postdoctoral fellow Yang Liu (now a professor at Peking University, China) and his former student Tra Vinikoor ’24 Ph.D. (now an advisor at the federal Food and Drug Administration). In these studies, the team injected the gel into the knees of rabbits with damaged cartilage, and within two months, saw re-formed, functional cartilage in the animals’ knees.

    Their work was published in the leading medical journals Science Translational Medicine and Nature Communications. (See previous UConn Today articles: Regrowing Cartilage in a Damaged Knee Gets Closer to Fixing Arthritis and Gel Repairs Cartilage Without Surgery, With Electricity)

    Nguyen’s team will spend the next four years testing the injectable gel’s effectiveness in large animal models. This is a key step before human clinical trials. (contributed photo)

    Over the next four years, Nguyen’s team will test the gel’s effectiveness in large animal models, a key step before human clinical trials. Along with four other active NIH Research Project (R01) grants funding Nguyen’s work with piezoelectric biomaterials, the group hopes this project will demonstrate that a single injection, followed by brief external ultrasound sessions, can significantly restore cartilage function in severe osteoarthritis cases.

    Nguyen’s research is highly interdisciplinary and at the interface of biomaterials, nano/micro-technology, and medicine. He credits the project’s progress to a “deeply collaborative” environment at UConn, where engineering and biomedical science intersect in innovative ways.

    The NIH/NIBIB grant is the fourth grant Nguyen received in FY25. Others include: “MAP Technology for Single-Admin and Co-Delivery of Polio and Other Vxs,” supported by a $4M grant from the Gates Foundation; “Bionic Self-Charged Bone Composite Scaffold,” supported by a $2.1M award from NIH/NIBIB; and “Advancing Multi-bNAbs Microneedle Patch Technology For HIV-1 Prevention in Breastfeeding Infants,” supported by a $1.5M grant from NIH/National Institute of Allergy and Infectious Diseases.

    In addition, Nguyen served as the Materials Research Society’s Early Career Distinguished Presenter at the organization’s meeting in 2025. He spoke about his work on “Current Advances of Biodegradable and Biocompatible Nanofiber-Based Materials for Tissue Engineering and Drug Delivery.”

    “We’re building hope for people who’ve been told their only option is a joint replacement,” he says.

    MIL OSI USA News