Category: Reports

  • MIL-OSI Global: DIY musicians: how digital ‘bedroom pop’ has transformed the music industry

    Source: The Conversation – UK – By Paul G. Oliver, Lecturer in Digital Innovation and Entrepreneurship, Edinburgh Napier University

    The ever-advancing technologies of our digital age have transformed many industries, including – and perhaps especially – music. One of the most significant shifts has been the rise of DIY artists. These independent musicians take on roles traditionally held by record labels and managers, such as producing, recording, promoting and distributing their music.

    The ubiquitous nature of digital platforms has enabled artists to reach their audiences more directly. According to a study by MIDiA Research, independent artists generated over US$1.2 billion (£900 million) in 2020, accounting for 5.1% of the global recorded music market, reflecting how digital transformations continue to reshape the music industry.

    The COVID pandemic further accelerated this process, forcing artists to find new ways to connect with their audiences when live performances were no longer possible. Many independent musicians turned to digital platforms as crucial tools to engage with their fans and generate income.

    Platforms such as TikTok, Twitch, Instagram Live, YouTube, Patreon and Bandcamp saw a surge in usage as artists adapted to the new reality, showcasing their music to a global audience and attracting new fans who might have never discovered them otherwise. These platforms became lifelines for visibility and growth when traditional avenues were shut down.

    As a lecturer in digital innovation and entrepreneurship, my work looks at the relationship between digital transformation and DIY culture in the music industry and how it is changing the game for fledgling musicians and the business end of music too.

    DIY and artistic integrity

    The DIY ethos, rooted in independence and resistance to mainstream commercialisation, has evolved very successfully in the digital domain. Historically associated with underground cultures, this ethos emphasises creativity, self-management and sustainability.

    DIY artists are often inspired by the punk movement, which championed autonomy and a do-it-yourself approach to music production and distribution. This ethos is now applied digitally, where artists use online platforms to stay independent while reaching a global audience, something that would simply not have been possible in more analogue times.

    One of the significant challenges DIY artists face is balancing artistic integrity with the ability to make a living. While digital platforms offer unprecedented opportunities for exposure and direct-to-fan (D2F) engagement, they also introduce new pressures and dependencies.

    For example, the algorithms that govern visibility on platforms like YouTube and Spotify can also be unpredictable, often favouring more commercial content over niche or experimental works, forcing artists to compromise their creative vision to achieve financial viability.

    While DIY artists are known for their self-sufficiency, some commercial artists have also adopted elements of the DIY approach, particularly in their use of digital platforms to bypass traditional industry structures.

    Being discovered and making money

    There are numerous success stories of DIY artists who have used digital platforms to build their careers commercially. For example, the British singer-songwriter Arlo Parks has gained significant recognition by blending personal experiences with broader social themes.

    Her success is a testament to the power of authenticity and the ability to connect with a diverse audience through digital platforms. Similarly, artists like Billie Eilish and (her brother) Finneas have shown how bedroom pop can achieve mainstream success, demonstrating the potential of DIY approaches in the digital age.

    Social media platforms play a vital role in the success of DIY artists by helping audiences discover new talent. Platforms like Instagram and TikTok are particularly effective for reaching younger audiences and creating viral content. TikTok, for example, has over 1 billion active users worldwide, and its algorithm can propel a song to viral status overnight – significantly boosting an artist’s visibility and reach.

    Subscription platforms like Patreon, Bandcamp and YouTube enable artists to make money from their work directly. These platforms allow fans to financially support their favourite artists, offering exclusive content, early access to new releases and other perks in exchange for a subscription fee. This D2F model helps artists generate a steady income, enabling them to focus more on their creative endeavours while maintaining a direct connection with their audience.

    Despite the vast opportunities digital platforms create, DIY artists face big challenges, for example, in terms of financial instability. A recent report by Help Musicians revealed that 98% of musicians are worried about rising costs in the UK. An inability to make a proper living has led many artists to seek alternative income sources, such as crowdfunding and exclusive content through subscription services like Patreon.

    However, the pressure to maintain a consistent online presence can also affect mental health – as One Direction’s Liam Payne spoke about in the months before his death – making it essential for artists to balance D2F engagement and personal wellbeing.

    DIY artist Clairo, who rose to fame through her self-produced online content, has also spoken of her struggles with the pressures of maintaining a public persona and the toll it can take on mental health.

    DIY communities operating within the digital domain thrive on mutual support and collaboration, with artists helping one another with production, promotion and distribution. This sense of community is crucial for maintaining the DIY ethos and managing the complexities of the digital domain.

    The future of music looks promising, with this intersection between DIY culture, creativity and digital platforms continuing to evolve and offer new opportunities for artists. The DIY music market grew by 7.6% between 2021 and 2024.

    However, for this growth to continue, these platforms must remain artist-friendly and provide fair compensation for creators. Independent musicians can thrive in the digital domain by embracing the DIY ethos and using digital platforms; the potential for global reach, D2F engagement and diversified income streams provides a robust foundation for sustainable careers.





    Paul G. Oliver does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. DIY musicians: how digital ‘bedroom pop’ has transformed the music industry – https://theconversation.com/diy-musicians-how-digital-bedroom-pop-has-transformed-the-music-industry-233364

    MIL OSI – Global Reports

  • MIL-OSI Global: Bank of Canada’s latest interest rate cut: Monetary policy is not enough to address economic issues on its own

    Source: The Conversation – Canada – By Sorin Rizeanu, Assistant Professor, Gustavson School of Business, University of Victoria

    The Canadian and American economies are deeply intertwined. With the United States Federal Reserve cautious amid mixed signals from the labour market and rising inflation worries, the Bank of Canada has just lowered its key interest rate to 3.75 per cent – cutting it by half a percentage point.

    Strong U.S. job growth and cooling inflation could result in a smaller Fed rate cut compared to its previous cut and to Canada’s recent cut. The Fed could also pause rate cuts entirely, which may change economic conditions in the U.S. and Canada in the months to come. Upcoming U.S. elections complicate the problem further.

    In Canada, cooling inflation, slowing manufacturing sales and more cautious consumer spending open the door to another half percentage point rate cut by the end of the year.

    But does the Bank of Canada have the ability to offset shifts in U.S. monetary policies through its own monetary instruments? In fact, how much room does it have to diverge from U.S. policy at all?

    Monetary conditions are transmitted from the world’s biggest financial centres to the rest of the world through gross credit flows and leverage. Any policy differences between Canada and the U.S. immediately impact Canada, including spillover effects on the loonie exchange rate and other widespread economic and social effects.

    Canada’s double trilemmas

    Canada’s key challenges include sustaining economic growth as a potential recession looms, taming inflation, addressing housing, managing interest rates while private and public debt is sky-high, and stabilizing Canada’s commodity-linked currency in an increasingly volatile geopolitical environment. Failing to address these challenges could lead to severe systemic imbalances.

    A country cannot have an independent monetary policy, stable exchange rate and free capital flows simultaneously. It must choose one side of this triangle and give up the opposite corner.
    (Sorin Rizeanu), CC BY-ND

    The Bank of Canada has good reasons to cut the interest rate back to 2.5 to 3.5 per cent, but this could have a significant impact on the loonie.

    Canada is facing two sets of trilemmas: a monetary one for the central bank and a fiscal one for the government. On the monetary side, stable exchange rates, independent monetary policy and financial market openness are three objectives that cannot all be achieved simultaneously. European countries have sacrificed monetary independence in exchange for a strong euro and financial openness.

    It’s impossible for policymakers to pursue all three choices at the same time. For instance, a country spending more without raising taxes has to increase public debt and deficit.
    (Sorin Rizeanu), CC BY-ND

    Canada, in contrast, has opted for free capital mobility and independent monetary policy at the expense of exchange rate stability. This allows the loonie to be determined by market forces, giving the central bank the ability to adjust interest rates while capital moves freely across the border.

    On the fiscal side, the government is grappling with climate change, immigration and wealth inequality. However, there is also strong public resistance to higher taxes, and public debt and deficits are currently at alarming levels.

    If the central banks are at odds

    If the Bank of Canada were to cut interest rates while the Fed doesn’t, the loonie would likely depreciate sharply, forcing a response. Such a divergence happened in June 2024, with the Fed following with a 0.5 per cent cut only in September.

    For such short-term deviations, sterilization is typically implemented to dampen the depreciation of the loonie by selling foreign reserves to acquire Canadian dollars.

    If the central banks were to remain at odds in the longer term, a decrease in money supply as investors flee would likely cause a decrease in domestic bank lending, which is already under pressure from public and private debt and increased default rates.

    This could increase longer-term interest rates and put additional pressure on the economy through the capital account. If investors believe the central bank is merely delaying the inevitable depreciation of its currency, it could also reinforce carry trade dynamics — an investment strategy where money is borrowed at a low cost in one currency to earn higher returns from investments in another currency.

    The bond market would also react, with notable effects in key economic sectors and asset valuation. Long-term interest rates tend to align more across countries than short-term rates, especially if global factors are influencing real rates or if investors are seeking safer assets.

    While the Bank of Canada can set its policy rate independently of the Fed’s rate, it has less control over long-term rates, which are tied to exchange rates and reflect expectations for future short-term rates and risk factors. Mortgage rates and corporate borrowing rates would be affected as well.

    Monetary policy can’t be the only answer

    The Bank of Canada’s mandate is to “keep inflation low, stable and predictable.” While this can be fulfilled through rate cuts, diverging from U.S. policy will have widespread effects on the Canadian economy. These impacts will be uneven, with indebted investors and banks likely benefiting while the working class may bear the brunt.

    The Bank of Canada focuses on providing liquidity to the financial sector, often with little regulation or oversight. However, this approach tends to overlook challenges faced by the working class. In 2022, for instance, Bank of Canada Governor Tiff Macklem advised against employers increasing wages to match inflation over concern that a wage-price spiral would occur.

    Even if the central bank wanted to address these issues, it’s limited in its ability to manage multiple objectives with just one instrument. As a result, the central bank should report not only on inflation, but also on the overall trade-offs of rate cuts.

    The Bank of Canada has a vested interest in tempering the effects of a new rate cut, especially since it could trigger a “capital famine” in the long term and weaken the Canadian dollar. In the short term, divergences from the U.S. will likely be manageable, but in the longer term, currency depreciation may be unavoidable to keep the economy afloat.

    Monetary policy is vital, but it’s merely the first line of defence against inflation. To truly address Canada’s economic issues, both monetary and fiscal policies need to work together in harmony, with a broader public discussion that goes beyond inflation.

    Sorin Rizeanu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Bank of Canada’s latest interest rate cut: Monetary policy is not enough to address economic issues on its own – https://theconversation.com/bank-of-canadas-latest-interest-rate-cut-monetary-policy-is-not-enough-to-address-economic-issues-on-its-own-238396


  • MIL-OSI Global: Harris and Trump differ widely on gun rights, death penalty and other civil liberties questions

    Source: The Conversation – USA – By Donovan A. Watts, Assistant Professor of Political Science, Auburn University

    The Bill of Rights secures key liberties for U.S. citizens against the government’s power. U.S. Congress via Wikimedia Commons

    As the election nears, voters are considering the two leading presidential candidates’ records on a wide range of issues, including civil liberties – a broad term used to describe the constitutionally protected freedoms that protect citizens from excessive government power. These key freedoms are contained in the Bill of Rights, the first 10 amendments to the U.S. Constitution. For example, the protection for free speech under the First Amendment and the right to bear arms under the Second Amendment define people’s abilities to criticize the government and own weapons for private use.

    As a scholar of American politics, I have seen that Kamala Harris and Donald Trump have very different records on these crucial American rights.

    First Amendment freedoms of speech and press

    As California’s attorney general, Harris indirectly found herself in a battle with the First Amendment. For many years, state law required nonprofit organizations registered in California to report names and addresses of donors of amounts over US$5,000 in a single year. In 2010, the year before Harris became attorney general, her predecessor began actually enforcing that law, which Harris continued when she took office in 2011. In 2014, several conservative groups sued Harris, saying her office’s enforcement of the law was violating their First Amendment right to give money anonymously.

    Part of Harris’ job was to oversee the defense of the law in court, arguing that the First Amendment did not bar donor disclosure requirements like California’s. The case lasted beyond her term as California’s top law enforcement officer: The U.S. Supreme Court declared parts of the law unconstitutional in 2021, after Harris had become vice president.

    While he was president, Trump’s First Amendment record was more about the media than free speech. He repeatedly declared the press “the enemy of the people.” He has suggested that media outlets that provide coverage he dislikes should lose their broadcasting licenses and has pressed to change libel laws in ways that would make it easier for public figures to file suit over unfavorable coverage.

    As California’s attorney general, Kamala Harris worked to reduce gun violence in the state.
    Kevork Djansezian/Getty Images

    Second Amendment right to bear arms

    Dating back to her tenure as a district attorney in San Francisco and as California’s attorney general, Harris has been an advocate for stricter gun control laws. However, she is not seeking to take away Americans’ guns – and recently revealed that she herself is a gun owner.

    When serving as district attorney in San Francisco, Harris worked with the city’s mayor at the time, Gavin Newsom, to develop some of the strictest local gun regulations in the country. In December 2004, Proposition H was placed on the ballot, and it passed by majority vote in November 2005. Proposition H banned possessing a handgun within San Francisco, with a few exceptions, and banned the purchase, possession, distribution and manufacture of all firearms in the city. However, the proposition was overruled by the San Francisco Superior Court, which said gun ownership should be regulated at the state level.

    And in 2008, as the U.S. Supreme Court was preparing to hear a key gun control case, Harris led 18 elected prosecutors who urged the justices that a broad right to gun ownership could endanger local and state firearm laws. In a 5-4 decision, the Supreme Court held that the Second Amendment guarantees an individual the right to possess firearms.

    However, the Supreme Court’s ruling did not stop Harris in her continued fight for gun regulation. She pushed for additional funding to confiscate guns from thousands of people whom California law said were banned from having them. Later, as a U.S. senator from 2017 to 2021, Harris continued to advocate for gun regulation by sponsoring bills that would have enacted universal background checks and banned assault rifles.

    During Harris’ term as vice president, she oversaw the White House Office of Gun Violence Prevention, which seeks to focus government attention on a wide range of policies to reduce gun violence, including restrictions on firearms, increased mental health services and new powers for prosecutors to use against people who use firearms when committing a crime.

    In 2019, while he was president, Donald Trump spoke to a National Rifle Association meeting and expressed support for the organization.
    AP Photo/Michael Conroy

    Trump’s record on firearms, meanwhile, has been mixed. As president, he signed legislation in 2017 that softened background check requirements for gun buyers with particular mental illness diagnoses. And during the COVID-19 pandemic, he objected to the fact that many local orders to close businesses to protect public health included shutting gun shops.

    Yet in 2018, he also moved to ban bump stocks – a device attached to a semiautomatic firearm that enables it to fire more rapidly. His ban was overturned by the Supreme Court in June 2024.

    Trump also supported and signed the Fix NICS Act, a bipartisan law that strengthened reporting to the federal gun background checks system by requiring federal agencies to submit semiannual certification reports to the attorney general on their compliance with recordkeeping and transmission requirements.

    Eighth Amendment protections against ‘cruel and unusual punishments’

    The Eighth Amendment’s protection against “cruel and unusual punishments” has often been used by the Supreme Court to evaluate uses of the death penalty.

    Harris has consistently pledged to refuse to seek the death penalty in criminal cases, noting a multitude of systemic flaws that result in its disproportional application based on defendants’ race and income. She also noted the cost to taxpayers of keeping prisoners on death row. Harris’ position was tested just months into her service as district attorney when a police officer was shot and killed in the line of duty in 2004. Harris declined to seek the death penalty for the shooter, who was convicted of murder and is serving a life sentence without the possibility of parole.

    While attorney general of California, however, she defended in court the state’s power to impose the death penalty. But when, in March 2019, the state’s governor – Newsom – declared a halt to executions, sparing all 737 people on California’s death row, Harris praised the action.

    Trump’s record on capital punishment dates back long before his political career. In 1989, he took out full-page newspaper ads calling for the return of the death penalty in New York. He specifically wanted it to be applied to the Central Park Five, five young Black and Hispanic men who were wrongly accused of raping and beating a woman. They pleaded not guilty but served years in prison before being exonerated by DNA evidence and the actual criminal’s confession.

    During his term as president, Trump resumed federal executions after a 17-year hiatus, executing 13 people in the last six months of his presidency, with the final execution coming just four days before his term ended.

    All in all, as voters decide whom to vote for in the upcoming election, analyzing both candidates’ records on civil liberties is a good step toward making an informed decision.

    Donovan A. Watts does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Harris and Trump differ widely on gun rights, death penalty and other civil liberties questions – https://theconversation.com/harris-and-trump-differ-widely-on-gun-rights-death-penalty-and-other-civil-liberties-questions-240762


  • MIL-OSI Global: Can Trump just order new names for Denali and the Gulf of Mexico? A geographer explains who decides what goes on the map

    Source: The Conversation – USA – By Innisfree McKinnon, Associate Professor of Geography, University of Wisconsin-Stout

    Known as Mount McKinley until 2015, Denali’s current name reflects what Native Alaskans call the mountain. Arterra/Universal Images Group via Getty Images

    President Donald Trump’s executive order to rename the Gulf of Mexico and Alaska’s Denali, the tallest peak in the country, has resulted in lots of discussion. While for some, such renaming might seem less important than the big problems the country faces, there is a formal process in the United States for renaming places, and that process is taken seriously.

    To avoid confusion, the government uses official, agreed-upon names. In the U.S., place names are standardized by the U.S. Board on Geographic Names, which is part of the U.S. Geological Survey, the agency in charge of making maps.

    In his executive order, Trump asks the Board on Geographic Names “to honor the contributions of visionary and patriotic Americans” and change its policies and procedures to reflect that.

    Usually, renaming a place starts locally. The people in the state or county propose a name change and gather support. The process in each state is different.

    Lake Bde Maka Ska, formerly Lake Calhoun, is the largest lake in Minneapolis.
    YinYang/E+ via Getty

    How to change a place name

    Minnesota recently changed the name of a large lake in Minneapolis to Bde Maka Ska, which the Minneapolis Park Board described as “a Dakota name for the lake that has been passed down in oral history for many years.”

    The board voted to change the name and took its request to the county commissioners. When the county agreed, the request was then sent to the Minnesota Department of Natural Resources, which made it official for Minnesota. Then, the state of Minnesota sent the request to the Board on Geographic Names, which made it official for the entire U.S.

    It’s a lot of paperwork for something so seemingly minor, but people get passionate about place names. It took 40 years to rename Denali from the name established in the late 19th century, Mount McKinley.

    The state of Alaska requested the name change in 1975, but the Board on Geographic Names didn’t take action. Members of the Ohio congressional delegation – President William McKinley was from Ohio – objected over many years to requests to rename the mountain, and the board did not act on those requests.

    The president appoints the secretary of the Interior Department. The secretary works with the heads of related agencies to appoint the Board on Geographic Names. Current committee policy states, “Input from State geographic names authorities, land management agencies, local governments, and Tribal Governments are actively pursued.”

    In 2015, President Barack Obama named a new leader for the Department of the Interior, Sally Jewell. Just as Obama made a trip to Alaska in late August 2015, Jewell declared the name change official under a law that allows the secretary of the Interior to change a name if the board doesn’t act on the proposal in a “reasonable” amount of time.

    “This name change recognizes the sacred status of Denali to many Alaska Natives,” Jewell said. “The name Denali has been official for use by the State of Alaska since 1975, but even more importantly, the mountain has been known as Denali for generations. With our own sense of reverence for this place, we are officially renaming the mountain Denali in recognition of the traditions of Alaska Natives and the strong support of the people of Alaska.”

    If someone objects to a name change, they could ask the courts to rule on whether the name change was made legally. Going back to Bde Maka Ska, some people objected to changing the name from Lake Calhoun, so they took the state natural resources agency to court. Eventually, the Minnesota Supreme Court ruled that the name change was done correctly.

    Alaska’s two U.S. senators and prominent state figures have strongly objected to Trump’s renaming attempt.

    How not to change a place name

    Renaming the Gulf of Mexico is a different kind of case, however, from renaming a geographic place within U.S. borders.

    The gulf is not within the territorial U.S. The first 12 nautical miles from a country’s shore are considered part of that country; beyond that are international waters.

    The Board on Geographic Names could change the name to Gulf of America on official U.S. maps, but there is no international board in charge of place names. Each country decides what to call places. And there is no official way for the U.S. to make other countries change the name.

    It’s possible that the U.S. could formally ask other countries to change the name, or even impose sanctions against countries that don’t comply.

    If the names were officially changed in the U.S., the government would use the new names in official documents, signage and maps. As for all the people and companies in the world that make maps, they usually use the official names. But there is nothing that would force them to, if they believed that a certain name is more widely recognized.

    Innisfree McKinnon does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Can Trump just order new names for Denali and the Gulf of Mexico? A geographer explains who decides what goes on the map – https://theconversation.com/can-trump-just-order-new-names-for-denali-and-the-gulf-of-mexico-a-geographer-explains-who-decides-what-goes-on-the-map-248112


  • MIL-OSI Global: Amid LA fires, neighbors helped each other survive – 60 years of research shows how local heroes are crucial to disaster response

    Source: The Conversation – USA – By Tricia Wachtendorf, Professor of Sociology and Director, Disaster Research Center, University of Delaware

    Neighbors fill and pass a bucket of pool water to help extinguish a spot fire in Pacific Palisades, Calif., on Jan. 9, 2025. Brian van der Brug / Los Angeles Times via Getty Images

    As wildfires swept through neighborhoods on the outskirts of Los Angeles in January 2025, stories about residents there helping their neighbors and total strangers began trickling out on social media.

    Accounts of Hollywood stars clearing streets for emergency vehicles to get through and raising money for fire victims were widely circulated. But there were many other examples of less-famous people helping older neighbors to safety, and even showing up with trailers to evacuate horses.

    Businesses, including fitness centers, opened their facilities so evacuees could shower or charge their phones. Organizations that routinely work with homeless populations quickly mobilized their members to help ensure people living on the streets and in camps could get to secure, safe locations away from the fires and hazardous air quality.

    Disasters, by definition, overwhelm local resources, making civilian responders like these essential. Sixty years of research at the University of Delaware’s Disaster Research Center, and by others examining the social aspects of disaster, has repeatedly shown that effective disaster management requires mobilizing community resources far beyond official channels.

    Often the help comes through local groups that form around a clear need in the community and draw on shared skills and interests. And this is exactly what we are witnessing in Los Angeles.

    Civilians helping often number in the thousands

    The number of those who step up to help during disasters varies by event, but it can be tremendous.

    Following the 1995 Oklahoma City bombing, over 6,800 volunteers worked with the Red Cross on the response. That same year, volunteers responding to the Kobe earthquake in Japan logged more than 1 million person-days of activity, a measure of the number of people times the hours they contributed.

    People use garden hoses to try to prevent homes from catching fire in Altadena, Calif., on Jan. 8, 2025. Neighbors rushed to help neighbors as the wind blew burning embers into neighborhoods.
    Mario Tama/Getty Images

    In an in-depth study of the Sept. 11, 2001, World Trade Center attacks, we interviewed local residents who used their retired fireboat to pump water for the firefighters at ground zero. Operators of tug, ferry and tour boats in and around New York City immediately responded to quickly evacuate 500,000 people in the area from danger. In fact, the majority of the boats involved belonged to private companies. Other volunteers queued evacuees and organized supplies and rides to get people home.

    Over 900 people, most acting in unofficial capacities, were awarded medals or ribbons for their efforts in just the marine response after the World Trade Center attack.

    A survey of residents after the 1985 Mexico City earthquake found that nearly 10% of local residents volunteered in the first three weeks of the response. Following the 1989 Loma Prieta earthquake, in California, a survey of residents in Santa Cruz and San Francisco counties found that two-thirds of the public were involved in response activities.

    Local businesses are often quick to help in disasters. Greg Dulan, center, who runs a soul food restaurant and food truck, hands out hot meals to wildfire evacuees at a church in Pasadena, Calif., on Jan. 15, 2025.
    Jason Armond/Los Angeles Times via Getty Images

    However, much of the work local residents contribute during and after disasters goes unaccounted for in official reports.

    There is no mechanism to quantify the full extent to which a neighbor or a complete stranger helps someone flee from peril. Yet when people are trapped and minutes count, research shows it is family, friends and neighbors who are already on the scene and are most likely to save lives. It’s often everyday citizens who also take on immediate tasks such as debris removal. Providing a phone, a car, a place to do laundry, or a little bit of elbow grease can fill a gap and let firefighters and other formal responders focus on critical operations.

    Getting the right help to where it’s needed

    Every study of a large-scale disaster conducted by the Disaster Research Center has revealed some level of emergent, informal helping behavior.

    The lack of public understanding about the large number of local residents already involved, often including disaster victims themselves, can lead to an influx of outsiders eager to help. Their arrival can actually pose challenges for the disaster response.

    When too many people show up, or when people try to operate outside their areas of expertise, they can put themselves and others at further risk. Communities often need supplies, but unsolicited goods of the wrong kind or at the wrong time can create more problems than they solve.

    Local groups such as the Pasadena Community Job Center organize volunteers to send them where help is requested. This group is removing debris from streets in Pasadena, Calif., in the wake of the Eaton Fire on Jan. 14, 2025.
    Zoë Meyers/AFP via Getty Images

    So, what can you do to best support these local efforts?

    Making a financial contribution to a trusted disaster response or local organization can go a long way to providing the support communities actually need. Organizations such as the American Red Cross or Feeding America, or local community-based groups that routinely work in the area, are often best suited to help where it’s needed the most.

    Skilled help will be needed for the long term

    Also, remember that disasters don’t end when the emergency is over. Survivors of the Los Angeles-area fires face years of confusing and frustrating recovery tasks ahead.

    Offering help after the immediate threat has passed – particularly skilled help, such as experience in construction or expertise in managing insurance and FEMA paperwork – is just as important.

    For example, after fires in 1970 destroyed hundreds of homes in the San Diego area, local architects, engineers and contractors donated their time and skills to help people rebuild. Their work was coordinated by a local architect and member of the Chamber of Commerce to ensure projects were assigned to reputable volunteers.

    As we recognize the important ways that neighbors and strangers helped those around them, the broader community can support wildfire victims by offering the right help as recovery needs emerge. Just about every skill that is useful in calm times will be needed in these difficult months and years ahead.

    Tricia Wachtendorf receives funding from the National Science Foundation and Arnold Ventures Foundation.

    James Kendra receives funding from the National Science Foundation and the Centers for Disease Control and Prevention.

    ref. Amid LA fires, neighbors helped each other survive – 60 years of research shows how local heroes are crucial to disaster response – https://theconversation.com/amid-la-fires-neighbors-helped-each-other-survive-60-years-of-research-shows-how-local-heroes-are-crucial-to-disaster-response-247660

    MIL OSI – Global Reports

  • MIL-OSI Global: The Brutalist: an architect’s take on a film about one man’s journey to realise his visionary building

    Source: The Conversation – UK – By Phevos Kallitsis, Associate Head Academic, School of Architecture Art and Design, University of Portsmouth

    For anyone involved in architecture, it’s no surprise that a film focusing on a visionary architect and his profession demands the epic dimensions of cinematography, drama and a running time of 215 minutes, as in Brady Corbet’s The Brutalist. This week the film was nominated in ten Oscar categories including best picture, best director and best actor.

    Despite architects being present in film from the early stages of cinema, architecture’s role in society has rarely been at the epicentre of the narrative.

    Notable exceptions are King Vidor’s The Fountainhead (1949), where the architect is a vessel for Ayn Rand’s hymn to individualism; Peter Greenaway’s The Belly of an Architect (1987), which looks at the political stance of architects; and last year’s Megalopolis, where the architect is the ultimate coordinator of everyday life. But I never felt these films grasped the reality of architecture’s complex obligations or the challenges beyond designing.


    The Brutalist tells the story of the fictional Hungarian architect László Tóth (Adrien Brody) who, after surviving the Holocaust and forced separation from his wife (Felicity Jones), emigrates to Philadelphia to work in the furniture shop of his prosperous cousin (Alessandro Nivola).

    Unexpectedly, Tóth is tasked with refurbishing the study of wealthy industrialist Harrison Van Buren (Guy Pearce) who, despite his initial negative reaction, hires him to design an enormous library in memory of his mother.

    In the process, Van Buren takes Tóth under his wing and helps him bring his wife to the US. The commission of the building is a joyous moment, but as the process of design and construction throws up challenges, the tension escalates.

    Epic films usually depict the rise and fall of their protagonist, but The Brutalist explores the interconnected fates of the architect and his buildings. Tóth is aware of what is at stake. Once at the top of his game in Hungary, he is ostracised for his modernism which is considered anti-German by the Nazis. He is also condemned for being a Jew.

    But Van Buren gives Tóth a second chance after a news story praises the building and he discovers the Hungarian’s previous work and his connection to the radical German Bauhaus movement.

    From that point onward, we might expect Tóth to have gained his client’s trust. But his joy at getting the authorities’ approval for the building is soon punctured by the obsessive Van Buren hiring consultants to check his work and keep tabs on the budget. Soon Tóth is beset by other problems as a railway accident delays the arrival of materials, causing a hiatus.

    Restarting the project is accompanied by constant concerns for health and safety and the pressures of any other potential delays. Tóth is also experiencing problems in his personal life, but Corbet and Mona Fastvold’s screenplay is driven by the challenges of realising his vision for this new groundbreaking building.

    The Brutalist demonstrates the intrinsic role the client plays and how the architect is beholden to them – in this case necessitating the negotiation of a tricky relationship with the demanding Van Buren. As Italian architect Aldo Rossi writes in his book The Architecture of the City, “the architecture that is going to be realised is always an expression of the dominant class”.

    And the dominant class wants things done their way. Tóth is even ready to sacrifice his fee to realise his vision. He needs the building to make a name for himself at a time when capitalism is producing unprecedented opportunities for architectural expression.

    It is the period about which American architect Philip C. Johnson proclaimed: “the battle for modern architecture has been won”. Think of Frank Lloyd Wright’s Johnson Wax tower, Ludwig Mies van der Rohe’s Lake Shore Drive Apartments or Eero Saarinen’s General Motors Technical Center to see how the US became the main proponent of this ambitious, expansive style.

    A memorable scene in the cavernous marble quarries of Carrara in Italy is both magnificent and ominous. The sheer scale that renders humans the size of ants underscores the clash between nature and power, in the level of extraction required for materials, and the exploitation of people and planet to satisfy the egos of two competing masculinities.

    In the past, “What does an architect do?” was a question I was often asked by clients who wanted me to justify my fee. It is a question I now ask my students, to reveal their own perceptions and values.

    Architecture is one of the three main fine arts of antiquity. However, beyond the artistry and the aesthetics, its role has been developing to meet the needs of its time. In a post-war world, architects were compelled to go beyond efficiency; they needed to create an identity and capture the public’s imagination, while creating buildings with market value.

    Architects take many aspects into consideration. Tóth draws beautifully, has knowledge of materials and technology, reads the landscape and understands the environment. He also manages the budget and has to promote himself in a world that mocks his accent and others him as a foreigner – architecture has a long way to go when it comes to inclusivity.

    US modernism is full of immigrant architects who either moved there very young, like Estonian Louis Kahn and Finn Eero Saarinen, or arrived to take up teaching positions, like Germans Walter Gropius and Mies van der Rohe after the closure of the Bauhaus.

    So The Brutalist needs its three and a half hours to tell the saga of an immigrant architect’s life and the long, arduous years it takes to complete a cherished project. As an architect in a digital era, I was left nostalgic for paper, charcoal drawings and physical models – and wishing that architects had a filmmaker’s power to complete the construction of a building like a speeded-up film montage.

    Phevos Kallitsis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The Brutalist: an architect’s take on a film about one man’s journey to realise his visionary building – https://theconversation.com/the-brutalist-an-architects-take-on-a-film-about-one-mans-journey-to-realise-his-visionary-building-248127

    MIL OSI – Global Reports

  • MIL-OSI Global: Wood burning stoves are a serious problem for your health – and the environment

    Source: The Conversation – UK – By Asit Kumar Mishra, Research Fellow in the School of Public Health, University College Cork

    Monkey Business Images/Shutterstock

    There is something cosy and appealing about settling down next to a roaring fire in winter but, every year, nearly 61,000 premature deaths in Europe are caused by air pollution as a result of people burning wood or coal to heat their homes.

    Wood-burning stoves are often considered safer, cleaner and more attractive than open fires. This may, in part, explain why from 2021 to 2022, sales of wood-burning stoves increased by 40% in the UK.

    However, burning wood is not necessarily a healthier or greener alternative to coal or gas for home heating.

    Wood burning produces a complex chemical mixture of fine particulate matter (PM2.5) and gases, which can be breathed deep into the lungs. The specific contents vary based on the type of stove and the type of fuel, but chemicals can include carbon monoxide, oxides of nitrogen and a range of volatile organic compounds, such as cancer-causing formaldehyde and benzene.

    Exposure to wood smoke affects the heart, blood vessels and the respiratory system – and PM2.5 is considered to be the biggest threat. Wood smoke increases the risk of heart attacks and strokes and can exacerbate chronic obstructive pulmonary disease (COPD) and asthma. Exposure to PM2.5 from wood burning can also cause premature death.

    Exposure to this pollution also leads to loss of work days, reduced productivity, higher expenses on healthcare and increased hospital admissions.

    The risks are higher for people over 65, children, pregnant women and people with existing heart or lung conditions. Chronic wood smoke inhalation has been associated with systemic inflammation, which can make the lungs more vulnerable to infections, such as flu and COVID.

    In the UK and Ireland, solid fuel heating is the main source of outdoor PM2.5 during wintertime. While wood is the dominant solid fuel in the UK, peat burning is regularly found to make the largest contribution to PM2.5 in Ireland.

    Under cold, stagnant weather conditions, air pollution, even in small rural towns, can be as high as that found in very polluted parts of north India.

    Exposure to outdoor air pollution caused by wood burning is an obvious health risk. But the pollution also finds its way into homes, worsening indoor air quality. And when lighting or refuelling a wood stove, large quantities of PM2.5 escape into the indoor air. Depending on how effective the home’s ventilation is, PM2.5 levels can take hours to fall.

    Looks aren’t everything

    In surveys carried out in Ireland and the UK, it was found that most people using solid fuel stoves did it for the aesthetics and the “homely feel”. The desire to save money or necessity came next.

    Most people who use indoor wood burning in London are in wealthier neighbourhoods, while those most affected by the consequent air pollution are in poorer areas.

    Educational campaigns regarding the effect of wood-burning stoves on health and the environment can be an important tool to reduce their usage. New initiatives, such as the Clean Air Night held in the UK and Ireland, are valuable in raising awareness and possibly changing long-term heating habits.

    Encouraging users to move to more efficient and renewable heating technologies like heat pumps can reduce emissions and harm to health. This move even works out to be cheaper, except for people who source their own wood.

    Communities can also be provided with information on their local air quality, allowing them to visualise real-time effects of their actions. For example, the PM2.5 sensor network map for Cork is freely accessible to the community and identifies locations and times when PM2.5 pollution is unhealthy.

    If you have a wood burner, you could check that the pollution levels aren’t too high before you fire it up.

    How to reduce emissions

    People who rely on solid fuel stoves as their only source of home heating can adopt the following measures to reduce emissions. Use low-emission labelled stoves that reduce pollution. When burning, keep fires small and hot, with enough air supply, and do not let the fire smoulder.

    Choose carefully what is burnt, in compliance with relevant regulations. Do not burn garbage, plastics, cardboard, treated or painted wood in your stoves. These items increase exposure to toxic pollutants.

    Ensure that stoves are installed by professionals and maintained annually. And, when lighting up or refuelling, make sure the room the stove is in is well ventilated: open windows, keep vents unblocked, and use exhaust fans or kitchen hoods for additional ventilation.

    People who use solid fuel stoves as a secondary source of heating could consider using the stove less or even stopping using it altogether. That really would be a breath of fresh air.

    Asit Kumar Mishra is a DOROTHY co-fund Fellow and Marie Skłodowska-Curie Fellow and receives funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101034345.

    John Wenger has previously received funding from several governmental organisations in Ireland for research into solid fuel burning, including the EPA and Irish Research Council.

    ref. Wood burning stoves are a serious problem for your health – and the environment – https://theconversation.com/wood-burning-stoves-are-a-serious-problem-for-your-health-and-the-environment-245737

    MIL OSI – Global Reports

  • MIL-OSI Global: Why peat is a key ingredient in whisky and the climate crisis

    Source: The Conversation – UK – By Toby Ann Halamka, Postdoctoral Researcher in Organic Geochemistry, School of Earth Sciences, University of Bristol

    Kondor83/Shutterstock

    Burnt. Smoky. Medicinal. Each of these represents a subcategory of “peaty” whisky in the Scotch Whisky Research Institute’s brightly coloured flavour wheel.

    A more chemistry-focused flavour wheel might include names like lignin phenols, aromatic hydrocarbons or nitrogen-containing heterocycles. Perhaps less appealing, but these chemicals define the flavours of Scotch whisky and represent just a few of the many types of organic carbon that are stored in peatlands.

    However, when peat is burned for the production of whisky, ancient carbon is released into the atmosphere. Approximately 80% of Scotch whisky is made using peat as a fuel source for drying barley during the malting process. The aromas of the burning peat, or “reek” as it is known in the industry, are steeped into the grains providing the intense smoky flavours associated with many Scotch whiskies.

    Historically, peat was a critical fuel resource for Scotland – a nation famously rich in peatlands with few trees for wood-burning. But as the industry has modernised, peat burning in whisky manufacturing has become less a story of adapting to resource limitations and more one of tradition and distinctive flavouring.

    There is little debate about the importance of peat burning in generating some of the most highly sought-after flavours in the world of whisky. Some enthusiasts identifying as “peat heads” track the parts per million (ppm) of peaty compounds in their favourite brands. The ppm measure represents phenol concentrations (a group of aromatic organic compounds) in the malted barley. But this does not represent how peaty your whisky will taste, as much of the flavour is lost in subsequent processes. Nor does the ppm represent how much peat was burned in production.

    Most of the peat that is extracted in Scotland is used in horticulture as compost to grow things like mushrooms, lettuce and houseplants. However, both the Scottish and UK governments are making efforts to reduce peat extraction for gardening needs.

    The Scotch whisky industry makes up about 1% of total peat use in Scotland. But, as horticulture practices change, this may represent a larger portion of peat use in the future.

    In 2023, the Scotch whisky industry outlined a long-term sustainability plan that expresses goodwill but lacks clearly defined goals towards peatland restoration.

    Policies that ban or limit the use of peat in certain industries have followed an increased awareness of how important peatlands are to locking carbon away instead of releasing it into our atmosphere. Despite making up only about 3% of Earth’s land surfaces, peatlands store more carbon than all the world’s forests.

    So, should you worry about the climate consequences of peat use in Scotch whisky?

    No matter how you slice it, harvesting peat is not good for the environment – and getting your hands on a nice dry slab of peat to extract those smoky flavours is no easy task. Peat is formed by waterlogged, oxygen-poor conditions that slow the natural breakdown process of plant material.

    While it is critical for healthy peatlands, excess water is not ideal for burning or transporting peat. Hence, peat extraction usually involves the extensive draining of peatlands. This halts the natural peat accumulation process and releases greenhouse gases from the now-degraded peats into the atmosphere.

    More than 80% of Scotland’s peatlands are degraded.

    Some recovery efforts are being made, and it has been suggested that the whisky industry could offset its peat degradation by investing in peat restoration. But peatland restoration is a long-term and imprecise solution that might take decades to properly assess, while existing peatlands are needed as a natural carbon sink now.

    Flavour innovations

    There are reasons for “peat heads” (both whisky fans and climate warriors) to feel optimistic about the future of this industry.

    For decades, the barley malting industry has focused on extracting the most flavour out of the least peat. Innovations in enhanced peat burning efficiency and investigations into peat flavouring alternatives are just some of the ways that the whisky industry is decreasing its peat footprint.

    Change in this sector takes time. Any innovations in whisky made today must age for at least three years before being ready for the “flavour wheel”. This delay underscores the urgency of developing new methods as it will take time to find the perfect eco-friendly recipe that compromises neither the taste nor tradition of Scotch whisky.

    In the meantime, whisky drinkers can seek out distilleries that are taking active steps to decrease their environmental impact and try drinking peat-free or peat-efficient whiskies.

    To continue celebrating the uniqueness of peat as a flavour in whisky, we need to better acknowledge the effect it has on peatland degradation and continue to advocate for positive changes in the industry.

    The story of peat use in Scotch whisky will continue to evolve. But while experimenting with future flavours, Scotland must preserve one of this nation’s most precious environmental resources.


    Toby Ann Halamka receives funding from the CERES (Climate, Energy and Carbon in Ancient Earth Systems) UKRI grant at the University of Bristol.

    Mike Vreeken does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why peat is a key ingredient in whisky and the climate crisis – https://theconversation.com/why-peat-is-a-key-ingredient-in-whisky-and-the-climate-crisis-245497

    MIL OSI – Global Reports

  • MIL-OSI Global: The Trumps want you to buy their meme coins, but history should make us cautious about the hype

    Source: The Conversation – UK – By Emmanuel Mogaji, Associate Professor in Marketing, Keele University

    Just before assuming office as the 47th president of the United States, Donald Trump introduced his meme coin – $Trump. The digital token attracted lots of attention, and a couple of days after its launch the combined value of the coins was nearly US$8.5 billion (£6.9 billion).

    Trump venturing into meme coins is perhaps not surprising, given his history of branding everything from sneakers to bibles. The first lady followed suit with a meme coin of her own ($Melania, which briefly outperformed her husband’s coin).

    History shows us that speculative hypes like this are not new. Hype can distort rational decision-making, with investors often neglecting due diligence and failing to ask the usual important questions of their investment.

    In 17th-century Netherlands, tulip bulbs became status symbols. Rare varieties could fetch six times a typical salary – until the bubble burst, leaving many financially devastated. Similarly, the South Sea Bubble of the 18th century saw the South Sea Company’s stock price skyrocket based on speculative frenzy (and a high-profile figurehead in King George I) before crashing back down. And the dotcom bubble of the early 2000s saw unproven tech startups achieve sky-high valuations on sheer optimism until the inevitable crash.

    The rise of meme coins, including the Trump ventures, bears similarities to the frenzy surrounding these past phenomena. They are driven by hype, the perception of scarcity and the promise of high returns. These factors can inflate the value irrationally and lead to significant financial risks for those who invest.

    Meme coins thrive on the power of hype. Prominent figures like Trump and viral sensations such as internet star Haliey Welch’s failed cryptocurrency have the power to generate enormous buzz. Like the tulip mania of the 1600s, these digital tokens don’t hold any intrinsic value but instead rely on public sentiment to drive prices up. The hype can quickly make them seem indispensable and highly valuable, even though they have no physical existence.

    The ease of access to meme coins also boosts their popularity. People can buy them online using simple apps or websites – much like shopping for any other product – without the need for a broker or intermediary. This autonomy appeals to modern investors, allowing them to manage their assets from the comfort of their homes. However, the simplicity and convenience often mask the high risks involved.

    Social media amplifies the excitement surrounding meme coins, creating a community vibe that fuels their popularity. The constant buzz on platforms and among influencers generates Fomo (fear of missing out), pressuring people to join the bandwagon in pursuit of the potential gains. But this rush can lead to ill-informed decisions.

    Meme coins are seen as opportunities for quick and substantial profits – an anonymous buyer (the so-called Lucky Crypto Trader) reportedly made US$100 million within hours on Trump’s coin. But these successes are rare and unpredictable. For most consumers, investing in meme coins is like gambling, with no guarantees of returns and a high likelihood of losses.

    Is it ethical?

    As a researcher in financial services marketing and fintech, I focus on the ethical and financial implications of meme coins.

    Cryptocurrencies remain largely unregulated, leaving investors without protection. So the influence of prominent figures like the Trumps hyping these assets raises questions of accountability and fairness. This lack of oversight puts inexperienced consumers at significant financial risk, which only serves to underline the need for caution.

    The parallels with past speculative bubbles offer valuable lessons. From tulip mania to the dotcom bust, history shows us the dangers of unchecked hype and speculative investments. Consumers should learn from these events to avoid repeating the same mistakes in the cryptocurrency era. There are some basic principles would-be buyers should bear in mind.

    To navigate the risks associated with meme coins and cryptocurrencies, consumers should find out more about the technology and become more aware of the trends and performance of the coins. Managing expectations is crucial; speculative investments are unpredictable and the hype can die away quickly. Diversifying investments rather than concentrating all funds in one asset or market can spread risk and provide greater financial stability.

    Education is equally important – taking the time to read the fine print on investment opportunities, such as Trump’s coin disclaimer that it is not an investment vehicle, is essential to understanding the true nature of these assets.

    Trump’s venture into meme coins is the latest in a long history of speculative financial trends, and he will probably not be the last to capitalise on this craze.

    But until regulatory frameworks catch up, consumers should tread carefully, ensuring that their pursuit of profits does not come at the expense of their financial security.

    Emmanuel Mogaji does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The Trumps want you to buy their meme coins, but history should make us cautious about the hype – https://theconversation.com/the-trumps-want-you-to-buy-their-meme-coins-but-history-should-make-us-cautious-about-the-hype-248057

    MIL OSI – Global Reports

  • MIL-OSI Global: Almost 2 million people in the UK didn’t have the right ID to vote in 2024

    Source: The Conversation – UK – By Ralph Scott, Leverhulme Early Career Fellow in Politics, University of Bristol

    The 2024 general election was the first in the UK’s history to be run under a system of voter ID. When heading to the polling station, people could only vote if they proved their identity first. This was the result of a law introduced in 2023, which had already applied to local elections in England that year.

    Using data from the British Election Study, we tracked people eligible to vote between 2023 and 2024 and found that 5% of people eligible to vote – nearly 2 million people – didn’t own any recognised voter identification. This lack of ID was concentrated among poorer and less educated voters.

    Of course, lacking photographic ID is not necessarily a permanent state. Some people will have been in the process of renewing passports and driving licences during this period. All of these people would also have been eligible for a voter authority certificate, a form of identification brought in with the new law – although we found take-up of these was low.

    We found that around 0.5% of all voters reported being turned away at polling stations as a result of lacking ID in the local elections of 2023. We also found that four times as many people (around 2%) reported not voting because they knew they didn’t have the right ID.

    The equivalent figures were slightly lower at the general election of 2024, but a meaningful contingent still did not participate. Around 1.3% of electors – or over half a million people – were turned away or didn’t show up at all because of voter identification requirements.

    While administrative records can provide accurate numbers about how many people were turned away at the polling station, they tell us little about people who were discouraged from even trying to vote because they didn’t have the right ID. So it is clear from our analysis that the impact of voter ID on turnout is likely larger than previous estimates based on polling station returns.

    Who benefits?

    We also found that the Conservatives were more likely to benefit from the voter ID law than other parties.

    This is not surprising when we consider demographic factors. As our research shows, Conservative voters are more likely to own ID, because they are more likely to be older and more affluent. Despite changes in social patterns of party support since the 2016 Brexit referendum, this pattern still holds true.

    The types of identification which are allowed under the new law – and especially the decision to allow older people but not younger people to use travel passes – exacerbate these differences.

    Who didn’t have ID?

    Percentage of party supporters (general election vote intention) without photo ID, May 2023 (lighter column) and 2024 (darker column)
    British Election Study, CC BY-ND

    The chart above shows the percentage lacking photo ID by general election vote intention, as measured in May 2023 (lighter bars) and May 2024 (darker bars), shortly before the general election was called.

    In 2024, only 2.4% of Conservative supporters lacked photo ID, compared with 3.8% of Labour supporters and 4.1% of Reform supporters.

    One notable difference is an increase in Liberal Democrats and non-voters with no photo identification in 2024, although this is almost entirely due to a change in the number of people supporting the Liberal Democrats or deciding not to vote rather than changes in people’s actual ownership of ID.

    Liberal Democrat supporters had the lowest rate of missing photo ID in 2023 (1.3%), but by 2024 their rate (2.9%) exceeded that of the Conservatives.

    There are still opportunities to mitigate the risks posed by voter ID. Ahead of the next election, the new government should extend the forms of identification allowed (especially for those younger than state pension age).

    Improving public awareness of the law and of the availability of voter authority certificates is another important step. Allowing people to vouch for others who don’t have voter ID has also been suggested as an option.

    In an electorate of 49 million, if almost two million aren’t able to vote because they don’t have the right ID, there is a problem. Those interested in building trust in our democracy should consider not only minimising electoral fraud but also reducing this number as much as possible.

    Ralph Scott receives funding from the Leverhulme Trust and has previously received funding from the Economic and Social Research Council.

    Ed Fieldhouse receives funding from the Economic and Social Research Council.

    ref. Almost 2 million people in the UK didn’t have the right ID to vote in 2024 – https://theconversation.com/almost-2-million-people-in-the-uk-didnt-have-the-right-id-to-vote-in-2024-246270

    MIL OSI – Global Reports

  • MIL-OSI Global: Amid LA fires, neighbors helped each other survive – 60 years of research shows local heroes are crucial to disaster response

    Source: The Conversation – USA – By Tricia Wachtendorf, Professor of Sociology and Director, Disaster Research Center, University of Delaware

    Neighbors fill and pass a bucket of pool water to help extinguish a spot fire in Pacific Palisades, Calif., on Jan. 9, 2025. Brian van der Brug / Los Angeles Times via Getty Images

    As wildfires swept through neighborhoods on the outskirts of Los Angeles in January 2025, stories about residents there helping their neighbors and total strangers began trickling out on social media.

    Accounts of Hollywood stars clearing streets for emergency vehicles to get through and raising money for fire victims were widely circulated. But there were many other examples of less-famous people helping older neighbors to safety, and even showing up with trailers to evacuate horses.

    Businesses, including fitness centers, opened their facilities so evacuees could shower or charge their phones. Organizations that routinely work with homeless populations quickly mobilized their members to help ensure people living on the streets and in camps could get to secure, safe locations away from the fires and hazardous air quality.

    Disasters, by definition, overwhelm local resources, making civilian responders like these essential. Sixty years of research at the University of Delaware’s Disaster Research Center and by others examining the social aspects of disaster has repeatedly shown effective disaster management requires mobilizing community resources far beyond official channels.

    Often the response happens through local groups that form in response to a clear need in the community and with shared skills and interests. And this is exactly what we are witnessing in Los Angeles.

    Civilians helping often number in the thousands

    The number of those who step up to help during disasters varies by event, but it can be tremendous.

    Following the 1995 Oklahoma City bombing, over 6,800 volunteers worked with the Red Cross on the response. That same year, volunteers responding to the Kobe earthquake in Japan logged more than 1 million person-days of activity, a measure of the number of people multiplied by the days they contributed.

    People use garden hoses to try to prevent homes from catching fire in Altadena, Calif., on Jan. 8, 2025. Neighbors rushed to help neighbors as the wind blew burning embers into neighborhoods.
    Mario Tama/Getty Images

    In an in-depth study of the Sept. 11, 2001, World Trade Center attacks, we interviewed local residents who used their retired fireboat to pump water for the firefighters at ground zero. Operators of tug, ferry and tour boats in and around New York City responded immediately, quickly evacuating 500,000 people in the area from danger. In fact, the majority of the boats involved belonged to private companies. Other volunteers organized evacuees into lines and arranged supplies and rides to get people home.

    Over 900 people, most acting in unofficial capacities, were awarded medals or ribbons for their efforts in just the marine response after the World Trade Center attack.

    A survey of residents after the 1985 Mexico City earthquake found that nearly 10% of local residents volunteered in the first three weeks of the response. Following the 1989 Loma Prieta earthquake, in California, a survey of residents in Santa Cruz and San Francisco counties found that two-thirds of the public were involved in response activities.

    Local businesses are often quick to help in disasters. Greg Dulan, center, who runs a soul food restaurant and food truck, hands out hot meals to wildfire evacuees at a church in Pasadena, Calif., on Jan. 15, 2025.
    Jason Armond/Los Angeles Times via Getty Images

    However, much of the work local residents contribute during and after disasters goes unaccounted for in official reports.

    There is no mechanism to quantify the full extent to which a neighbor or a complete stranger helps someone flee from peril. Yet when people are trapped and minutes count, research shows it is family, friends and neighbors who are already on the scene and are most likely to save lives. It’s often everyday citizens who also take on immediate tasks such as debris removal. Providing a phone, a car, a place to do laundry, or a little bit of elbow grease can fill a gap and let firefighters and other formal responders focus on critical operations.

    Getting the right help to where it’s needed

    Every study of a large-scale disaster conducted by the Disaster Research Center has revealed some level of emergent, informal helping behavior.

    The lack of public understanding about the large number of local residents already involved, often including disaster victims themselves, can lead to an influx of outsiders eager to help. Their arrival can actually pose challenges for the disaster response.

    When too many people show up, or when people try to operate outside their areas of expertise, they can put themselves and others at further risk. Communities often need supplies, but unsolicited goods of the wrong kind or at the wrong time can create more problems than they solve.

    Local groups such as the Pasadena Community Job Center organize volunteers to send them where help is requested. This group is removing debris from streets in Pasadena, Calif., in the wake of the Eaton Fire on Jan. 14, 2025.
    Zoë Meyers/AFP via Getty Images

    So, what can you do to best support these local efforts?

    Making a financial contribution to a trusted disaster response or local organization can go a long way to providing the support communities actually need. Organizations such as the American Red Cross or Feeding America, or local community-based groups that routinely work in the area, are often best suited to help where it’s needed the most.

    Skilled help will be needed for the long term

    Also, remember that disasters don’t end when the emergency is over. Survivors of the Los Angeles-area fires face years of confusing and frustrating recovery tasks ahead.

    Offering help after the immediate threat has passed – particularly skilled help, such as experience in construction or expertise in managing insurance and FEMA paperwork – is just as important.

    For example, after fires in 1970 destroyed hundreds of homes in the San Diego area, local architects, engineers and contractors donated their time and skills to help people rebuild. Their work was coordinated by a local architect and member of the Chamber of Commerce to ensure projects were assigned to reputable volunteers.

    As we recognize the important ways that neighbors and strangers helped those around them, the broader community can support wildfire victims by offering the right help as recovery needs emerge. Just about every skill that is useful in calm times will be needed in these difficult months and years ahead.

    Tricia Wachtendorf receives funding from the National Science Foundation and Arnold Ventures Foundation.

    James Kendra receives funding from the National Science Foundation and the Centers for Disease Control and Prevention.

    ref. Amid LA fires, neighbors helped each other survive – 60 years of research shows local heroes are crucial to disaster response – https://theconversation.com/amid-la-fires-neighbors-helped-each-other-survive-60-years-of-research-shows-local-heroes-are-crucial-to-disaster-response-247660


  • MIL-OSI Global: Political assassinations, police violence and lack of press freedom: 3 barriers to peace in Mozambique

    Source: The Conversation – Africa – By Corinna Jentzsch, Assistant Professor of International Relations, Leiden University

    Mozambique’s parliament and its new president, Daniel Chapo, were sworn in in mid-January 2025 after a tumultuous post-election period of protests, barricades and police violence.

    The 9 October 2024 elections prompted countless reports of fraud, leading the European Union election observer mission to note

    irregularities during counting of votes and unjustified alteration of election results.

    Based on this, and other accounts of fraud, the opposition candidate Venâncio Mondlane claimed to have won the elections and coordinated several weeks of protests across the country.

    These were met with a harsh police response. Over 4,200 people were reportedly arrested, 730 shot and 300 killed with live ammunition between 21 October 2024 and 16 January 2025.

    After spending several weeks abroad, Mondlane returned to Mozambique on 9 January to join ongoing political talks between the government and opposition parties.




    Read more:
    Mozambique’s deadly protests: how the country got here


    How can Mozambique move forward?

    Getting out of its political crisis will not be easy. It will require the party in power, Frelimo, to fundamentally change how it deals with disagreement and discontent. Buying off political opposition elites, as has been done in the past, will not calm this political storm.

    Based on my research into political violence, I suggest that the cycles of violence in the country can only be broken if the new president addresses three issues related to state repression. He needs to do this in dialogue with opposition forces to earn trust and public support for the new government.

    The three issues are:

    • putting an end to violence perpetrated by the police and army

    • ending political assassinations and ensuring accountability for those that have already taken place

    • protecting media freedom and ending violence against journalists.

    No more blind eye to police (and army) violence

    Human rights experts urged the government in November 2024 to end the post-election violence and allow thorough investigations. Experts appointed by the UN Human Rights Council expressed concerns about

    violations of the right to life, including of a child, deliberate killings of unarmed protesters and the excessive use of force by the police deployed to disperse peaceful protests.

    Such extensive repression has been a common response by the Mozambican security forces over the past years, with severe consequences for the evolution of conflict. For example, state repression has been a major contributor to armed conflict in the northern province of Cabo Delgado, where an Islamist insurgency has been raging since 2017. Victims of violence by security forces are an important source of recruits for the insurgency.

    Accountability for political assassinations

    Mozambique has suffered from targeted killings of political opposition figures. The most recent, high-profile political assassinations took place after the elections in October. Elvino Dias, Mondlane’s lawyer, and Paulo Guambe, an official of Podemos, the political party that supported Mondlane’s run for president, were shot dead in Maputo by unknown gunmen.

    Dias was preparing a court case challenging the election results.

    Mozambique has a long history of such political assassinations. These have rarely been investigated and no one has been held accountable. The government and police regularly deny any involvement, and people have come to speak of “death squads” seeking to intimidate the political opposition and civil society.

    Freedom of the press and civil society

    The ability of the press in Mozambique to hold people accountable for their actions has been severely constrained. Reporting on and investigating those involved in state-sanctioned violence has long been a challenge.

    In its annual report for 2023 the Media Institute of Southern Africa documented the extent to which journalists had been intimidated and attacked. It reported that such incidents increased during election periods.

    This was indeed the case in the 2024 pre-election period. Journalists faced arrests when, for example, reporting on police trying to disrupt opposition parties’ events.

    Mozambique enjoys a diverse media landscape, including multiple private and local media outlets. Nevertheless, press freedom has been curtailed. An example has been the treatment of journalists investigating the armed conflict in Cabo Delgado. Soon after the conflict began in October 2017, the government barred journalists from visiting the province, and many of those who reported anyway were detained and held for extended periods or arrested on unsubstantiated charges.




    Read more:
    Mozambique’s long struggle to build a nation – four novels that tell the story


    The case of Amade Abubacar made headlines in 2019 when he was detained and held for 13 days in military barracks without access to a lawyer. He was then charged with “violation of state secrets” and “public instigation to crime”.

    What Abubacar did was report on the insurgency. Since then, the situation has got worse for the media. Last year, the Cabo Delgado governor Valige Tauabo accused unnamed journalists of colluding with the insurgents.

    As I was writing this, news reached me that Arlindo Chissale, a journalist and political activist from Nacala, had been arrested, tortured and killed by the “death squads” mentioned earlier on 7 January 2025. Arlindo worked with me on researching the conflict in Cabo Delgado.

    Freedom of the press is important to hold the new government accountable for the promises it has made to the Mozambican people.

    The way forward

    Chapo delivered a well-crafted inauguration address on 15 January. It was well crafted because, as some analysts commented, he incorporated many of the policies being advocated by Mondlane.




    Read more:
    Venâncio Mondlane is Mozambique’s political challenger: what he stands for


    He said in his speech that he had heard what the protesters were telling him during the demonstrations. And he promised to promote unity, human rights and political dialogue to (re-)create social and political stability.

    Chapo is also aware of the waves being made by Mondlane, who has recognised the political power of mobilising people around the issue of police violence. On his return to Mozambique, Mondlane presented the government with a list of demands to be implemented in the first 100 days of the new government. The first was that steps needed to be taken to stop the violence against the population.

    Since his return he has also met victims of violence at the hands of the police and army.

    The challenge is that Chapo’s party, Frelimo, which has been in power since independence in 1975, is strong and can severely curtail the president’s ability to introduce relevant reforms.




    Read more:
    Mozambique’s cycles of violence won’t end until Frelimo’s grip on power is broken


    It’s therefore far from clear whether Chapo can pursue any of his suggested policy goals.

    Dialogue with Mondlane is necessary. But if this leads to another “elite bargain” that might get him a cabinet position but does not benefit the common people, Mozambicans will not calm down. Any agreement must address the lack of accountability for police violence, stop political assassinations, and allow journalists to investigate political violence.

    Corinna Jentzsch has received research funding from the Dutch Research Council (NWO).

    ref. Political assassinations, police violence and lack of press freedom: 3 barriers to peace in Mozambique – https://theconversation.com/political-assassinations-police-violence-and-lack-of-press-freedom-3-barriers-to-peace-in-mozambique-248153


  • MIL-OSI Global: Seizure of Sally Mann’s photographs in Texas revives old debates about obscenity and freedom of expression

    Source: The Conversation – USA – By Amy Werbel, Professor of the History of Art, Fashion Institute of Technology (FIT)

    Photographer Sally Mann poses with her dog in 2004. Michael Williamson/The Washington Post via Getty Images

    Four photographs by celebrated artist Sally Mann were recently removed from the walls of an exhibition at the Modern Art Museum of Fort Worth at the behest of local Republican officials, who claimed they constituted child pornography. The Fort Worth Police Department is now investigating the allegation.

    Those photographs – taken more than 30 years ago – feature Mann’s children posing in the nude on the family’s isolated farm in rural Virginia. They were included in an exhibition titled “Diaries of Home,” which also featured images by renowned photographers LaToya Ruby Frazier, Nan Goldin and Catherine Opie, among others.

    One of the seized photographs depicts her son’s naked torso dripping with a melted popsicle, suggesting the innocence and messiness of childhood. In another, Mann’s naked daughter tiptoes across a tabletop, evoking both her strength and vulnerability.

    For decades, these works have elicited admiration and, yes, condemnation.

    I’m an art historian, and my most recent book documents the rise of art censorship following passage of the nation’s first federal anti-obscenity law in 1873, which became known as the Comstock Act after its chief lobbyist, the Christian evangelical activist Anthony Comstock.

    Today, the Comstock Act is in the news mostly because it prohibits the mailing of abortion medication, which was considered a form of obscenity alongside erotic images, sculptures and sex toys. But in the law’s early years, it was used to confiscate vast quantities of art and literature deemed lewd, obscene or erotic. Though this form of censorship has since been deemed unconstitutional by various U.S. Supreme Court decisions, debates over what constitutes obscenity, child pornography and artistic expression persist.

    To me, the events surrounding the removal of Mann’s photographs echo those of a censorious past.

    Evangelical underpinnings

    Throughout Comstock’s career, evangelical Christians served as the most fervent supporters of his work; they were behind the creation of the New York Society for the Suppression of Vice, which funded his investigations.

    Anthony Comstock.
    Bettmann/Getty Images

    Comstock’s censorship campaigns varied. Sometimes he went after nude drawings, paintings and sculptures. But even relatively tame photographs of actresses wearing tights attracted his ire.

    In Fort Worth, objections originated from local Christian activists and organizations. Chief among them was the Danbury Institute, which penned an open letter to the Fort Worth museum, accusing Mann’s photographs of “normalizing pedophilia” and the exhibition more generally of promoting “the breakdown of the God-ordained definition of family” through its depiction of LGBTQ parents. In its mission statement, the institute declares that “Scripture is authoritative, inerrant, infallible, and sufficient.”

    Comstock similarly believed that “God’s Law” ought to be the guiding standard for American jurisprudence. To justify seizing and destroying an enormous array of images and objects during his 43-year career, Comstock often claimed to be battling Satan.

    His efforts were broadly popular when it came to the sexually explicit images that tended to circulate in bars and saloons. But he eventually ran afoul of Americans’ more liberal and pluralistic attitudes when he targeted art and popular culture.

    Courts expand freedom of expression

    Over the course of the 20th century, the Comstock Act lost most of its teeth.

    Judges and juries increasingly upheld civil liberties claims in cases concerning freedom of expression, vastly expanding the scope of the First Amendment.

    In 1973, the Supreme Court established the current three-part “test” for obscenity. The final prong of that test dictates that a work is not obscene if it has “serious literary, artistic, political or scientific value.”

    In my view, there’s no credible claim that Mann’s long-celebrated photographs do not have serious artistic value.

    Following the removal of Mann’s photographs, arts advocates were quick to point out that the seized images are featured on prominent museum websites around the country. The National Coalition Against Censorship and the Artists at Risk organization issued strong statements in support of the exhibition of Mann’s photographs.

    Sally Mann’s ‘Holding the Weasel’ on display at a Sotheby’s press preview in 2008.
    Timothy A. Clary/AFP via Getty Images

    The Fort Worth sheriff’s office, which is holding the images, is reportedly evaluating whether they violate Texas’ child pornography statute. But because Mann’s photographs do not depict any sexual acts, the only phrase in this state law that could be deemed relevant is “lewd exhibition,” with “lewd” defined as an intent to stimulate sexual desire.

    Here, context is key. As one critic of the removal of Mann’s works pointed out, “Most everyone reading this can easily make a distinction between going to a museum and opening Pornhub.”

    By selectively removing a few of Mann’s photographs from the exhibition and suggesting they may be child pornography, Texas officials stripped them of their context as works of art. In doing so, they introduced the photographs to an audience that would never have seen them in an art museum but that now may search for them online with prurient intent.

    Once again, I can’t help but see a connection to Comstock’s crusades. His efforts backfired, to a degree, in that the targets of his ire, from student drawings of nude models to birth control literature, ended up getting more publicity than they otherwise would have.

    Curators also play a role

    Despite legal protections, curators are still sensitive to how works of art may offend viewers and have developed a set of practices to accommodate these sensitivities.

    Three years ago, I interviewed curators and directors in academic art museums and galleries across the country as a fellow at the University of California National Center for Free Speech and Civic Engagement.

    My research focused on how museum professionals deal with the exhibition of potentially controversial artwork. They spoke to me about a variety of best practices. For example, prominently displayed content warnings allow viewers to choose to opt in or avoid the exhibition altogether. Thoughtful placement of the works and supplemental commentary add more context and provoke thought and discussion.

    Fort Worth’s Modern Art Museum, where Mann’s photographs were displayed.
    Michael Barera, CC BY-SA

    The curators of “Diaries of Home” clearly followed these best practices.

    They stated the objective of their exhibition: to “examine conceptions of home in all their complexity,” and to feature the perspectives of women, LGBTQ and nonbinary artists and subjects. A content warning was visible to audiences before they entered the gallery: “This exhibition features mature themes that may be sensitive for some viewers.” Museum staff provided wall labels, tours and artist discussions.

    These contributions situate the exhibited artworks within a broader conversation about families in America today, which are diverse in makeup, in definition and in lifestyle.

    In other words, they show how these are serious, thoughtful works of art.

    Although I can’t imagine any sort of successful criminal prosecution will take place, I do think damage has been done. This may have been a goal from the start.

    Threatening legal action undoubtedly has a chilling effect. The Modern Art Museum in Fort Worth must grapple with a potential loss of donors. It takes time, money and effort to respond to official and public critics.

    In Comstock’s era, civil liberties activists, artists and arts organizations rose to the challenge of defending their freedom of speech.

    Those who value artistic expression today will have to follow in their footsteps.

    Amy Werbel receives funding from the State University of New York and the UC National Center for Free Speech and Civic Engagement.

    ref. Seizure of Sally Mann’s photographs in Texas revives old debates about obscenity and freedom of expression – https://theconversation.com/seizure-of-sally-manns-photographs-in-texas-revives-old-debates-about-obscenity-and-freedom-of-expression-247321


  • MIL-OSI Global: Navigating deepfakes and synthetic media: This course helps students demystify artificial intelligence technologies

    Source: The Conversation – USA – By Mozhdeh Khodarahmi, Associate Library Director, Macalester College

    A Macalester College course helps students navigate a rapidly evolving digital landscape. Khanchit Khirisutchalual/Getty Images

    Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

    Title of course:

    AI Literacy and Building Resilience to Misinformation

    What prompted the idea for the course?

    As an associate director of a college library, I’ve watched artificial intelligence technologies become commonplace in society. They help shape our media. They influence our social interactions.

    And they’re also reshaping education.

    Through conversations with colleagues and students, I discovered an urgent need: a course that demystifies AI and provides students with tools to navigate a rapidly evolving digital landscape.

    This need is relevant today given the increasing prevalence of online misinformation.

    AI-driven social media algorithms – used by Facebook and TikTok, for example – and content generation tools like ChatGPT can amplify certain voices while obscuring others.

    Those using AI tools maliciously can also create entirely false content, such as deepfake videos or misleading AI-generated news articles. By understanding these dynamics, students can become more discerning consumers and responsible users of information.

    I worked with faculty member Michael Griffin and associate director of academic technology Tamatha Perlman to design a course that introduces students to several AI fields.

    They include machine learning – how computer systems imitate the way humans learn – and deep learning, which uses artificial neural networks to learn from data.

    We also delve into generative AI – a type of AI that can produce images, videos and other forms of data – and prompt engineering, the practice of designing prompts to guide AI models.

    What does the course explore?

    The course explores two themes: AI literacy and building resilience to misinformation.

    Students learn about AI technologies such as natural language processing, which allows machines to understand and generate human language, and generative AI. They explore how these tools influence the ways information is created, shared and interpreted.

    We then delve into the ethical implications of AI, from data privacy to bias and algorithmic transparency – the principle of making AI decision-making processes understandable and open for review.

    The idea is to foster a nuanced understanding of AI’s potential benefits. One example is AI tools that personalize educational content by adapting lessons to a student’s learning pace and style.

    We also examine its potential pitfalls. Some AI hiring tools, for example, have discriminated against specific demographic groups, such as systems that disproportionately rejected women’s resumes for technical jobs.

    The course also explores cognitive biases, or systematic patterns of deviation from rationality in judgment, which can make people more susceptible to misinformation.

    We look at confirmation bias, the inclination to search for information that supports one’s preexisting beliefs. We also examine the recency effect, the tendency to give more weight to recent information than to earlier data.

    Students experiment with AI tools such as ChatGPT, Gemini and NotebookLM. They do so to examine misinformation case studies and participate in discussions on some complex questions.

    They include: When does AI assist in learning? When does it hinder learning? How can AI be used more responsibly? How can we know when it’s being manipulated?

    Why is this course relevant now?

    AI tools are increasingly embedded in social media and news content. This makes it critical for students to discern credible sources from misleading content.

    As AI technologies evolve, so too do the methods for spreading misinformation.

    They include AI-generated images and synthetic media, which is digitally created or altered content designed to appear authentic.

    All of these technologies can be difficult to identify and authenticate. This course gives students the tools to make informed decisions in a digital age.

    What’s a critical lesson from the course?

    Many students are surprised to learn that AI-powered platforms tailor content to match their interests.

    For example, watching a series of videos on a particular topic can lead to being shown increasingly similar content, reinforcing existing beliefs. This, in turn, can shape perceptions and distort reality.

    To address this, we introduce students to practical techniques for broadening their information sources. They also learn to cross-reference facts and scrutinize AI-curated content.

    For instance, we practice a technique called “lateral reading,” where students verify information by examining multiple sources simultaneously.

    What materials does the course feature?

    UNESCO’s Media and Information Literacy Curriculum – E-version inspired our syllabus.

    Besides academic journal articles, we draw extensively from articles and videos published by The New York Times, The Washington Post and other major news outlets to analyze misinformation stories. These sources offer ample real-life examples, enabling students to engage with timely and relevant case studies.

    We also review the AI Competency Framework for Students and the AI Competency Framework for Teachers, launched by UNESCO in September 2024. These frameworks provide valuable insights into fostering AI literacy and ethical engagement with AI technologies.

    What will the course prepare students to do?

    The goal is to empower students to approach digital information with a critical and informed mindset. This will position them as responsible citizens in a world increasingly shaped by AI.

    The course will also help students feel more confident when identifying credible sources, cross-checking information and making sense of AI-powered content. These skills will serve students well in their academic and personal lives.

    Mozhdeh Khodarahmi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Navigating deepfakes and synthetic media: This course helps students demystify artificial intelligence technologies – https://theconversation.com/navigating-deepfakes-and-synthetic-media-this-course-helps-students-demystify-artificial-intelligence-technologies-243689

    MIL OSI – Global Reports

  • MIL-OSI Global: Harvard expands its definition of antisemitism – when does criticism of Israel cross a line?

    Source: The Conversation – USA – By Joshua Shanes, Professor of Jewish Studies, College of Charleston

    Harvard has adopted a broader definition of antisemitism. Education Images/Universal Images Group via Getty Images

    As part of Harvard University’s agreement in response to two federal lawsuits filed by Jewish students alleging antisemitic discrimination, it will adopt the International Holocaust Remembrance Alliance, or IHRA, “working definition” of antisemitism.

    This is a definition favored by many Jewish community leaders and politicians because its broad language can be applied to most anti-Israel rhetoric. Its supporters include Kenneth Marcus, who served as assistant secretary of education during the first Trump administration and represented the students as chairman of the Louis D. Brandeis Center for Human Rights Under Law.

    In contrast, many scholars prefer either the competing Jerusalem Declaration on Antisemitism or the definition offered by the Nexus Task Force, a committee of experts led by the Bard Center for the Study of Hate. I am a member of the Nexus group and also helped compose its 2024 “Campus Guide to Identifying Antisemitism.”

    The controversy over this move indicates that many well-intentioned people still struggle to understand what exactly constitutes antisemitism and when anti-Israel rhetoric crosses the line.

    As a scholar of modern Jewish history, I offer this primer that helps answer this question.

    History of antisemitism

    There has been a sharp increase in antisemitism around the world since the Oct. 7, 2023, massacre by Hamas and Israel’s subsequent military attacks in the Gaza Strip.

    Anti-Jewish animosity dates to antiquity. The early Christian church attacked Jews, whom it blamed for crucifying Christ, and claimed to replace them as God’s chosen people. The Gospel of John in the New Testament accused Jews of being Satan’s children, while others called them demons intent on sacrificing the souls of men.

    Medieval Christians added other myths, such as the blood libel – the lie that Jews ritually murdered Christian children for their blood. Other myths accused them of poisoning wells or desecrating the consecrated host of the Eucharist to reenact the murder of Christ; some even claimed that Jews had inhuman biology such as horns or that they suckled at the teats of pigs.

    Such lies led to violent persecution of Jews over many centuries.

    Modern antisemitism

    In the 19th century, these myths were supplemented by the additional element of race – the claim that Jewishness was immutable and could not be changed via conversion. Though this idea first appeared in 15th-century Spain, it was deeply connected to the rise of modern nationalism.

    Nineteenth-century ethno-nationalists rejected the idea of a political nation whose members were united in a social contract. They began imagining the nation as a biological community linked by common descent, in which Jews might be tolerated but could never truly belong.

    Finally, in 1879, the German journalist Wilhelm Marr popularized the term “antisemitism” to reflect that his anti-Jewish ideology was based on race, not religion. Marr imagined the Jews as a foreign, “semitic” race, referring to the language group that includes Hebrew. The term has since persisted to mean specifically anti-Jewish hostility or prejudice.

    The myth of a Jewish conspiracy

    Modern antisemitism built on those premodern foundations, which never completely disappeared, but was fundamentally different. It emerged as part of the new politics of the democratic modern era.

    Antisemitism became the core platform of new political parties, which used it to unite otherwise opposing groups, such as shopkeepers and farmers, anxious about the modernizing world. In other words, it was not merely prejudice; it was a worldview that explained the entire world to its believers by blaming all of its faults on this scapegoat.

    Unlike earlier anti-Jewish hatred, this was less about religion and more about political and social issues. Antisemites believed the conspiracy theory that Jews all over the world controlled the levers of government, media and banking, and that defeating them would solve society’s problems.

    Thus, one of the most important features of modern antisemitic mythology was the belief that Jews constituted a single, malevolent group, with one mind, organized for the purpose of conquering and destroying the world.

    Negative traits attributed to Jews

    Antisemitic books and cartoons often used claws or tentacles to symbolize the “international Jew,” a shadowy figure they blamed for leading a global conspiracy, strangling and destroying society. Others depicted him as a puppet master running the world.

    In the late 19th century, Edmond Rothschild, head of the most famous Jewish banking family, was villainized as the symbol of international Jewish wealth and nefarious power. Today, the billionaire liberal philanthropist George Soros is often portrayed in similar ways.

    This myth that Jews constitute an international creature plotting to harm the nation has inspired massacres of Jews since the 19th century, beginning with the Russian pogroms of 1881 and leading up to the Holocaust.

    More recently, in 2018, Robert Bowers murdered 11 Jews at the Tree of Life synagogue in Pittsburgh because he was convinced that Jews, collectively under the guidance of Soros, were working to destroy America by facilitating the mass migration of nonwhite people into the country.

    Modern antisemites ascribe many immutable negative traits to Jews, but two are particularly widespread. First, Jews are said to be ruthless misers who care more about their allegedly ill-gotten wealth than the interests of their countries. Second, Jews’ loyalty to their countries is considered suspect because they are said to constitute a foreign element.

    Since Israel’s establishment in 1948, this hatred has focused on the accusation that Jews’ primary loyalty is to Israel, not the countries they live in.

    Antisemitism and anti-Zionism

    In recent years, the relationship between antisemitism and anti-Zionism has taken on renewed importance. Zionism has many factions but roughly refers to the modern political movement that argues Jews constitute a nation and have a right to self-determination in the land of Israel.

    Some activists claim that anti-Zionism – ideological opposition to Zionism – is inherently antisemitic because they equate it with denying Jews the right to self-determination and therefore equality.

    Others feel that there needs to be a clearer separation between anti-Zionism and antisemitism. They argue that equating anti-Zionism with antisemitism leads to silencing criticism of Israel’s structural mistreatment of Palestinians.

    Zionism in practice has meant the achievement of a flourishing safe haven for Jews, but it has also led to dislocation or inequality for millions of Palestinians, including refugees, West Bank Palestinians who still live under military rule, and even Palestinian citizens of Israel who face legal and social discrimination. Anti-Zionism opposes this, and critics argue that it should not be labeled antisemitic unless it taps into those antisemitic myths or otherwise calls for violence or inequality for Jews.

    This debate is evident in these competing definitions of antisemitism. Remarkably, the three main definitions tend to agree on the nature of antisemitism except regarding the relationship of anti-Israel rhetoric to antisemitism. The IHRA definition, which is by design vague and open to interpretation, allows for a wider swath of anti-Israel activism to be labeled antisemitic than the others.

    The Jerusalem Declaration, in contrast, understands rhetoric to have “crossed the line” only when it engages in antisemitic mythology, blames diaspora Jews for the actions of the Israeli state, or calls for the oppression of Jews in Israel. IHRA defenders use that definition to label a call for binational democracy – meaning citizenship for West Bank Palestinians – to be antisemitic. Likewise, they label boycotts, even of West Bank settlements that most of the world considers illegal, to be antisemitic. The Jerusalem Declaration does not.

    In other words, the key to identifying whether anti-Israel discourse has masked antisemitism is to see evidence of antisemitic mythology. For example, if Israel is described as leading an international conspiracy, or as holding the key to solving global problems, all three definitions agree this is antisemitic.

    Equally, if Jews or Jewish institutions are held responsible for Israeli actions or are expected to take a stand one way or another regarding them, again all three definitions agree that this crosses the line because it is based on the myth of a global Jewish conspiracy.

    Identity and pride

    Critically, for many Jews living in other countries, Zionism is not primarily a political argument about the state of Israel. It instead constitutes a sense of Jewish identity and pride, even a religious identity. In contrast, many protests against Israel and Zionism are focused not on ideology but on the Israeli government and its real or alleged actions.

    This disconnect can lead to confusion if protests conflate Jews with Israel just because they are Zionist, which is antisemitic. On the other hand, Jews sometimes take protests against Israel in defense of Palestinian rights to be attacks on their Zionist identity and thus antisemitic, when they are not. There are certainly gray areas, but in general, calls for Palestinian equality, I believe, are legitimate even when they upset people with Zionist identities.

    Harvard’s statement captures this distinction. The university wrote that, “For many Jewish people, Zionism is a part of their Jewish identity,” and added that Jews who subscribe to this identity must not be excluded from campus events on that basis.

    This does not mean that Jews are protected from hearing contrary views, any more than they are protected from hearing Christian preachers on campus or professors who teach secular views of the Bible. It means that they cannot be excluded based only on those beliefs.

    This does not, however, require an adoption of the IHRA definition of antisemitism, which goes much further. Many advocates of the IHRA definition use it to label political calls for Palestinian equality as antisemitic, as well as accusations against Israel that they consider wrong or unfair.

    Harvard’s adoption of the IHRA definition, accordingly, would mean that any speech that calls for full equality for Palestinians risks academic and legal sanction, even without any material discrimination against Jewish students. It is thus opposed by students who advocate for Palestinian rights as well as supporters of free speech more generally.

    Editor’s note: This is an updated version of an article first published on Jan. 29, 2024.

    Joshua Shanes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Harvard expands its definition of antisemitism – when does criticism of Israel cross a line? – https://theconversation.com/harvard-expands-its-definition-of-antisemitism-when-does-criticism-of-israel-cross-a-line-248199


  • MIL-OSI Global: Red light therapy shows promise for pain relief, inflammation and skin conditions – but other claims might be hyped

    Source: The Conversation – USA – By Praveen Arany, Associate Professor of Oral Biology, University at Buffalo

    A treatment typically lasts from three to 15 minutes. Rich Legg/E+ via Getty Images

    Red light therapy is increasingly viewed as a promising treatment for wrinkles, acne, psoriasis, scars and sun-damaged skin, and as a supportive therapy for some kinds of cancer. But does red light therapy live up to the hype that it’s practically a panacea for all sorts of ailments?

    Praveen Arany is a professor of oral biology, biomedical engineering and surgery at the University at Buffalo and an expert on the uses of light and lasers for medical purposes. He explains how red light therapy works, for what diseases and conditions it may be most useful, and whether red light home devices are effective.

    What is red light therapy?

    Treatment with red light therapy involves exposure to red light at a very low dose in a hospital or clinic.

    It’s also called low-power laser therapy, soft laser therapy, cold laser therapy and nonthermal LED light therapy.

    The umbrella term is photobiomodulation therapy, which covers other colors, or wavelengths, of light that have health benefits. These wavelengths span the visible to near-infrared spectrum.

    Red light is easily the most popular of the photobiomodulation therapies. That’s primarily due to its availability – the treatment has been around for more than three decades.

    While it’s true that other colors are also clinically and commercially available, researchers are still studying them to determine exactly how effective they are. That said, green light therapy is generally used to treat migraines; yellow light for depression; and blue light to kill resistant strains of bacteria, like MRSA infections, and to treat seasonal affective disorder, a depression that typically begins in late fall and continues through winter.

    The professional laser in the doctor’s office may be more effective than at-home LED devices.

    How does red light therapy work?

    Put simply, red light stimulates the cells in your body, energizing them while initiating blood flow to the affected area. That, in turn, spurs healing, similar to how your body responds to a cut by clotting the blood to heal a wound.

    The treatment is simple and painless. The patient, either seated or lying comfortably, is exposed to the red light for three to 15 minutes. They may experience a feeling of warmth during treatment, but it should not be uncomfortable or hot. The clinician will likely recommend eye shields.

    Used correctly, red light therapy is very safe. Overdosing – staying under the light too long or receiving treatments at very high power – does not necessarily cause harm, but it might reduce or slow benefits. However, just as some people are more prone to sunburn than others, some patients may be more sensitive to this light and might see redness in the skin. Those patients should receive lower light doses during treatment.

    What medical conditions can the therapy help?

    Randomized, controlled clinical trials show that red light therapy can reduce pain, inflammation and tissue damage. Because all of these things are prevalent in many illnesses, photobiomodulation may be a powerful adjunct for treating a wide range of diseases.

    One example is cancer. There’s now strong evidence that red light therapy can lessen pain and inflammation from radiation, chemotherapy and bone marrow stem cell transplants. Red light therapy has also reduced other complications from cancer treatment, including oral ulcers, scars and fibrosis.

    Other recent human clinical studies show that photobiomodulation helps heal diabetic and burn wounds, as well as some types of ulcers. However, this therapy should not replace good wound care treatment, such as disinfection. Photobiomodulation has also worked for patients with neck and back pain and tennis elbow.

    What about other uses for red light therapy?

    Although not proven effective by randomized controlled trials with large samples, which are the gold standard of research, red light therapy has been shown to benefit patients with Parkinson’s, Alzheimer’s, multiple sclerosis, fibromyalgia, arthritis, macular degeneration, myopia and autism in clinical case reports and lab research studies.

    A word of caution, however: Red light therapy may not work for all the medical conditions that proponents say it does. Red light therapy is also used for cardiovascular health, elevating mood, relieving anxiety, improving muscle performance and recovery from sports injuries, and providing anti-aging benefits to the skin. While there’s some evidence to support these types of treatments, rigorous research studies are still missing.

    Research indicates that red light therapy could help with myopia in children and macular degeneration.

    What about its commercial use?

    This is a rapidly evolving field. Both LED and laser devices – beds, lamps, helmets and face masks – are readily available in clinical and nonclinical settings, such as medical spas, gyms and beauty salons. They’re also available for at-home use.

    Laser devices are more powerful and are typically found at a hospital, clinic or doctor’s office. An LED, or light-emitting diode, is less powerful and more often used in commercial or home settings.

    The general consensus is that LEDs are OK to use in commercial establishments like beauty salons and medical spas, provided practitioners receive the appropriate training. But the use of laser devices should be relegated to clinical specialists. That’s because lasers, in untrained hands, have the potential to do more damage than LEDs.

    As for some home products, their quality and reliability may be questionable; they might not meet minimum quality standards of output power or wavelength.

    The U.S. Food and Drug Administration appears to be moving toward more rigorous evaluations of these products, especially lasers, but there is a critical need for a certifying agency or body to take this on. Such an agency would test the devices to make sure they actually meet specifications. That hasn’t happened yet, though several scientific and professional organizations are exploring the possibility.

    Praveen Arany consults for Wndrhlth and has cofounded two companies, OptiMed Technologies and Directed Energy Therapeutics. He has received funding from University at Buffalo, NIH, AFOSR and various PBM companies including Summus Medical, Kerber Applied Research, Thor Photomedicine and Vielight. He is affiliated with Optica, IADR-AADOCR, ASLMS, WALT, NAALT, WFLD and ALD.

    ref. Red light therapy shows promise for pain relief, inflammation and skin conditions – but other claims might be hyped – https://theconversation.com/red-light-therapy-shows-promise-for-pain-relief-inflammation-and-skin-conditions-but-other-claims-might-be-hyped-240426


  • MIL-OSI Global: The technology that runs Congress lags so far behind the modern world that its flag-tracking system just caught up to 2017-era Pizza Hut

    Source: The Conversation – USA – By Lorelei Kelly, Research Lead, Modernizing Congress, McCourt School of Public Policy, Georgetown University

    Tracking one of these items to your door has been possible since 2017 – tracking the other is all new. FTiare/iStock / Getty Images Plus

    On a typical day, you can’t turn on the news without hearing someone say that Congress is broken. The implication is that this dereliction explains why the institution is inert and unresponsive to the American people.

    There’s one element often missing from that discussion: Congress is confounding in large part because its members can’t hear the American people, or even each other. I mean that literally. Congressional staff serve in thousands of district offices across the nation, and their communications technology doesn’t match that of most businesses and even many homes.

    Members’ district offices only got connected to secure Wi-Fi internet service in 2023. Discussions among members and congressional staff were at times cut short at 40 minutes because some government workers were relying on the free version of Zoom, according to congressional testimony in March 2024.


    The information systems Congress uses have existed largely unchanged for decades, while the world has experienced an information revolution, integrating smartphones and the internet into people’s daily personal and professional lives. The technologies that have transformed modern life and political campaigning are not yet available to improve the ability of members of Congress to govern once they win office.

    Slow to adapt

    Like many institutions, Congress resists change; only the COVID-19 pandemic pushed it to allow online hearings and bill introductions. Before 2020, whiteboards, sticky notes and interns with clipboards dominated the halls of Congress.

    Electronic signatures arrived on Capitol Hill in 2021 – more than two decades after Congress passed the ESIGN Act to allow electronic signatures and records in commerce.

    The nation spends about US$10 million a year on technology innovation in the House of Representatives – the institution that declares war and pays all the federal government’s bills. That’s just 1% of the amount theater fans have spent to see “Hamilton” on Broadway since 2015.

    It seems the story of American democracy is attractive to the public, but investing in making it work is less so for Congress itself.

    The chief administrative office in Congress, a nonlegislative staff that helps run the operations of Congress, decides what types of technology can be used by members. These internal rules exist to protect Congress and national security, but that caution can also inhibit new ways to use technology to better serve the public.

    Finding a happy medium between innovation and caution can result in a livelier public discourse.

    The pandemic compelled Congress to allow witnesses to testify before committees by videoconference.
    Stefani Reynolds-Pool/Getty Images

    A modernization effort

    Congress has been working to modernize itself, including experimenting with new ways to hear local voices in members’ districts, such as gathering constituent feedback in a standardized format that computers can easily process.

    The House Natural Resources Committee was also an early adopter of technology for collaborative lawmaking. In 2020, members and committee staff used a platform called Madison to collaboratively write and edit proposed environmental justice legislation with communities across the country that had been affected by pollution.

    House leaders are also looking at what is called deliberative technology, which uses specially designed websites to facilitate digital participation by pairing collective human intelligence with artificial intelligence. People post their ideas online and respond to others’ posts. Then the systems can screen and summarize posts so users better understand each other’s perspectives.

    These systems can even handle massive group discussions involving large numbers of people who hold a wide range of positions on a vast set of issues and interests. In general, these technologies make it easier for people to find consensus and have their voices heard by policymakers in ways the policymakers can understand and respond to.

    Governments in Finland, the U.K., Canada and Brazil are already piloting deliberative technologies. In Finland, roughly one-third of young people between 12 and 17 participate in setting budget priorities for the city of Helsinki.

    In May 2024, 45 U.S.-based nonprofit organizations signed a letter to Congress asking that deliberative technology platforms be included in the approved tools for civic engagement.

    In the meantime, Congress is looking at ways to use artificial intelligence as part of a more integrated digital strategy based on lessons from other democratic legislatures.

    A panel discussion of various ideas for modernizing how Congress hears from the American people.

    Finding benefits

    Modernization efforts have opened connections within Congress and with the public. For example, hearings held by video conference during the pandemic enabled witnesses to share expertise with Congress from a distance and open up a process that is notoriously unrepresentative. I was home in rural New Mexico during the pandemic and know three people who remotely testified on tribal education, methane pollution and environmental harms from abandoned oil wells.

    New House Rules passed on Jan. 3, 2025, encourage the use of artificial intelligence in day-to-day operations and allow for remote witness testimony.

    Other efforts that are new to Congress but long established in business and personal settings include the ability to track changes in legislation and a scheduling feature that reduces overlapping meetings. Members are regularly scheduled to be in two places at once.

    Another effort in development is an internal digital staff directory that replaces expensive directories compiled by private companies assembling contact information for congressional staff.

    The road ahead

    In 2022, what is now called “member-directed spending” returned to Congress with some digital improvements. Formerly known as “earmarks,” this is the practice of allowing members of Congress to handpick specific projects in their home districts to receive federal money. Earmarks were abolished in 2011 amid concerns about abuse and opposition from fiscal hardliners. Their 2022 return and rebranding introduced publicly available project lists, ethics rules and a search engine to track the spending, providing public transparency about earmarks.

    Additional reforms could make the federal government even more responsive to the American people.

    Some recent improvements are already familiar. Just as customers can follow their pizza delivery from the oven to the doorstep, Congress in late 2024 created a flag-tracking app that has dramatically improved a program that allows constituents to receive a flag that has flown over the U.S. Capitol. Before, different procedures in the House and Senate caused time-consuming snags in this delivery system.

    At last, the world’s most powerful legislature caught up with Pizza Hut, which rolled out this technology in 2017 to track customers’ pizzas from the store to the delivery driver to their front door.

    Lorelei Kelly has received funding from Democracy Fund and the Hewlett Foundation for her research on modernization in the US Congress.

    ref. The technology that runs Congress lags so far behind the modern world that its flag-tracking system just caught up to 2017-era Pizza Hut – https://theconversation.com/the-technology-that-runs-congress-lags-so-far-behind-the-modern-world-that-its-flag-tracking-system-just-caught-up-to-2017-era-pizza-hut-245931


  • MIL-OSI Global: President Trump promises to make government efficient − and he’ll run into the same roadblocks as Presidents Taft, Roosevelt, Roosevelt, Truman, Eisenhower, Carter, Reagan, Clinton and Bush, among others

    Source: The Conversation – USA – By Jennifer Selin, Associate Professor of Law, Arizona State University

    President Donald Trump signs executive orders in the Oval Office of the White House on Jan. 20, 2025. Anna Moneymaker/Getty Images

    As President Donald Trump issued a slew of executive orders and directives on his first day of his second administration, he explained his actions by saying, “It’s all about common sense.”

    For over a century, presidents have pursued initiatives to improve the efficiency and effectiveness of government, couching those efforts in language similar to Trump’s.

    Many of these, like Trump’s Department of Government Efficiency, which he appointed billionaire Elon Musk to run, have been designed to capitalize on the expertise of people outside of government. The idea often cited as inspiration for these efforts: The private sector knows how to be efficient and nimble and strives for excellence; government doesn’t.

    But government, and government service, is about providing something that the private sector can’t. And outsiders often don’t think about the accountability requirements that the laws and Constitution of the United States impose on government workers and agencies.

    Congress, though, can help address these problems and check inappropriate proposals. It can also stand in the way of reform.

    Charles E. Merriam, left, and Louis Brownlow, members of the President’s Reorganization Committee, leave the White House after discussing government reorganization with President Franklin D. Roosevelt on Sept. 23, 1938.
    Harris & Ewing, photographer, Library of Congress

    Proposing reform is nothing new

    Perhaps the most famous group to work with a president on improving government was President Franklin D. Roosevelt’s Committee on Administrative Management, established in 1936.

    That group, commonly referred to as the Brownlow Committee, noted that while critics predicted Roosevelt would bring “decay, destruction, and death of democracy,” the executive branch – and the president who sat atop it – was one of the “very greatest” contributions to modern democracy.

    The committee argued that the president was unable to do his job because the executive branch was badly organized, federal employees lacked skills and character, and the budget process needed reform. So it proposed a series of changes designed to increase presidential power over government to enhance performance. Congress went along with some of these proposals, giving the president more staff and authority to reorganize the executive branch.

    Since then, almost every president has put together similar recommendations. For example, Presidents Harry S. Truman and Dwight D. Eisenhower appointed former President Herbert Hoover to lead advisory commissions designed to recommend changes to the federal government. President Jimmy Carter launched a series of government improvement projects, and President George W. Bush even created scorecards to rank agencies according to their performance.

    In his first term, Trump issued a mandate for reform to reorganize government for the 21st century.

    This time around, Trump has taken executive actions to freeze government hiring, create a new entity to promote government efficiency, and give him the ability to fire high-ranking administrators who influence policy.

    Most presidential proposals fail to come to fruition. But they often spark conversations in Congress and the media about executive power, the effectiveness of federal programs, and what government can do better.

    Most presidents have tried the same thing

    Historically, most presidents and their advisers – and indeed most scholars – have agreed that government bureaucracy is not designed in ways that promote efficiency. But that is intentional: Stanford political scientist Terry Moe has written that “American public bureaucracy is not designed to be effective. The bureaucracy arises out of politics, and its design reflects the interests, strategies, and compromises of those who exercise political power.”

    A common presidential response to this practical reality is to propose government changes that make it look more like the private sector. In 1982, President Ronald Reagan brought together 161 corporate executives overseen by industrialist J. Peter Grace to make recommendations to eliminate government waste and inefficiency, based on their experiences leading successful corporations.

    In 1993, President Bill Clinton authorized Vice President Al Gore to launch an effort to reinvent the federal government into one that worked better and cost less.

    The Clinton administration created teams in every major federal agency, modeled after the private sector’s efficiency standards, to move government “From Red Tape to Results,” as the title of the administration’s plan said.

    An introductory page from the 1993 National Performance Review executive summary, commissioned by the Clinton administration.
    CIA.gov

    Presidential attempts to make government look and work more like people think the private sector works often include adjustments to the terms of federal employment to reward employees who excel at their jobs.

    In 1905, for example, President Theodore Roosevelt established a Committee on Department Methods to examine how the federal government could recruit and retain highly qualified employees. One hundred years later, federal agencies still experienced challenges related to hiring and retaining people who could effectively achieve agency missions.

    President Bill Clinton applauds as Vice President Al Gore speaks at a press conference on March 3, 1994, at which Gore gave Clinton a report of the National Performance Review.
    Paul J. Richards/AFP via Getty Images

    So why haven’t these plans worked?

    The past five presidents, at least, have all faced problems in making long-term changes to government.

    In part, this is because government reorganizations and operational reforms like those contemplated by Trump require Congress to make adjustments to the laws of the United States, or at least give the president and federal agencies the money required to invest in changes.

    Consider, for example, presidential proposals to invest in new technologies, which are a large part of Trump and Musk’s plans to improve government efficiency. Since at least 1910, when President William Howard Taft established a Commission on Economy and Efficiency to address the “unnecessarily complicated and expensive” way the federal government handled and distributed government documents, presidents have recommended centralizing authority to mandate federal agencies’ use of new technologies to make government more efficient.

    But transforming government through technology requires money, people and time. Presidential plans for government-wide change are contingent upon the degree to which federal agencies can successfully implement them.

    To sidestep these problems, some presidents have proposed that the government work with the private sector. For example, Trump announced a joint venture with technology companies to invest in the government’s artificial intelligence infrastructure.

    Yet as I have found in my previous research, government investment in new technology first requires an assessment of agencies’ current technological skills and the impact technology will have on agency functions, including those related to governmental transparency, accountability and constitutional due process. It’s not enough to go out and buy software that tech giants recommend agencies acquire.

    The things that government agencies do, such as regulating the economy, promoting national security and protecting the environment, are incredibly complicated. It’s often hard to see their impact right away.

    Recognizing this, Congress has designed a complex set of laws to prevent political interference with federal employees, who tend to look at problems long term. For example, as I have found in my work with Paul Verkuil, former chairman of the Administrative Conference of the United States, Congress intentionally writes laws that require certain government positions to be held by experts who can work in their jobs without worrying about politics.

    Congress also writes the laws the federal employees administer, oversees federal programs and decides how much money to appropriate to those programs each year.

    So by design, anything labeled a “presidential commission on modernizing/fixing/refocusing government” tells only part of the story and sets out an impossible task. The president can’t make it happen alone. Nor can Elon Musk.

    Jennifer L. Selin has received funding and/or support for her research on the executive branch from the Administrative Conference of the United States. The views in this piece are those of the author and do not represent the position of the Administrative Conference or the federal government.

    ref. President Trump promises to make government efficient − and he’ll run into the same roadblocks as Presidents Taft, Roosevelt, Roosevelt, Truman, Eisenhower, Carter, Reagan, Clinton and Bush, among others – https://theconversation.com/president-trump-promises-to-make-government-efficient-and-hell-run-into-the-same-roadblocks-as-presidents-taft-roosevelt-roosevelt-truman-eisenhower-carter-reagan-clinton-and-bush-among-others-247957

    MIL OSI – Global Reports

  • MIL-OSI Global: US Supreme Court is unabashedly liberal − in its writing style

    Source: The Conversation – USA – By Jill Barton, Professor and Director of Legal Writing, University of Miami

    The current Supreme Court has upended historic precedent on abortion protections and drawn scrutiny for ethics conflicts, while its docket remains packed with high-profile cases set to dominate headlines in the months ahead.

    Yet one of its lesser-known departures from the past lies in its approach to punctuation.

    Justice Neil Gorsuch boldly departed from court tradition in 2017 with his first Supreme Court opinion. In 11 pages, he used 15 contractions. He even used one in the first paragraph: “That’s the nub of the dispute now before us,” he casually stated.

    Gorsuch’s predecessor, the late Justice Antonin Scalia, was known as a gifted, dramatic writer. Scalia thought that contractions – combining two words with an apostrophe into a shorter form, such as “don’t” in place of “do not” – were “intellectually abominable.”

    Gorsuch’s strikingly informal phrasing signaled a shift toward a more modern, conversational writing style by all nine justices.

    While the court’s politics have veered right, the justices’ prose has arguably shifted left, becoming more liberal and accessible. Today’s Supreme Court unanimously and actively embraces a progressive writing style, rebelling against old-school grammar rules, according to my study of 10,000 pages of opinions from the past decade.

    Twitter touts #GorsuchStyle

    The first opinion assigned to new justices is usually a slog. In a kind of hazing tradition, they are typically assigned to write on a tedious legal issue that easily wins unanimous agreement.

    Gorsuch used his short opinion on the dry topic of debt collection to declare a more colloquial style. In Henson v. Santander, the Harvard Law graduate spoke directly to readers, using “you” and variations of that personal pronoun 17 times, something his colleagues rarely did. Gorsuch wrote with apparent nonchalance, calling a debt collector “the repo man.”

    Journalists and court watchers took notice, sparking an online conversation about #GorsuchStyle.

    Now, most of the justices use contractions. Arguing that creativity would be stifled in a copyright infringement case, Justice Elena Kagan insisted: “And there’s the rub. (Yes, that’s mostly Shakespeare.)”

    Hey, you − I’m talking to you

    While Gorsuch might have sharpened the quill of the court’s writing revolution, all nine justices now write more casually to reach an increasingly savvy public. A few justices even drop oh-so-casual exclamation marks in their opinions.

    “The majority huffs that ‘nobody disputes’ various of these ‘points of law,’” Kagan wrote in a 2021 dissent against a decision curtailing voting rights. “Excellent! I only wish the majority would take them to heart.”

    In its 2023-24 term, my research finds, the justices appealed to readers using “you” and variations of it nearly 300 times in their 60 opinions – up 40% from five years ago.

    “A police officer can seize your car if he claims it is connected to a crime committed by someone else,” Justice Sonia Sotomayor told readers, dissenting in a 2024 seizure case.

    Deploying both “you” and a contraction, Justice Ketanji Brown Jackson recently quipped in a 2024 criminal bribery decision: “But you don’t have to take my word for that.”

    Given that many good writers – lawyers, academics and journalists among them – avoid personal pronouns as a matter of style, the justices’ new direction shows a surprising lack of formality.

    The writing style of the justices today contrasts starkly with that of their predecessors, who commonly used dense wording and labyrinthine sentences. Take this 1944 line from Justice Robert H. Jackson, whom several justices name as the writer they admire most:

    “But here is an attempt to make an otherwise innocent act a crime merely because this prisoner is the son of parents as to whom he had no choice, and belongs to a race from which there is no way to resign.”

    His writing feels lyrical and powerful but is in no way playful or personal.

    Chief Justice John Roberts, known for his rhetorical prowess, has long lamented that the media must summarize and translate the court’s lengthy opinions for the public. In 2017, he praised the monumental desegregation decision, Brown v. Board of Education, for its brevity.

    At just 10 pages, Roberts said, newspapers “had to publish the whole thing so that people could read it. They didn’t get to say, ‘Oh, this is what this means.’”

    Good, clear writing has power

    The court’s embrace of a more accessible writing style comes as its own popularity is plummeting. While 80% of Americans viewed the court favorably in the mid-1990s, only about 50% do now.

    The 2022 decision to overturn Roe v. Wade was particularly controversial, inciting two years of protests by abortion-rights supporters and a national argument over reproductive rights. But even conservative critics decried the court’s July 2024 decision to broaden presidential immunity in Trump v. United States as “a mess” and an “incoherent” “embarrassment.”

    Protesters opposing the Supreme Court’s overturning of Roe v. Wade gather in Washington, D.C., on June 24, 2024.
    Aashish Kiphayet/Middle East Images/AFP via Getty Images

    Roberts, who began his career as a young lawyer in the Reagan administration, has earned a reputation for taking a measured, long-term approach to avoid controversy, and he strives to unify the justices in consensus. The first few opinions of the 2024-2025 term, including the decision to ban TikTok, were unanimous – as are roughly 50% of the court’s decisions, though these tend to address less contentious issues.

    But leaks of draft opinions and memos about the justices’ confidential deliberations paint a picture of a storied institution in disarray. Scrutiny of the Supreme Court is mounting, and critics, including former President Joe Biden, have called for a binding ethics code and term limits.

    For the Roberts Court, the challenge ahead lies in securing its legitimacy among a deeply polarized American public. The justices making their opinions more approachable may be a small gesture in that direction.

    “The thing about the Supreme Court that I think is so magnificent is that the justices get to actually explain their votes,” Jackson told NPR on Sept. 4, 2024. “We are the one branch of government in which that is the standard.”

    Can clear, powerful arguments presented in plain, straightforward language help rebuild trust in the institution? The justices’ subtle shift toward modernizing their writing suggests they believe it might.

    Jill Barton does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. US Supreme Court is unabashedly liberal − in its writing style – https://theconversation.com/us-supreme-court-is-unabashedly-liberal-in-its-writing-style-245503


  • MIL-OSI Global: Reproductive health care faces legal and surveillance challenges post-Roe – new research offers guidance

    Source: The Conversation – USA – By Nora McDonald, Assistant Professor of Information Technology, George Mason University

    Providers play a central role in reproductive health privacy. FG Trade/iStock via Getty Images

    Long before Roe v. Wade was overturned, reproductive justice advocates had been sounding the alarm about the increasing number of women subjected to criminal investigation for suspected abortion, stillbirth or miscarriage. These cases were often initiated by health care providers and bolstered by state laws used to prosecute women for having abortions.

    Newer laws, however, incentivize people outside of health care, including friends and family members, to report someone they suspect of having an abortion or helping someone else with an abortion. Coupled with the unprecedented access that authorities now have to digital information, these laws create new avenues for prosecution.

    In the post-Roe era, people capable of pregnancy face growing threats. Health care providers, family, friends, information on personal devices and virtually any activity that can be observed or recorded pose privacy risks that can lead to prosecution. I study online privacy. This vast scope for potential surveillance and privacy intrusion is a key focus of the research my colleagues and I conduct.

    In a recent paper, we surveyed reproductive health care providers about their privacy and security practices. We used the results to map the path of a hypothetical “Jane” to illustrate how people can identify privacy risks in their own situations. This choose-your-adventure approach helps readers navigate the potential legal, digital and personal challenges involved in accessing reproductive health care – and reveals the grim stakes.

    Privacy protections

    Historically, health care providers who opposed abortion have been the primary sources for reporting patients suspected of seeking abortions. While they remain a significant threat, additional risks to patient privacy have emerged. For example, state laws increasingly compel providers to hand over medical records.

    This circumvents new Health Insurance Portability and Accountability Act protections meant to shield protected reproductive health information from use in investigations when people seek abortions in states where the procedure is legal. Authorities might also be able to access records across state lines where abortion is legal – for example, when different electronic health record systems can share data.

    It is also possible that, in the future, electronic health records could be seized across state lines. Last year, in a letter to the U.S. Department of Health and Human Services, 19 state attorneys general protested the new federal data privacy rules. Texas followed up with a lawsuit against the Biden administration over the rule.

    Even so-called shield laws adopted by some states meant to protect people seeking abortions from record seizures have loopholes.

    Under the Biden administration, the U.S. Department of Health and Human Services added a privacy rule to protect reproductive health data.

    Privacy vulnerabilities

    Despite some protections offered by the Health Insurance Portability and Accountability Act, additional gaps in safeguarding reproductive health information persist. Data captured outside medical portals, such as from apps or pharmacy transactions, often falls outside the federal law’s scope.

    It’s important to note that apps that capture consumer reproductive health data, like period trackers, do not necessarily pose a greater risk than informants. But the dystopian potential of governments reaching into personal intimate data, and the simplicity of the remedy – deleting an app – draw disproportionate attention.

    While it’s not entirely clear whether period trackers are definitively good or bad from a digital privacy perspective, they do offer potential benefits, such as helping people prevent unwanted pregnancies and thus avoid prosecution.

    Once reported to authorities, activities conducted on personal devices – browsing history, purchases, location data, and messages with friends or family – can become evidence in prosecutions. Authorities have shown a willingness to subpoena records from social media platforms, and they frequently access personal devices.

    Additionally, laws that incentivize family, friends and partners to report suspected abortions create a threat of surveillance from intimate associates. These dynamics are exacerbated by new laws that criminalize “trafficking” minors – transporting them across state lines – for abortion services.

    Providers’ role protecting privacy

    In our research, my colleagues and I found that reproductive health care providers can play a critical role in guiding patients on adopting privacy strategies and helping them navigate an increasingly complex landscape of privacy threats. Clinics are trusted spaces for affordable, progressive care that often shield patients from judgment or harm.

    Based on our interviews with reproductive health care providers, the protocols they use to manage communications, billing and other aspects of patient interactions have proved effective at protecting privacy, especially for vulnerable populations like minors or people with abusive partners. However, people seeking abortions face more nuanced threats. Providers tend to overlook digital risks and threats of prosecution tied to patients’ devices and records.

    This gap in awareness leaves patients without critical guidance for protecting their privacy. Our initial research conducted in the aftermath of the Dobbs decision revealed that people capable of pregnancy express profound concerns about reproductive privacy, yet often feel inadequately prepared to navigate its complexities.

    Findings from our forthcoming research suggest that many patients take extensive precautions, yet it’s not clear how effectively they can prioritize their digital strategies. At the same time, these people place significant trust in their reproductive health care providers, especially because they often deem existing guidance on privacy untrustworthy or insufficient.

    Although providers may currently be less attuned to the newer privacy risks, they could play a crucial role in addressing them. By incorporating digital privacy and threat modeling into their care, providers can help patients navigate a complex landscape of threats in an environment of pervasive surveillance.

    Nora McDonald does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Reproductive health care faces legal and surveillance challenges post-Roe – new research offers guidance – https://theconversation.com/reproductive-health-care-faces-legal-and-surveillance-challenges-post-roe-new-research-offers-guidance-246869


  • MIL-OSI Global: Newly discovered photos of Nazi deportations show Jewish victims as they were last seen alive

    Source: The Conversation – USA – By Wolf Gruner, Professor of History, USC Dornsife College of Letters, Arts and Sciences

    Deportation of Jews in Bielefeld, Germany, on Dec. 13, 1941. Courtesy City Archive Bielefeld, CC BY-SA

    The Holocaust was the first mass atrocity to be heavily photographed.

    The mass production and distribution of cameras in the 1930s and 1940s enabled Nazi officials and ordinary people to widely document Germany’s persecution of Jews and other religious and ethnic minorities.

    I co-direct an international research project to collect every available image documenting Nazi mass deportations of Jews, Roma and Sinti, as well as euthanasia victims, in Nazi Germany between 1938 and 1945. The most recently discovered series of images will be unveiled on Jan. 27, 2025 – Holocaust Remembrance Day.

    In most cases, these are the very last pictures taken of Holocaust victims before they were deported and perished. That fact gives the project its name, #LastSeen.

    A few of the images we’ve tracked down were taken by Jewish people, not Nazi officials, offering a rare glimpse of Nazi mass deportations from a victim’s perspective. As descendants of survivors help our researchers identify the deportees in these images and tell their stories, we give previously faceless victims a voice.

    Jewish Germans assemble for deportation in Breslau, Germany, in November 1941.
    Courtesy of Regional Association of Jewish Communities in Saxony, Germany, CC BY-SA

    A growing archive

    The #LastSeen project is a collaboration between several German academic and educational institutions and the USC Dornsife Center for Advanced Genocide Research in the United States. When it began in late 2021, researchers knew of a few dozen deportation images of Jews from 27 German towns that had been gathered for a 2011-2012 exhibition in Berlin.

    After contacting 1,700 public and private archives in Germany and worldwide to find more, #LastSeen has now collected visual evidence from 60 cities and towns in Nazi Germany. Of these, we’ve analyzed 36 series containing over 420 images, including dozens of never-before-seen photo series from 20 towns.

    Most photographs of Nazi mass deportations from local archives published in our digital atlas were taken by the perpetrators, who documented the event for the police or municipality. That has heavily shaped our visual understanding of these crimes, because these images display victims as a faceless mass. When individuals were depicted, it was most often through an antisemitic lens.

    The LastSeen digital atlas shows locations of deportations where visual documentation has been uncovered.
    Screenshot, LastSeen, CC BY-SA

    We have, however, obtained a handful of images taken from a victim’s perspective. In January 2024, the #LastSeen team shared newly discovered photographs showing the Nazi deportations in what was then Breslau, Germany – today Wroclaw, Poland.

    They were sent to us for analysis by Steffen Heidrich, a staff member of the Regional Association of Jewish Communities in Saxony, Germany, who came across an envelope titled “miscellaneous” while reorganizing his archive. It contained 13 deportation photographs – the last images taken of dozens of Jewish victims before they were transported from Breslau to Nazi-occupied Lithuania and massacred in November 1941.

    Jewish resistance

    Many of the pictures in this series show a large, mixed-age group of men and women wearing the yellow star – the notorious Nazi-mandated sign for Jews – gathering outside with bundles of their belongings. Some are taken from a peculiar angle, from behind a tree or a wall, suggesting they were snapped clandestinely.

    People waiting for deportation in Breslau in November 1941.
    Courtesy of Regional Association of Jewish Communities in Saxony, Germany, CC BY-SA

    Given that the deportation assembly point for the Breslau Jews was a guarded local beer garden, our researchers knew that only a person with permission to access that property could have shot these pictures.

    For these two reasons, we concluded that an employee of the Jewish community of Breslau must have documented the Nazi crimes – most likely Albert Hadda, a Jewish architect and photographer who clandestinely photographed the November 1938 pogrom in Breslau.

    Hadda’s marriage to a Christian partially protected him from persecution. Between 1941 and 1943, the city’s Jewish community tasked him with caring for the deportees at the assembly point until their forced removal.

    These 13 recently discovered pictures constitute the most comprehensive series illuminating the crime of mass deportations from a victim’s perspective in Nazi Germany. Their unearthing is testimony to the widespread individual resistance by ordinary Jews who fought Nazi persecution, a resistance that historians have only recently rediscovered.

    Documenting Fulda

    Our project has also identified new deportation photos taken in the German town of Fulda in December 1941, during a snowstorm.

    Previously, historians knew of only three pictures of this deportation event. Preserved in the city archive, they show the deportees at the Fulda train station during heavy snowfall.

    We discovered two new images of the same Nazi deportation, apparently taken by the same photographer, in a videotaped survivor interview in the Visual History Archive of the USC Shoah Foundation in Los Angeles.

    In 1996, the Shoah Foundation interviewed Miriam Berline, née Gottlieb, the daughter of a successful Orthodox Jewish merchant in Fulda. At the end of the two-hour interview, Berline held two photographs up to the camera. They clearly show the same snowy deportation in Fulda.

    Screenshot from Miriam Berline’s interview about the Fulda deportations.
    USC Shoah Foundation Visual History Archive, CC BY-SA

    Berline, born in 1925, escaped Nazi Germany in 1939. She did not remember how her family obtained the images but recalled the photographer as Otto Weissbach, a “wonderful” man who had helped Fulda’s Jewish families.

    Our researchers investigated and learned his name was Arthur Weissbach, a non-Jewish neighbor of the Gottliebs. The factory he owned still exists. Descendants of Jewish families have since confirmed that he kept valuables for them and took care of elderly relatives who stayed behind.

    Weissbach’s niece said he was a passionate hobby photographer. Since Weissbach kept in contact with survivors after the war, he might have given the images to the Gottlieb family. Today, the family’s copies are lost, but their existence is preserved in Berline’s video interview at the USC Shoah Foundation.

    The pictures show the Jews at the Fulda train station on Dec. 9, 1941 – revealing how Nazi deportations happened in plain view.

    The day before, Jewish men and women from around Fulda had been summoned and spent the night at a local school gym. In the morning, they were taken to the train station and forced by police to board a train to Kassel, in central Germany, and then eastward to Riga, in Nazi-occupied Latvia.

    In total, 1,031 Jews were deported from Kassel to Riga. Only 12 from Fulda survived.

    Identifying the deportation victims

    It is difficult to identify the people in the photos we discover. So far, we’ve published 279 biographies in the digital atlas.

    In the future, artificial intelligence may help us identify more people from the photos in our collection. But for now, this process takes exhaustive research with the help of local researchers and descendants of survivors, whose names are known from archived transport lists.

    Families often struggle to recognize individuals in these images, but sometimes they have family photos that help us do so.

    Take, for example, this posed family portrait of two young girls. They are Susanne and Tamara Cohn.

    Susanne and Tamara Cohn, circa 1939.
    Private Archive, CC BY-SA

    Relatives of the Cohn family had this photo. It, along with data from the local Nazi transport list, established that the two girls photographed in one of Hadda’s Breslau deportation shots were the daughters of Willy Cohn.

    Cohn, a well-known German-Jewish medieval historian and high school teacher in Breslau, kept a detailed diary about the persecution of the town’s Jews from 1933 to 1941. It was unearthed and published in the 1990s.

    This photo, below, may be the last picture ever taken of his children with their mother, Gertrud.

    Gertrud, Susanne and Tamara Cohn, Breslau, November 1941.
    #LastSeen Project, CC BY-SA

    New insights

    The #LastSeen research project is generating new insights into the history of Nazi mass deportations, new methodologies for photo analysis and new tools for Holocaust education.

    In addition to the digital atlas, which has been visited by more than 50,000 people since its launch in 2023, we have developed several award-winning educational tools, including an online game that invites students to search for clues, facts and images of Nazi deportations in an artificial attic.

    In workshops for teachers and seminars with students, #LastSeen teaches the history of Nazi deportations and demonstrates how historical photo research works. In Fulda, for example, high schoolers helped us locate the exact places where the photographs were taken.

    Those pictures will be published in our atlas on Holocaust Remembrance Day 2025. A public commemoration in Fulda will feature the local students’ contributions.

    Depending on fundraising, we hope to extend the #LastSeen project beyond Germany. Collecting images from all 20-plus European countries annexed or occupied by the Nazis will help us better understand these crimes and advance research and education in new ways.

    Wolf Gruner is the director of the USC Dornsife Center for Advanced Genocide Research, which is a partner of the multi-institutional research project #LastSeen.

    ref. Newly discovered photos of Nazi deportations show Jewish victims as they were last seen alive – https://theconversation.com/newly-discovered-photos-of-nazi-deportations-show-jewish-victims-as-they-were-last-seen-alive-246929


  • MIL-OSI Global: Microgravity in space may cause cancer − but on Earth, mimicking weightlessness could help researchers develop treatments

    Source: The Conversation – USA – By Sai Deepika Reddy Yaram, Ph.D. Student in Chemical and Biomedical Engineering, West Virginia University

    Cancer cells are more hardy in the low-gravity conditions of space. koto_feja/iStock via Getty Images Plus

    As space travel gains traction and astronauts spend increasing amounts of time in space, studying its effects on health has become increasingly critical.

    Is space travel truly safe? Far from it – research has shown that the effects of space radiation and microgravity on the human body are both detrimental and long-lasting. Creating space conditions on Earth, however, could potentially help researchers treat cancer.

    We are biomedical engineers studying how the body’s cells change under microgravity. Mimicking microgravity conditions on Earth allows researchers to study its effects without the need for space travel.

    Lab research in space

    Microgravity is a condition in which objects appear almost weightless. Astronauts experience it in orbit not because Earth's gravity is absent – in low Earth orbit it is still roughly 90% as strong as at the surface – but because they and their spacecraft are in continuous free fall around the planet.

    Being in a microgravity environment for an extended period of time can lead to several health issues, including bone loss, muscle weakness, face puffiness and heart changes. Even after astronauts return to Earth, their bodies do not completely go back to normal.

    Studying how cells, organs and tissues respond to microgravity can help scientists better understand how to address any related harmful changes to the body. However, conducting research on lab samples in space faces significant challenges.

    In addition to monitoring lab samples, astronauts have no small number of other tasks to attend to while in space.
    NASA/AP Photo

    It is costly to launch equipment and samples, and experiments need to be planned around weightless conditions and the force of launch. Strict deadlines, limited access to space missions and dependence on astronauts to conduct experiments increase the complexity of these studies, making accuracy and cooperation crucial for success.

    Accessing samples after they have been sent to space can also be difficult. They risk being damaged while in the harsh conditions of space and during transport back to Earth.

    The process of planning and carrying out a lab study in space can be time-consuming, limiting the practicality of frequent experimentation.

    Studying microgravity on Earth

    To address these issues, scientists have developed equipment capable of simulating microgravity conditions on Earth.

    One such device is the clinostat, a machine that continuously spins samples to mimic the effects of low gravity. By constantly rotating, it spreads the effects of gravity evenly so that the sample is “weightless” or close to it. To mimic the effects of microgravity, the clinostat must rotate at just the right speed – fast enough that the sample doesn’t react to gravity, but not so fast that it feels other strong forces.
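
    The balance the clinostat must strike can be sketched with a little arithmetic: the centrifugal acceleration a spinning sample feels is ω²r, and it must stay far below Earth's gravity. A minimal Python sketch, with illustrative numbers that are assumptions rather than figures from this article:

    ```python
    import math

    G = 9.81  # Earth's surface gravity, m/s^2

    def centrifugal_accel(rpm: float, radius_m: float) -> float:
        """Centrifugal acceleration a = omega^2 * r at a given radius."""
        omega = rpm * 2 * math.pi / 60  # revolutions per minute -> rad/s
        return omega ** 2 * radius_m

    # Illustrative values (not from the article): a sample 1 cm from the
    # rotation axis, spinning at 10 revolutions per minute.
    a = centrifugal_accel(10, 0.01)
    print(f"residual acceleration: {a / G:.2e} g")  # roughly a thousandth of 1 g
    ```

    Spin much faster and ω²r grows with the square of the speed, so the sample begins to feel a genuine centrifugal force rather than simulated weightlessness – which is why the rotation rate has to sit in a narrow window.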

    Another method called dielectrophoresis places particles such as cells in a nonuniform electric field. Unlike a uniform electric field, which is the same strength and direction everywhere, a nonuniform electric field changes in strength or direction at different points. This uneven field causes cells to move based on differences in their electrical properties compared with the liquid surrounding them, enabling researchers to separate and study them. While this technique has been widely used on Earth, exploring its application in microgravity environments could allow researchers to more precisely manipulate particles and conduct research not feasible under Earth’s gravity.
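
    The direction a particle moves under dielectrophoresis depends on how polarizable it is compared with the surrounding liquid, a comparison captured by the Clausius-Mossotti factor. A minimal sketch, using made-up permittivity values rather than any from this article:

    ```python
    def clausius_mossotti(eps_particle: float, eps_medium: float) -> float:
        """Low-frequency Clausius-Mossotti factor K = (ep - em) / (ep + 2*em).

        Its sign sets the direction of the dielectrophoretic force in a
        nonuniform field: positive K pulls the particle toward stronger-field
        regions, negative K pushes it toward weaker ones.
        """
        return (eps_particle - eps_medium) / (eps_particle + 2 * eps_medium)

    # Made-up relative permittivities: a cell-like particle in a watery medium.
    k = clausius_mossotti(60.0, 80.0)
    print("negative DEP (moves toward weaker field)" if k < 0 else "positive DEP")
    ```

    Because different cell types have different electrical properties, their K factors differ, which is what lets researchers pull mixed populations apart in the same field.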

    Tools and techniques such as clinostats and dielectrophoresis provide an easier, cheaper and faster way to study microgravity’s effects on cells compared with space missions. They are cost-effective and portable, requiring less expensive equipment and a smaller volume of samples to quickly generate reliable data.

    This video demonstrates particles separating via dielectrophoresis.

    Microgravity and cancer

    While microgravity may cause cancer, it could also potentially help researchers better understand and treat cancer.

    Cancer is one of the most challenging diseases to treat because it evolves rapidly and often becomes resistant to available treatments. By observing cancer cells in microgravity, researchers can study how they grow, divide and respond to drugs under different conditions. In simple terms, we are taking cancer cells out of their comfort zone to see how they react to an unknown environment.

    For example, researchers have observed that cancer cells have improved survival under microgravity. They also saw changes to their electrical properties. Other studies have shown that microgravity can alter immune cell function and how cells communicate with each other.

    Our team and others hypothesize that cancer cells may respond more effectively to certain drugs when exposed to a weightless environment. We’re looking into whether we can use microgravity to manipulate cancer cells to behave less aggressively and become more vulnerable to treatment.

    This research is still in its infancy. But if successful, these insights could help researchers develop new treatments that are more effective back here on Earth.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Microgravity in space may cause cancer − but on Earth, mimicking weightlessness could help researchers develop treatments – https://theconversation.com/microgravity-in-space-may-cause-cancer-but-on-earth-mimicking-weightlessness-could-help-researchers-develop-treatments-242895

    MIL OSI – Global Reports

  • MIL-OSI Global: One large Milky Way galaxy or many galaxies? 100 years ago, a young Edwin Hubble settled astronomy’s ‘Great Debate’

    Source: The Conversation – USA – By Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

    The Andromeda galaxy helped Edwin Hubble settle a great debate in astronomy. Stocktrek Images via Getty Images

    A hundred years ago, astronomer Edwin Hubble dramatically expanded the size of the known universe. At a meeting of the American Astronomical Society in January 1925, a paper read by one of his colleagues on his behalf reported that the Andromeda nebula, also called M31, was nearly a million light years away – too remote to be a part of the Milky Way.

    Hubble’s work opened the door to the study of the universe beyond our galaxy. In the century since Hubble’s pioneering work, astronomers like me have learned that the universe is vast and contains trillions of galaxies.

    Nature of the nebulae

    In 1610, astronomer Galileo Galilei used the newly invented telescope to show that the Milky Way was composed of a huge number of faint stars. For the next 300 years, astronomers assumed that the Milky Way was the entire universe.

    As astronomers scanned the night sky with larger telescopes, they were intrigued by fuzzy patches of light called nebulae. Toward the end of the 18th century, astronomer William Herschel used star counts to map out the Milky Way. He cataloged a thousand new nebulae and clusters of stars. He believed that the nebulae were objects within the Milky Way.

    Charles Messier also produced a catalog of over 100 prominent nebulae in 1781. Messier was interested in comets, so his list was a set of fuzzy objects that might be mistaken for comets. He intended for comet hunters to avoid them since they did not move across the sky.

    As more data piled up, 19th century astronomers started to see that the nebulae were a mixed bag. Some were gaseous, star-forming regions, such as the Orion nebula, or M42 – the 42nd object in Messier’s catalog – while others were star clusters such as the Pleiades, or M45.

    A third category – nebulae with spiral structure – particularly intrigued astronomers. The Andromeda nebula, M31, was a prominent example. It’s visible to the naked eye from a dark site.

    The Andromeda galaxy, then known as the Andromeda nebula, is a bright spot in the sky that intrigued early astronomers.

    Astronomers as far back as the mid-18th century had speculated that some nebulae might be remote systems of stars or “island universes,” but there was no data to support this hypothesis. Island universes referred to the idea that there could be enormous stellar systems outside the Milky Way – but astronomers now just call these systems galaxies.

    In 1920, astronomers Harlow Shapley and Heber Curtis held a Great Debate. Shapley argued that the spiral nebulae were small and in the Milky Way, while Curtis took a more radical position that they were independent galaxies, extremely large and distant.

    At the time, the debate was inconclusive. Astronomers now know that galaxies are isolated systems of stars, much smaller than the space between them.

    Hubble makes his mark

    Edwin Hubble was young and ambitious. At the age of 30, he arrived at Mount Wilson Observatory in Southern California just in time to use the new Hooker 100-inch telescope, at the time the largest in the world.

    Edwin Hubble uses the telescope at the Mount Wilson Observatory.
    Hulton Archives via Getty Images

    He began taking photographic plates of the spiral nebulae. These glass plates recorded images of the night sky using a light-sensitive emulsion covering their surface. The telescope’s size let it make images of very faint objects, and its high-quality mirror allowed it to distinguish individual stars in some of the nebulae.

    Estimating distances in astronomy is challenging. Think of how hard it is to estimate the distance of someone pointing a flashlight at you on a dark night. Galaxies come in a very wide range of sizes and masses. Measuring a galaxy’s brightness or apparent size is not a good guide to its distance.

    Hubble leveraged a discovery made by Henrietta Swan Leavitt 10 years earlier. She worked at the Harvard College Observatory as a “human computer,” laboriously measuring the positions and brightness of thousands of stars on photographic plates.

    She was particularly interested in Cepheid variables, which are stars whose brightness pulses regularly, so they get brighter and dimmer with a particular period. She found a relationship between their variation period, or pulse, and their intrinsic brightness or luminosity.

    Once you measure a Cepheid’s period, you can calculate its distance from how bright it appears using the inverse square law. The more distant the star is, the fainter it appears.
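
    The calculation Hubble performed can be sketched numerically: the pulsation period gives the star's intrinsic brightness, and the inverse square law (in astronomers' magnitude form) converts how bright it appears into a distance. The period-luminosity coefficients and the example star below are illustrative assumptions, not figures from this article:

    ```python
    import math

    def cepheid_distance_pc(period_days: float, apparent_mag: float) -> float:
        """Distance via the period-luminosity relation plus the inverse square law.

        The P-L coefficients are illustrative values for classical Cepheids.
        """
        # Intrinsic brightness (absolute magnitude) from the pulsation period
        abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
        # Inverse square law in magnitude form: m - M = 5 * log10(d / 10 pc)
        return 10 ** ((apparent_mag - abs_mag + 5) / 5)

    # A made-up 30-day Cepheid appearing at magnitude 19:
    d = cepheid_distance_pc(30.0, 19.0)
    print(f"about {d / 1e6:.2f} million parsecs away")
    ```

    The key point the sketch captures is the one Leavitt made possible: two measurable quantities, period and apparent brightness, are enough to pin down a distance.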

    Hubble worked hard, taking images of spiral nebulae every clear night and looking for the telltale variations of Cepheid variables. By the end of 1924, he had found 12 Cepheids in M31. He calculated M31’s distance as a prodigious 900,000 light years away, though he underestimated its true distance – about 2.5 million light years – by not realizing there were two different types of Cepheid variables.

    His measurements marked the end of the Great Debate about the Milky Way’s size and the nature of the nebulae. Hubble wrote about his discovery to Harlow Shapley, who had argued that the Milky Way encompassed the entire universe.

    “Here is the letter that destroyed my universe,” Shapley remarked.

    Always eager for publicity, Hubble leaked his discovery to The New York Times five weeks before a colleague presented his paper at the astronomers’ annual meeting in Washington, D.C.

    An expanding universe of galaxies

    But Hubble wasn’t done. His second major discovery also transformed astronomers’ understanding of the universe. As he dispersed the light from dozens of galaxies into a spectrum, which recorded the amount of light at each wavelength, he noticed that the light was always shifted to longer or redder wavelengths.

    Light from the galaxy passes through a prism or reflects off a diffraction grating in a telescope, which captures the intensity of light from blue to red.

    Astronomers call a shift to longer wavelengths a redshift.

    It seemed that these redshifted galaxies were all moving away from the Milky Way.

    Hubble’s results suggested the farther away a galaxy was, the faster it was moving away from Earth. Hubble got the lion’s share of the credit for this discovery, but Lowell Observatory astronomer Vesto Slipher, who noticed the same phenomenon but didn’t publish his data, also anticipated that result.
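
    That distance-velocity relation is now written as Hubble's law, v = H0 × d. A small sketch of the arithmetic, assuming a round modern value of the Hubble constant (about 70 km/s per megaparsec, which is not a figure from this article) and the low-redshift approximation v ≈ cz:

    ```python
    C_KM_S = 299_792.458  # speed of light, km/s
    H0 = 70.0             # Hubble constant, km/s per megaparsec (assumed)

    def recession_velocity(z: float) -> float:
        """Low-redshift approximation: v is roughly c * z."""
        return C_KM_S * z

    def hubble_distance_mpc(z: float) -> float:
        """Hubble's law v = H0 * d, rearranged for distance (valid for small z)."""
        return recession_velocity(z) / H0

    # A 1% shift in wavelength implies a galaxy tens of megaparsecs away.
    print(f"z = 0.01 -> about {hubble_distance_mpc(0.01):.0f} Mpc")
    ```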

    Hubble referred to galaxies having recession velocities, or speeds of moving away from the Earth, but he never figured out that they were moving away from Earth because the universe is getting bigger.

    Belgian cosmologist and Catholic priest Georges Lemaitre made that connection by realizing that the theory of general relativity described an expanding universe. He recognized that space expanding in between the galaxies could cause the redshifts, making it seem like they were moving farther away from each other and from Earth.

    Lemaitre was the first to argue that the expansion must have begun during the big bang.

    Edwin Hubble is the namesake for NASA’s Hubble Space Telescope, which has spent decades observing faraway galaxies.
    NASA via AP

    NASA named its flagship space observatory after Hubble, and it has been used to study galaxies for 35 years. Astronomers routinely observe galaxies that are thousands of times fainter and more distant than galaxies observed in the 1920s. The James Webb Space Telescope has pushed the envelope even farther.

    The current record holder is a galaxy a staggering 34 billion light years away, seen just 200 million years after the big bang, when the universe was 20 times smaller than it is now. Edwin Hubble would be amazed to see such progress.

    Chris Impey does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. One large Milky Way galaxy or many galaxies? 100 years ago, a young Edwin Hubble settled astronomy’s ‘Great Debate’ – https://theconversation.com/one-large-milky-way-galaxy-or-many-galaxies-100-years-ago-a-young-edwin-hubble-settled-astronomys-great-debate-246759

    MIL OSI – Global Reports

  • MIL-OSI Global: As Syria ponders a democratic future: 5 lessons from the Arab Spring

    Source: The Conversation – USA – By Robert Kubinec, Assistant Professor of Political Science, University of South Carolina

    The fall of Bashar Assad’s dictatorship in December 2024 has ushered in a nerve-wracking time of hope and fear for Syrians concerning future governance in the long-war-torn country.

    While it’s unclear what exact political path Syria will take, the dilemmas the country faces are similar to the experiences of other Arab countries more than a decade ago. In the winter of 2010, an outbreak of protests in Tunisia spread across the region, toppling several regimes in what became known as the Arab Uprisings.

    While some countries – Egypt and Tunisia – became democracies, albeit briefly, others, like Yemen, Libya and Syria, descended into violence.

    In the intervening years, political science scholars from across the world have examined these political transformations, looking at why so many of the Arab Uprising countries failed to continue down the path of democratic reform. As a political scientist with expertise in the region, I have distilled this research into five key lessons that could help guide Syria now, as it seeks to build a stable and democratic state.

    1. Islamist politicians are politicians first, Islamists second

    One of the most pressing questions when considering Syria’s post-Assad political direction is the role played by Hayat Tahrir al-Sham, the rebel group that led the overthrow of Assad.

    Hayat Tahrir al-Sham is a former al-Qaida affiliate that has since backed away from extremist ideology – though there are worries that this moderation is temporary. While some observers may think that all Islamist groups want to rigidly enforce a narrow interpretation of Islamic law like the Taliban in Afghanistan, research shows a far wider range of possibilities for the policies Islamist groups implement while in office.

    For example, the Tunisian Islamist group Ennahda stalwartly defended democracy and helped write a liberal constitution after the country ousted Zine El Abidine Ben Ali in 2011. Similarly, in Egypt after strongman leader Hosni Mubarak was removed the same year, the Muslim Brotherhood, a once-banned Islamist movement, competed successfully and fairly in the democratic process, though, of course, it faced the same challenges of any governing party in implementing policies once in power.

    But nor is such a path predetermined. Turkey’s recent democratic backslide and embrace of authoritarianism shows that Islamist politicians like President Recep Tayyip Erdogan can also undermine democracy when it serves their interests.

    What political science research has turned up time and again is that Islamist politicians are like politicians everywhere: When they need to win elections, they will gravitate toward voter concerns. According to regional survey data, a majority of Arabs express a preference for religious leaders who are apolitical.

    If Syria becomes a democracy, Hayat Tahrir al-Sham will, I believe, likely have to continue to embrace moderation. But whether the group backs democracy depends on the organization’s calculation of what its future looks like in democracy versus more authoritarian forms of governance. Broad negotiations that involve all parties in Syria can help convince Hayat Tahrir al-Sham that continuing on a path of moderation is in their best interests. While no one can forecast with certainty what Syria’s new institutions will look like, research shows that Islamists are just as likely as secular parties to support democratic norms.

    2. Ending corruption is all important

    One of the drivers of the Arab Spring and the Syrian revolution was anger over corrupt business deals. Indeed, relatives and cronies of Assad owned de facto monopolies over lucrative industries like cellphone networks. Unwinding these corrupt legacies and opening industries to competition and licensing should be an overriding priority for those seeking a less autocratic future.

    In Tunisia, established businesses fought anti-corruption reforms because they said it would hurt investment and growth. But the reason that economic growth is so poor in many parts of the Middle East is precisely due to these entrenched companies.

    Syria’s diaspora has many capable businesspeople who can return and found innovative companies if the new government opens up investment and entrepreneurship beyond people with political connections.

    3. Political disagreement is OK

    Many hope that Syria’s new government will be freely and fairly elected. For democracy to work, though, it must successfully implement changes in response to voters’ concerns.

    Initially, Syria will need to decide on basic rules like a constitution, which will involve many diverse groups. This broad coalition may have an easier time reaching compromises because of the opposition’s shared experiences under the prior dictatorship. Trying to maintain this unity, however, can mask important political debates that need to occur.

    In order for voters to see change, electoral competition must yield actual policy change. In Tunisia, top-heavy coalitions of parties promoted unity instead of tackling difficult decisions that resonated with people’s daily concerns. Over time, voters stopped identifying with parties and lost confidence in elections. Tunisia’s elected president, Kais Saied, took advantage of this apathy to shut down the country’s parliament – an action that was broadly popular despite the loss of democracy.

    A practical response to this concern is to build strong parties, a cause that pro-democracy organizations like the National Democratic Institute are very good at. Effective parties help voters by putting together a package of policies that will get through parliament and building coalitions.

    While Syria’s opposition has a lot of experience with waging war, it has relatively little in the way of running campaigns and building strong party brands. These more mundane goals are the key connective tissue that makes democracy work.

    4. Bureaucracies should serve the public

    Elections choose leaders, but lasting, popular change also requires bureaucrats who implement new policies – what is known as “horizontal accountability.” Egypt’s post-2011 democratic government left many state institutions untouched and later faced a revolt from autonomous anti-democratic agencies. Meanwhile, in Sudan, which saw a brief interlude of liberalization after the ouster of its longtime dictator, Omar al-Bashir, in 2019, democratic reformers launched an ambitious overhaul of state institutions that still failed because bureaucrats lobbied politicians for support.

    Without cooperative bureaucrats, basic state services fail, which leads to phenomena like crime waves and a loss of confidence in democracy.

    The Hayat Tahrir al-Sham-led government in Syria has already started reforming bureaucracies by prosecuting high-ranking officials from the prior regime while retaining the rank and file. Effective oversight, though, requires participation of elected leaders with the legitimacy to demand accountability from bureaucrats. For those who want to be involved in Syria’s transition, providing technical assistance to quickly rebuild ministries is one way to increase the odds of a successful transition.

    5. Keep the military close

    If Syria’s new government collapses, history suggests the military will be the most likely culprit. Egypt’s military undermined the country’s democratic transition by covertly supporting the anti-Islamist opposition. Sudan’s military acquiesced to protester demands for new leadership but kept de facto control of important government institutions.

    Recent research shows that keeping the military in check means giving it a stake in democracy by funding needed items like salaries and equipment. Just as important, however, is establishing civilian control over the military by mandating that the military report to elected leaders about its budgets, policies, and deployments. Military aid is necessary, yes, but still must be tied to strict commitments to civilian control.

    The future is Syria’s

    Political transitions are too complex to permit easy forecasts. But the experience of nations who saw democracy rise and fall in the Arab Spring and subsequent winter can help Syria’s new leaders avoid costly political mistakes.

    Ultimately, though, the fate of the country rests with its own people. They are the ones who survived Assad’s regime – and who will make the most important decisions for Syria’s future.

    I know and have co-authored with people who wrote some of the studies that are linked to in this article.

    ref. As Syria ponders a democratic future: 5 lessons from the Arab Spring – https://theconversation.com/as-syria-ponders-a-democratic-future-5-lessons-from-the-arab-spring-246203

    MIL OSI – Global Reports

  • MIL-OSI Global: Mpox in the DRC: residents of the slum at the centre of Kinshasa’s epidemic have little chance of avoiding this major health crisis

    Source: The Conversation – Africa – By Yap Boum, Professor in the faculty of Medicine, Mbarara University of Science and Technology

    Walking through the crowded streets of the Pakadjuma neighbourhood in Kinshasa, capital of the Democratic Republic of Congo, I am struck by the vibrant atmosphere around me.

    Children play happily in puddles, surrounded by piles of plastic bags and open ditches of sewage. Shacks patched together from pieces of corrugated iron crowd the settlement. Loud rumba music blasts through the air as young people enjoy themselves in open bars, waiting for grilled pork or chicken to be served. Sex workers sit outside tin shacks in narrow alleyways, calling for customers.

    Nearby, a Médecins Sans Frontières triage centre is the only reminder that this slum area is the epicentre of the mpox epidemic in Kinshasa. There are no posters, pamphlets or banners warning residents of the dangers of this viral disease, which was declared a continental and global emergency in August last year.

    At the clinic, patients suspected to have mpox are sent to one of three dedicated mpox centres in the city. Common symptoms include fever, headache, muscle ache, chills, exhaustion, swollen lymph nodes and lesions. With symptomatic care most patients get better in 7 to 35 days, depending on the severity of the case.

    As an epidemiologist co-leading the response to mpox for Africa Centres for Disease Control and Prevention, I visited Pakadjuma to get a better sense of the situation on the ground.

    Mpox has historically been a rural disease in the DRC. This microcosm of Kinshasa sheds light on the complex challenges of managing the outbreak in a city.

    Fighting on two fronts

    With a population of more than 17 million, Kinshasa is Africa’s biggest megacity. Pakadjuma is one of the city’s many overcrowded areas where people live in extreme poverty.

    Kinshasa, often called “Kin la Belle”, faces a unique crisis in the fight against mpox. Both strains of the virus, clade Ia and clade Ib, are circulating in the city simultaneously. This is the first time this has happened.

    Clade Ia, which is primarily transmitted from animal to human and then within households through touch, has been endemic to Africa for decades.

    Clade Ib is a new strain and contracted predominantly through sexual contact. It is the strain that has spread rapidly across 21 African countries during the current epidemic in east and central Africa.

    This dual transmission makes the fight against mpox even more complicated: how does one tackle a public health crisis rooted in both intimate human connections and structural inequities such as living in overcrowded areas?

    Although the strains are treated similarly clinically, their spread and transmission differ.

    Clade Ia is mainly associated with zoonotic transmission (from animals to humans) in rural areas. Animal surveillance and community education are required to control spillovers.

    Clade Ib, with higher human-to-human transmissibility, necessitates intensified contact tracing, vaccination, and preventive measures in urban and peri-urban areas.

    Tailoring strategies to these differences is key to containing the outbreak.

    When condoms don’t work

    Pakadjuma, in the north-east of the city, is known for poverty and high crime rates. For many girls and young women the sex trade is their only option if they want to survive.

    One of the most pressing challenges to combat the virus in the area is curbing sexual transmission.

    Unlike HIV, where condoms can significantly reduce the risk of spread, mpox poses a different challenge: because the virus is spread by touch, there is no practical preventive measure for sexual transmission apart from complete abstinence.

    Mpox lesions start in the groin, making any movement excruciating. For these sex workers, though, abstinence is not an option. It would mean losing their livelihood and the ability to feed their children.

    For their clients, who come from all over the city, it would require altering a core aspect of their lives for a disease they perceive as less lethal than Ebola. There are no easy answers to this dilemma.

    Tracing the spread

    Contact tracing, a cornerstone of outbreak control, is another hurdle.

    Identifying and tracing the contacts of sex workers is complex. As a result only a fraction of mpox cases are confirmed with laboratory analysis.

    On average, each mpox case has about 20 contacts, yet tracing clients in a highly confidential sexual network is next to impossible.

    Without effective contact tracing, infected individuals remain in the community, often seeking treatment only when their condition worsens. From discussions with Médecins Sans Frontières staff in the triage zone, it emerges that suspected mpox cases usually arrive in advanced stages of the disease, when symptoms are clearly visible. Many patients first attempt other remedies such as traditional healing methods, before seeking medical care.

    Fortunately Kinshasa benefits from a strong laboratory network led by the Institut National de la Recherche Biomédicale and test results are available within 48 to 72 hours. This state-of-the-art institute was pioneered by Dr Jean Jacques Muyembe, the microbiologist who first discovered Ebola.

    In the first week of January 2025 there were 1,155 confirmed cases and 27 deaths in the city, according to the DRC Ministry of Health.

    Even for those who seek care at the dedicated mpox centres, navigating the chaotic, congested roads is a nightmare. Yellow minibuses – ominously known locally as the “Spirit of Death” – are crammed and it can take hours to get to a destination.

    With increasing patient numbers, mpox centres in the city are overwhelmed.

    The fight on all fronts

    Addressing the mpox outbreak in Kinshasa requires a multifaceted approach which includes:

    Vaccination: Blanket vaccination drives offer the strongest hope for controlling the outbreak in hotspots such as Pakadjuma where contact tracing is almost impossible. In these cases the whole community needs to be vaccinated.

    This could break transmission chains while allowing individuals at risk, such as sex workers, to continue plying their trades.

    Prevention and control: Home care is essential, particularly in informal settlements like Pakadjuma. Providing food and material support to patients and their families and encouraging the isolation of infected relatives will help to limit the spread of the disease.

    These measures require new thinking, however, when people are trying to survive from day to day.

    Talking to the community: This is difficult because of the stigma around the disease, but it must be at the heart of the response.

    Amplifying the message: The media, local leaders and trusted community members need to be engaged to spread the word loud and clear.

    This all needs to happen immediately or the epidemic will be almost impossible to contain in this vast, sprawling city. The consequences would be dire.

    Yap Boum does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Mpox in the DRC: residents of the slum at the centre of Kinshasa’s epidemic have little chance of avoiding this major health crisis – https://theconversation.com/mpox-in-the-drc-residents-of-the-slum-at-the-centre-of-kinshasas-epidemic-have-little-chance-of-avoiding-this-major-health-crisis-247809

    MIL OSI – Global Reports

  • MIL-OSI Global: Donald Trump is firing out presidential pardons and warnings of retribution. What happens next?

    Source: The Conversation – UK – By Adam Quinn, Associate Professor in American and International Politics, University of Birmingham

    Donald Trump has now pardoned or commuted the sentences of around 1,500 January 6 protesters, including those who were convicted of crimes against police officers relating to the riot at the US Capitol.

    But use of the presidential pardon in the last few days was not restricted to the incoming president. On his last day in office, outgoing president Joe Biden signed a number of pre-emptive pardons in an effort, he suggested, to shield people from possible “retribution” at Trump’s hands.

    This included not just members of the House committee that investigated the Capitol riot, but also Anthony Fauci, former chief medical advisor to the president during the COVID pandemic, and Gen. Mark Milley, who retired in 2023 after four years as the nation’s most senior military officer, and who Trump has previously suggested would have been executed for treason in a previous era.

    In December, Biden granted his son Hunter a sweeping pardon, and he extended the same to several other relatives in the final minutes of his presidency. In an accompanying statement he said: “Even when individuals have done nothing wrong — and in fact have done the right thing — and will ultimately be exonerated, the mere fact of being investigated or prosecuted can irreparably damage reputations and finances.”

    Such pardons may be greeted with ambivalence by some recipients. One person who received a pardon was Adam Schiff, now a US Senator and previously a House member who both served on the Jan 6 committee and was lead prosecutor in Trump’s first impeachment. He had previously declared he did not want such a pardon because, first, it was unnecessary since he had done nothing wrong, and, second, it set a bad precedent. We may find out in the months and years ahead whether he was right on either count.

    So how did we get here?

    A year ago, Trump faced a daunting obstacle course of criminal cases. Among them, he faced trial in New York for falsifying business records. Federal prosecutors had indicted him for trying to steal the 2020 election, and for illegally holding onto classified documents after his presidency ended. He also faced state-level election subversion charges in Georgia.

    By the time of his inauguration, however, his legal problems had been almost entirely resolved. He was convicted on the New York charges, but his punishment, an unconditional discharge, is a slap on the wrist. The greatest symbol of Trump’s victory over legal threats, however, is the shelving of the two federal cases against him. Both cases have now been dismissed at the request of the Justice Department because its policy prevents a criminal case against a sitting president. Even if this were not the case, as head of the executive branch Trump would have authority to order them dropped.




    Read more:
    Nixon’s official acts against his enemies list led to a bipartisan impeachment effort


    Trump enters a second term freer of personal legal jeopardy than he has been in years. He is convinced that the cases against him represented a weaponisation of the criminal justice system by his political opponents. Now that he is restored to the highest office, there are widespread fears that he may wield federal power to retaliate against those he believes have wronged him.

    In the run-up to the election he spoke often about “retribution” against “the enemy within”. An NPR investigation of Trump’s rallies and social media posts since 2022 found more than 100 instances of his explicitly or implicitly threatening to “investigate, prosecute, jail or otherwise punish his perceived opponents”.

    He has repeatedly said that he “would have every right” to go after those he believes have waged “lawfare” against him over the last several years.

    If he does decide to try, it is less likely than during his first term that top officials will block or dissuade him. Trump’s current nominee for attorney general, former Florida attorney general Pam Bondi, was part of his defence team during his first impeachment trial in 2020, then an active supporter of his campaign to overturn the 2020 election. During her Senate confirmation hearing she refused to say that she would defy pressure from Trump, but she did say that “politics will not play a part” in deciding who to investigate. Few will have felt completely reassured.

    More concerning still, Christopher Wray, director of the FBI, the leading national criminal investigative agency, resigned before the end of his term after Trump declared he intended to replace him with Kash Patel. Patel, more than any other senior Trump nominee, has spent his career at the heart of the post-2016 Maga movement. He held junior roles late in the first Trump administration, but in the years since he has advocated using criminal and civil prosecution to root out “conspirators” among journalists and government officials.

    Patel even published a book containing a list of “Members of the Executive Branch Deep State” (including both Democratic and Republican appointees), seen by some as an “enemies list”. Some believe this appointment suggests restraint is unlikely.




    Read more:
    Trump’s election interference case may be closed, but it still matters for America’s future


    The January 6 rioters and plotters were among the first beneficiaries of the transfer of power. While campaigning Trump had portrayed them as martyrs to his cause and pledged pardons. He made good on that promise on day one by pardoning or commuting sentences. He also ordered the Justice Department to dismiss all pending indictments.

    It remains to be seen what approach the new president will take toward those who have worked prominently against him. He had previously said that some who served on the Congressional committee investigating the attack on the Capitol “should go to jail”, often singling out former Republican Congresswoman Liz Cheney, who also received a pre-emptive pardon from Biden. Trump has also suggested that Biden should have issued a pardon for himself.

    It is doubtful that targeted investigations could ultimately produce criminal convictions without some plausible case. For the time being at least, US courts and the jury system retain sufficient independence that blatantly groundless and malicious prosecutions would struggle to get that far against targets with the resources to defend themselves.

    But as previous federal probes have illustrated – such as those into the Clintons – even an investigation that ultimately stops short of bringing charges against its top targets can last years, impose significant legal expenses on those embroiled in it, and inflict stress and distraction.

    The aim of this kind of action may be to instil a climate of anticipatory fear in which outspoken criticism in the future seems, to most, more trouble than it is worth. The US is not there yet. But it is closer to such a state than it has been in any of our lifetimes.

    Adam Quinn has previously received research funding from the Economic and Social Research Council (ESRC) and the Charles Koch Foundation (CKF)

    ref. Donald Trump is firing out presidential pardons and warnings of retribution. What happens next? – https://theconversation.com/donald-trump-is-firing-out-presidential-pardons-and-warnings-of-retribution-what-happens-next-247646

    MIL OSI – Global Reports

  • MIL-OSI Global: Netflix’s La Palma’s ‘megatsunami’ has been debunked

    Source: The Conversation – UK – By Hannah Little, Lecturer in Communication and Media, University of Liverpool

    In the Netflix series La Palma, a Norwegian family is on holiday in the Canary Islands when a young researcher discovers alarming signs of an imminent volcanic eruption. Cumbre Vieja is an active volcano on La Palma, which last erupted in 2021. The series culminates in a “megatsunami” capable of engulfing Europe and reaching as far as the west coast of the US.

    It’s a truly terrifying prospect.

    Disaster stories are hugely popular and La Palma is just the latest hit in the growing genre. In his book Disaster Mon Amour, the film critic David Thomson identifies the filmmakers’ goal of creating “a spectacle of devastation with cozy human interest”. But stories like La Palma can have real world impact.

    The series presents itself as being based on a real hypothesis, which is communicated by newscasters and a scientist in the title sequence of each episode. The tsunami expert Simon Day, whose research inspired the show, is also thanked in the closing credits. However, La Palma does nothing to capture the more up-to-date and reassuring science.




    While volcanic events can trigger tsunamis, as experts in volcanoes and the communication of disaster, we can assure you that the eruption and subsequent rapid collapse of the island depicted in the series isn’t a plausible scenario that scientists are concerned about.

    Such “megatsunami” scenarios have been debunked in recent years, you’ll be happy to hear. What should be taken more seriously are localised tsunamis.

    There have been more than 17 eruptions in the Canary Islands since the 1400s, none resulting in a “megatsunami” across the Atlantic.

    Stories have the power to communicate information about environmental risk to audiences. Following the release of the series, some have dug up the megatsunami hypothesis, bringing it back into public awareness.

    The idea of a “megatsunami”, triggered in the way it is in La Palma, first arose in a 2001 paper by Simon Day, the academic the series thanks in its credits, and the geophysicist Steven Ward, based on one extreme hypothetical scenario. This theory has since been proven false by subsequent studies, which show that a Canary Islands eruption and collapse might reach the US with a maximum wave height of one to two metres, similar to a storm surge, not the 25-metre waves depicted in La Palma. Newer research has also called into question the scale of the landslide used in the original study that would be needed to cause such a tsunami.

    Since the initial work, we understand a lot more about how large landslides and tsunamis occur, and the computer models used to test tsunami scenarios have improved. Research on the underwater landslide deposits has shown that these collapses occur in multiple, smaller steps, not one massive slide into the ocean. Such a large tsunami would leave telltale deposits in North and South America – but they are nowhere to be found.

    The importance of understanding the risk relating to real volcanoes was encapsulated during the 2021 eruption of Cumbre Vieja. As the eruption progressed, volcanologists received messages from concerned and frightened people fearing a megatsunami, which prompted the US Geological Survey to respond outlining why the hypothesis doesn’t hold. This was even before a major Netflix drama had dramatised such an imaginary event.

    Volcanogenic tsunamis of all sizes are a real threat around the world, and hazards experts want to know what our risks are so we can prepare and protect our communities. This becomes difficult when facts are diluted or distorted by stories like La Palma’s. Volcanologists with limited resources during an eruption end up spending more time debunking misinformation than talking to the press about the actual potential dangers.

    During the 2021 eruption, the people of La Palma suffered greatly and continue to struggle with claiming compensation and rebuilding their homes or accessing their properties. Tourist numbers dropped to a third of pre-pandemic levels after 2021’s volcanic eruption.

    Misinformation about eruptions and their risks can add to the stress of those inhabiting or visiting volcanic islands, not only concerned about their own safety, but the security of an economy that relies heavily on tourism. With the right information, we can empower communities to prepare themselves and to act fast when the time comes.

    A lot of people watch Netflix, but not many people read scientific papers on volcanology. Given this, it might be that the responsibility of getting the science right and accurately representing risk should lie with the people with a captive audience. There is an opportunity to work with scientists to help spread the right information alongside promotion for future stories about such disasters.

    Simon Day was approached for comment but hadn’t responded by the time this article was published.

    Katy Chamberlain received funding to work on the 2021 La Palma eruption from the Natural Environment Research Council (NERC) Urgency grant number: NE/ W007673/1

    Hannah Little and Janine Krippner do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Netflix’s La Palma’s ‘megatsunami’ has been debunked – https://theconversation.com/netflixs-la-palmas-megatsunami-has-been-debunked-246916

    MIL OSI – Global Reports

  • MIL-OSI Global: Why meteorologists are comparing Storm Éowyn to a bomb

    Source: The Conversation – UK – By Suzanne Gray, Professor of Meteorology, University of Reading

    A satellite image of the British Isles during Storm Éowyn’s descent. ©EUMETSAT (2025), CC BY

    Storm Éowyn is today unleashing strong and damaging winds over the British Isles, and particularly over Ireland and Scotland.

    Air pressure at the centre of the storm plummeted 50 millibars in the 24 hours leading up to midnight on January 24. That’s more than twice what is required in the definition of “explosive cyclogenesis”, in other words, the development of a cyclonic (anticlockwise rotating) storm that is both rapid and severe – like a bomb going off. As a result, Éowyn can be termed a “bomb cyclone”.
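    The “explosive cyclogenesis” benchmark is commonly given as a central pressure fall of 24 millibars in 24 hours at 60° latitude, scaled by sin(latitude)/sin(60°) elsewhere. A minimal sketch of that arithmetic, using the 50-millibar fall reported above and assuming a representative latitude of about 55°N for Éowyn (the latitude is an illustrative assumption, not a figure from the article):

```python
import math

def bomb_threshold(lat_deg):
    """Latitude-adjusted pressure-fall threshold (millibars per 24 h) for
    explosive cyclogenesis: 24 mb/24 h at 60 degrees latitude, scaled by
    sin(lat)/sin(60)."""
    return 24.0 * math.sin(math.radians(lat_deg)) / math.sin(math.radians(60.0))

drop = 50.0                      # Eowyn's reported 24-hour pressure fall, mb
threshold = bomb_threshold(55.0) # ~22.7 mb per 24 h at 55 N
print(drop / threshold)          # ratio above 2: "more than twice" the criterion
```

Even against the unadjusted 24-millibar figure, a 50-millibar fall comfortably exceeds double the threshold, which is why Éowyn qualifies as a “bomb cyclone”.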

    It is not unusual for winter storms in this part of the world to reach bomb cyclone status. However, only a very few in recent years have deepened at a rate comparable to that of Storm Éowyn.

    The exceptional intensity of Storm Éowyn was predicted and it has prompted the Met Office and Met Éireann to issue red warnings covering the whole island of Ireland and central and southern Scotland. This tells the public to expect widespread gusts of 80-90mph and up to 100mph in the most exposed locations. A record-breaking gust of 114 mph has this morning been provisionally reported at Mace Head on Ireland’s west coast.

    Similar intense storms have left widespread damage and tragically claimed lives. Some, such as the infamous Great Storm of 1987, have entered popular culture.

    Éowyn’s place in history

    The maximum gust during the Great Storm was measured as 115mph at Shoreham, on the West Sussex coast. However, the anemometer stopped recording immediately afterwards, so the real peak may have been higher.

    A scientific paper has cast doubts on the UK national low-level wind gust record (so, excluding mountain summits) of 142mph. This was recorded at Kinnaird Head Lighthouse at Fraserburgh in Aberdeenshire, Scotland on February 13 1989. The researchers documented brief power supply interruptions to the recording anemograph, which could have given a faulty reading.

    The highest wind gust measured in England is 122mph. This was recorded at the Needles, a very exposed site at the edge of the Isle of Wight, during Storm Eunice in February 2022. Two gusts of similar strength were recorded less than two years later (November 2023) in Brittany during Storm Ciarán.

    In Ireland, the strongest gust recorded by an inland low-altitude weather station was during ex-Hurricane Debbie in 1961, with 113mph measured at Malin Head, the most northerly point of mainland Ireland. A gust of 97mph was measured in October 2017 at Roche’s Point at the entrance to Cork harbour during ex-Hurricane Ophelia.

    The measurements we’re now seeing during the passage of Storm Éowyn are up there with those recorded during the most infamous storms of recent years and decades.

    What makes a storm ‘explode’

    Like making a cake, there are several key ingredients to cooking up an explosively developing bomb cyclone like Storm Éowyn.

    A strong jet stream – the ribbon of winds about six miles up in the atmosphere over the North Atlantic – is one. Winds here are currently exceeding 200 mph – their strength is linked to the strong temperature contrast between the cold plunge of air across the eastern US and the far warmer air over the western North Atlantic.

    This strong jet has provided the energy for the storm’s development and is also the cause of its race towards the UK across the North Atlantic. Storm Éowyn came to life off the eastern seaboard of the US during Wednesday January 22 and will have covered over 2,000 miles before it arrives off western Scotland by Friday midday.

    The low-pressure centre of Storm Éowyn crossed the jet stream from south to north en route, an ideal track for explosive development.

    Éowyn’s heavy rainfall as it tracks towards the UK is a result of another key ingredient for explosive storm development: deep clouds within the storm that generate energy when their water condenses. These clouds are fed by strong fluxes of heat and moisture from the warm ocean surface, and scientists have been detecting record-warm surface ocean temperatures in the North Atlantic in recent years.

    The role of climate change

    When a storm such as Éowyn occurs, people ponder the role of climate change in fuelling its strength. Our experiences of future storms will depend on what tracks these storms typically take and how that influences their intensity. Stormy weather is, of course, not unusual in the autumn and winter over the British Isles and it requires detailed research to attribute the strength of any specific storm to climate change.

    To date, the observed trends in storminess have not provided a conclusive link with climate change. The latest assessment report from the Intergovernmental Panel on Climate Change, experts relating to all aspects of climate change who are convened by the United Nations, states that there is “low confidence” in the direction of trends in the number and intensity of extratropical storms (those that form outside of the warm band surrounding Earth’s equator) over the last century.

    One reason why it is difficult to make this link is that the position and variability of storminess is very dependent on the jet stream, and its position varies a lot from day to day, week to week, and beyond. Large-scale climate patterns such as the El-Niño Southern Oscillation and North Atlantic Oscillation, and sea surface temperatures and the extent of sea ice are also likely to be important factors.

    Despite this uncertainty, there are indications that in the future, winter storms may become more frequent and more clustered (such that several storms occur within a few days of each other), which can exacerbate their overall impact. The frequency of storms with extreme winds may also increase. Rainfall is highly likely to increase, as a warmer atmosphere can hold more moisture.

    Another thing that could change about intense storms in future is their propensity to develop “sting jets”. Sting jets are descending airstreams that can produce particularly destructive surface winds, as in the Great October storm, Storm Eunice and Storm Ciarán. Sting jets are short-lived and occur over very small areas, making them hard to predict and identify.

    There is speculation over whether a sting jet has descended during Storm Éowyn. Post-event verification will be needed. While the overall impact on wind speed is uncertain, the small number of studies that have considered sting jets in future cyclones have predicted an increase in their likelihood.

    Cyclones that are capable of producing sting jets also typically show more vigorous cloud development, consistent with the hypothesis that the intense storms of the future will be influenced by our hotter and wetter atmosphere.


    Don’t have time to read about climate change as much as you’d like?

    Get a weekly roundup in your inbox instead. Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. Join the 40,000+ readers who’ve subscribed so far.


    Suzanne Gray has previously received or currently has funding from the Natural Environmental Research Council and AXA Research Fund to work on sting jet storms, and storms in the Arctic and Mediterranean regions.

    Ambrogio Volonté has previously received or currently has funding from the Natural Environmental Research Council, AXA Research Fund and the University of Reading to work on sting jet storms, and storms in the Arctic and Mediterranean regions.

    ref. Why meteorologists are comparing Storm Éowyn to a bomb – https://theconversation.com/why-meteorologists-are-comparing-storm-eowyn-to-a-bomb-248203

    MIL OSI – Global Reports

  • MIL-OSI Global: Sexism linked to social ills for men and women, finds largest cross-cultural study of its kind

    Source: The Conversation – UK – By Magdalena Zawisza, Associate Professor in Gender and Consumer Psychology, Director of Groups and Societies Research Centre and Chair of Faculty Athena Swan Committee, Anglia Ruskin University

    Feminism is facing a backlash, with women’s rights being rolled back in many countries and a significant number of people saying feminism has gone far enough or even too far. Yet women still face basic obstacles to education in some countries and are generally paid less than men. They still suffer from male violence and, in some places, face increasing restrictions to reproductive rights. There are even some places where families force midwives to kill their newborn girls.

    Many women are also fed up with doing both a full-time job and the lion’s share of domestic duties and unpaid caring jobs. It’s easy to wonder whether gender equality is simply impossible, especially as many men inaccurately perceive that gains for women equate to losses for men.

    But there is hope. Our 62-nation psychological study, which is the largest of its kind, suggests that gender equality benefits us all and that sexism is harmful to everybody – women, men and nations – in many surprising ways. As such, we all have an interest in promoting egalitarianism.

    As our findings show, sexism is linked with several social ills affecting us all. For example, higher sexism predicted lower GDP – indicating lower economic productivity. It also predicted a lower “global peace index” score, meaning higher domestic and international conflict and militarisation, and lower safety and security.

    Further, sexism was linked to a greater level of antidemocratic practices in a given country. Lastly, it even predicted shorter healthy lifespans (ones without chronic disease or disability) in women and men, as measured with the WHO’s Healthy Life Expectancy in Women and Men. For example, our data reveal that a one-point increase in sexism (measured on a 0-5 scale) is linked with a 9.12-month shorter healthy lifespan in men and an 8.88-month shorter one in women.
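    As a rough illustration of what that linear association implies (purely arithmetic – the study reports an association, not causation, and the two-point gap below is a hypothetical example, not a comparison drawn from the data):

```python
# Reported association: months of healthy life expectancy lost per
# one-point rise on the study's 0-5 sexism scale.
MONTHS_PER_POINT_MEN = 9.12
MONTHS_PER_POINT_WOMEN = 8.88

def lifespan_gap_years(sexism_point_difference, months_per_point):
    """Years of healthy life expectancy associated with a given difference
    in national sexism scores, assuming the reported linear link holds."""
    return sexism_point_difference * months_per_point / 12.0

# A hypothetical two-point difference between countries would correspond to
# roughly a year and a half of healthy life for both men and women.
print(lifespan_gap_years(2, MONTHS_PER_POINT_MEN))
print(lifespan_gap_years(2, MONTHS_PER_POINT_WOMEN))
```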

    While the type of analysis we did cannot directly prove that sexism causes these issues, the pattern of our findings aligns with theoretically driven predictions and with experiments that directly test such links on a smaller scale. It makes more sense to expect that sexism leads to poor health than that poor health leads to sexism, for example.

    Specifically, other research reports that sexism reduces human capital by restricting women’s education and job opportunities, thus depleting economic productivity. A country where most women work is likely to have much higher productivity than a country where all the women stay at home.

    Research also shows that sexist masculine norms encourage male violence contributing to greater conflict. And we know that sexism is linked to medical discrimination for women, such as less medical research on women and treating women’s complaints as less credible. This may lead to poorer health.

    Sexism prevents men from getting help with their mental and physical health.
    YURII MASLAK/Shutterstock

    For men, sexism discourages seeking help for psychological or medical problems, seeing it as weakness. It also encourages risk-taking, such as aggression or not using seatbelts. This may well cause a reduction in health and wellbeing.

    Two faces of sexism

    Importantly, our study also reveals that affectionate but patronising attitudes to women are harmful to all – and you might not even recognise them as sexist. If so, you are not alone.

    Thirty years after its conception, our research supports the ambivalent sexism theory. The theory proposes that sexism has two faces: hostile and benevolent. While both are ugly, the latter hides under the veil of superficial positivity. Hostile sexism is open and overt hostility to non-traditional women and a desire to punish those who break norms, such as female politicians.

    Benevolent sexism, on the other hand, is superficially positive but patronising. It includes attitudes that reward traditional women, such as stay-at-home mums, by idealising them, offering them male protection and provision. This sounds innocent, but such beliefs imply women’s weakness.

    In fact, research has shown that exposure to benevolent sexism increases women’s acceptance of hostile sexism, decreases their work performance, and reduces their support for gender equality action.

    Both ideologies work together to maintain men’s power over women: they form a system of rewards and punishments akin to the iron fist (hostility) in a velvet glove (benevolence). Thus, hostile and benevolent sexism are also internalised by women.

    Our study shows that people who hold benevolent sexist views are also more likely to hold hostile sexist views, as the two correlate positively in 62 countries across five continents. Compared with 2000, when the last such study was done in 19 countries, average national sexism scores dropped a meagre 0.47 points (on a 0-5 scale). See our world map of this and other concepts we measured.

    While men are more sexist than women around the world, women’s beliefs about themselves are also sexist to some extent. Interestingly, as men’s hostile sexism increased, women embraced benevolent sexism more (sometimes outscoring men) – probably attempting to secure the promised protection and provision.

    Unfortunately, this benevolent promise appears false. Across our 62 countries, the higher the benevolent sexism, the lower the gender equality and women’s labour participation, and the more time women spent on unpaid domestic chores.

    Taken together, our research suggests that it may well be in the interests of women, men and nations alike to tackle sexism for a better future for us all. In other words, women’s gains mean men’s gains too.

    Dr Magdalena Zawisza received funding for activities related to this study from the National Science Centre, Poland. She volunteers her expertise to Women on Boards CIC Leadership Committee and Think Tank, UK.

    This research was funded by a grant from the National Science Centre in Poland (grant number: 2017/26/M/HS6/00360) awarded to Natasza Kosakowska-Berezecka

    ref. Sexism linked to social ills for men and women, finds largest cross-cultural study of its kind – https://theconversation.com/sexism-linked-to-social-ills-for-men-and-women-finds-largest-cross-cultural-study-of-its-kind-247183

    MIL OSI – Global Reports

  • MIL-OSI Global: Emotions change our perception of time – as demonstrated on The Traitors

    Source: The Conversation – UK – By Ruth Ogden, Professor of the Psychology of Time, Liverpool John Moores University

    In the UK version of the TV show The Traitors, contestants were given five minutes to find as much gold as they could, put it into cages and hoist them before the time ran out. There was a catch though – they weren’t given any information about when the five minutes were up.

    Instead, they had to use their internal sense of time to decide when to end the task. Stopping the task too soon meant they collected fewer gold pieces. Stopping the task too late would mean all their gold would be discarded. Accurate timing was therefore the key to success – but interestingly, they chose to end the task after just three minutes.

    Why are we so spectacularly bad at judging time? Can you time a minute or an hour perfectly without using a clock? You may be surprised to realise you are not as good at this as you think.

    We don’t have a clock in our brains that keeps track of time perfectly. As a result, time can often feel like it is passing more quickly or slowly than normal. This is because our experience of time is shaped by our activities and emotions.

    Emotional bias

    An extreme example of this is what happens when we think we are about to die. If you’ve ever been in a car accident, you have probably experienced the sensation that time is slowing down, and everything is happening in slow motion.

    When we experience extreme threats, flight or fight responses kick in, our heart rate increases and the insula, an area of the brain responsible for emotion processing, becomes activated. This change in our brain activity and bodies also appears to be responsible for distorting our sense of time.

    We actually demonstrated this in recent research where we explored how people perceived time when walking across a virtual crumbling ice bridge. Wearing a VR headset, participants were tasked with walking from one end of a mountain ice-bridge to the other.

    As they walked, the ice blocks beneath them would crack or give way entirely – causing them to “fall” to the ground. Throughout the task we monitored our participants’ heart rate and how much they sweated.

    Our results show that people rarely felt like time passed as normal during this task. Instead, they often felt like time was passing more slowly than normal. Critically, those who experienced the biggest change in arousal during the task were the ones who were most likely to report that time was slowing down as they traversed the bridge. Controlling our emotions is therefore key to maintaining a stable and accurate sense of time.

    It’s not just near-death situation which distort our sense of time. Events during normal daily life govern how quickly we feel like time is passing. Research shows that time really does pass more quickly when we are happy, and it crawls at the pace of a snail when we are bored. These distortions to time are caused by changes in how much attention we pay to time.

    Our brains have a limited capacity. We only really attend to time when it is highly relevant to what we are currently doing, or when there is a high degree of uncertainty about time.

    When we are having fun and socialising with friends, time is rarely a priority, so we pay less attention to its passing than normal. As a result, these types of positive events tend to feel like they are passing more quickly than normal.

    However, when we are dreading a future event, or desperate for a current one to end, we have a tendency to obsess over time. This causes us to pay more attention to time than normal, resulting in the sensation that it is passing slowly.

    Uncertainty over time

    Being uncertain about time has the same effect. When waiting for a delayed train, for example, our level of temporal uncertainty is high because we don’t know precisely when (if ever) our wait will end. Not knowing when an event will occur causes us to focus on time, and this fixation on time is the reason that it drags.

    Time drags when waiting for a train.
    zhukovvvlad/Shutterstock

    During The Traitors gold searching task, time seemed to fly for the contestants, making them feel like it had been five minutes when it had actually only been three. This is probably because the stress of finding the gold, while running around on uneven terrain and constantly trying to keep an eye out for someone stabbing them in the back, took most if not all of their thinking capacity.

    As a result, despite the importance of time to the task, the contestants simply paid too little attention to time to accurately process it. This, coupled with the increased arousal caused by all the running around and the fear of getting the task wrong, left them hopelessly unable to accurately keep track of time. Ultimately, changes in their attention and arousal resulted in them ending the task prematurely and missing out on much-needed prize money.

    Understanding the ways in which attention and emotion affect our sense of time can help us to overcome the sense of time flying and dragging when we don’t want it to. If you find yourself in a state of distress, and sense that the world is slowing down around you, the best thing to do is to try to stay calm and reduce your level of arousal. This will help time to speed up.

    But when you find yourself clock-watching, perhaps waiting for a shift at work to end, distraction is key to making that time fly. By focusing on things other than time, you can trick yourself into feeling like time is passing more quickly, reducing how long you feel like you are in a state of torment.

    Ruth Ogden receives funding from The British Academy, The Wellcome Trust, the Economic and Social Research Council, CHANSE and Horizon 2020. This piece was written as part of the Wellcome Trust Project “After the End” 225238/Z/22/Z and the ESRC project TIMED (ES/X005321/1).

    ref. Emotions change our perception of time – as demonstrated on The Traitors – https://theconversation.com/emotions-change-our-perception-of-time-as-demonstrated-on-the-traitors-248254

    MIL OSI – Global Reports