Category: Global

  • MIL-OSI Global: Candidate experience matters in elections, but not the way you think

    Source: The Conversation – USA – By Charlie Hunt, Assistant Professor of Political Science, Boise State University

    Previously holding political office is an obvious advantage for candidates seeking votes. SDI Productions/E+/Getty Images

    Ever since he was chosen as Donald Trump’s running mate back in July, U.S. Sen. JD Vance, a Republican from Ohio, has come under a level of scrutiny typical for a vice presidential candidate, including for some of his eyebrow-raising public statements made in the past or during the campaign.

    One line of critique has persisted through the news cycles: that his lack of political experience may make Vance less qualified than others, including his opponent, Gov. Tim Walz of Minnesota, to be vice president.

    Do more politically experienced politicians have advantages in elections? And if they enjoyed such advantages in the past, do they still enjoy them in such a polarized political moment?

    The answers are complicated, but political science offers some clues.

    Why experience should matter

    Previously holding political office, and for a longer period of time, is in some ways an obvious advantage for candidates making the case to potential voters. If you were applying for a job as an attorney, previous legal experience would be favorably looked upon by an employer. The same is true in elections: If you want to run for office, experience as an officeholder could help you perform better at the job you’re asking for.

    This approach has been taken by a number of high-profile politicians over the years. For example, in Hillary Clinton’s first campaign for president in 2008, the U.S. senator from New York and future secretary of state made “strength and experience” the centerpiece of her argument to the voters.

    Experience also might matter for the same reasons as incumbency – that is, when a candidate is currently holding the office they are seeking in an election. Incumbents typically have much higher name recognition than their challenger opponents, distinct fundraising advantages and, at least in theory, a record of policy achievement on which to base their campaigns. Even for nonincumbents, these advantages are more prevalent for previous officeholders rather than someone who is a newcomer to politics.

    Barack Obama and his family on Nov. 4, 2008, the day he won the presidential election.
    Emmanuel Dunand/AFP via Getty Images

    Inexperienced, or an ‘outsider’?

    But Hillary Clinton was, of course, unsuccessful in her first bid for the Democratic presidential nomination in 2008. She was beaten by a relatively inexperienced candidate named Barack Obama; like Vance, Obama had served less than a full term in the Senate before running for higher office.

    Obama’s 2008 win shows that a lack of political experience can be leveraged as a benefit.

    One of the few things Obama and Donald Trump have in common is that both benefited from an appeal to voters as a political “outsider” in elections in which Americans were frustrated with the political status quo. As outsiders, they appeared uniquely positioned to fix what voters believed was wrong with politics.

    Does experience equal ‘quality’?

    The “outsider” label isn’t always a ticket to victory.

    In 2020, for example, voters were frustrated with the chaos of having a political outsider in the White House and turned to Joe Biden – possibly the most experienced presidential candidate in modern history at that point, with eight years as vice president and several decades in the Senate under his belt. Hungry for political normalcy in the White House, voters chose accordingly.

    Does U.S. Sen. JD Vance’s lack of political experience make him less qualified than his opponent, Gov. Tim Walz of Minnesota, to be vice president?
    Scott Olson/Getty Images

    Political science has other important lessons about when experience matters and when it doesn’t. In Congress, electoral challengers – those running against incumbents – enjoy more of a boost from prior experience in places such as the state legislature. In fact, the typical indicator for challenger “quality” used in political science research is a simple marker of whether the challenger has prior political experience.

    But even this finding is more complicated than it seems: Political scientists such as Jeffrey Lazarus have found that high-quality – that is, politically experienced – challengers do better in part because they are more strategic in waiting for better opportunities to run in winnable races.

    Experience matters only sometimes – and maybe less than ever

    The usefulness of a lengthy political resume also depends on which stage of the election candidates are in.

    Research has found, for example, that a candidate’s experience matters much more in settings such as party primaries, where differences between the candidates on policy issues are typically much narrower. That leaves nonpolicy differences such as experience to play a bigger role.

    In the general election, voters supportive of one party are unlikely to weigh candidate experience heavily – even, or especially, when the candidate they support lacks it.

    The political science phenomenon known as negative partisanship means that, more and more, voters are motivated not by positive attributes of their own party’s candidates but rather by the fear of losing to the other side. This has only been exacerbated as the two parties have polarized further.

    Voters are therefore more willing than ever to lower the standards they might have for their favored candidates’ resumes if it means beating the other side. Even if a Democrat is clearly more qualified than a Republican in terms of political experience, that advantage is unlikely to sway many Republican voters, and vice versa.

    What about 2024?

    In 2024, the experience factor is complicated. Trump, of course, has been president before – the ultimate prior experience for someone running for exactly that office.

    But he has continued to run as an outsider from the political establishment, casting Kamala Harris – who, as vice president, has little actual institutional power – as an incumbent who is responsible for the current state of the country. Since polls show consistently that a majority of Americans believe the country is not headed in the right direction, we can see why Trump might try to frame the race in this way.

    Whether Trump’s strategy ends up working will be more apparent after the election is over. For now, Trump and Harris can rest assured that most of their supporters don’t appear to care how much – or how little – experience they have.

    Charlie Hunt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Candidate experience matters in elections, but not the way you think – https://theconversation.com/candidate-experience-matters-in-elections-but-not-the-way-you-think-240191

    MIL OSI – Global Reports

  • MIL-OSI Global: Color complexity in social media posts leads to more engagement, new research shows

    Source: The Conversation – USA – By Vamsi Kanuri, Associate Professor of Marketing, University of Notre Dame

    If you work in digital marketing, you don’t need to be told a picture’s worth a thousand words. More than half of content marketers say images are crucial for achieving their social media goals, and a staggering 70% of users prefer image-based posts over text, surveys have found.

    But which types of visuals work best? While anecdotal evidence abounds, systematic research on this topic is scarce.

    As a professor of business who knows the issues social media managers face while picking images for their posts – and who collected thousands of Facebook posts from two organizations in different industries – I saw an opportunity.

    Pigments and pixels

    Together with my colleagues Christian Hughes and Brady Hodges, I looked at what researchers call “color complexity.”

    Color complexity is similar to colorfulness, but it’s not quite the same: It’s measured as color variation across pixels in an image, and our brains process it subliminally. The more the brain has to decipher color variations across neighboring pixels, the harder it has to work.
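    The article doesn’t give the researchers’ exact formula, but the underlying idea – color variation across neighboring pixels – can be sketched in a few lines of Python. The following is a simplified illustrative proxy, not the study’s actual measure:

    ```python
    import numpy as np

    def color_complexity(image: np.ndarray) -> float:
        """Rough proxy for color complexity: the average absolute
        color difference between horizontally and vertically adjacent
        pixels, averaged over all color channels.

        image: an H x W x 3 array of RGB values in [0, 255].
        """
        img = image.astype(float)
        # Differences between horizontally adjacent pixels.
        dx = np.abs(np.diff(img, axis=1)).mean()
        # Differences between vertically adjacent pixels.
        dy = np.abs(np.diff(img, axis=0)).mean()
        return (dx + dy) / 2.0

    # A flat, single-color image has zero variation between neighbors...
    flat = np.full((32, 32, 3), 128, dtype=np.uint8)
    # ...while random noise forces the eye (and the metric) to work hardest.
    noise = np.random.default_rng(0).integers(0, 256, size=(32, 32, 3))
    ```

    On this proxy, a real photograph would fall somewhere between the flat image (score 0) and pure noise – consistent with the intuition that richer pixel-to-pixel variation demands more subliminal processing.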

    Fortunately, advanced computer vision technology makes it easier than ever to measure color complexity, and biometric eye-tracking makes it possible to see what images grab people’s attention in real time.

    We conducted four studies, looking at both real-world Facebook posts from two firms and experimental data using biometric eye-tracking. On the whole, we found that more complex images in social media posts tended to capture greater attention.

    However, there were some caveats.

    For instance, posts made later in the day and those with images that took up more screen space tended to benefit more from color complexity. This suggests that the timing and visual prominence of posts play a role in maximizing engagement.

    In addition, when images were paired with negative, feel-bad text, color complexity made less of a difference.

    We also found that pairing images with complex texts can actually strengthen the link between color complexity and user engagement. This surprising finding suggests that more intricate language might encourage people to pay more attention to the images.

    The complexities of color

    The importance of color in marketing, and its influence on everything from brand perception to purchase intentions, has long been well documented. Much less is known, however, about the role of color complexity in social media engagement. Our research is beginning to fill that gap.

    Overall, our findings underscore the importance of strategic image design in social media marketing. They suggest that a nuanced approach to image design, incorporating high color complexity where appropriate, can significantly enhance user engagement.

    For marketers and content creators, the implications are clear: Investing in the careful curation of social media images, especially those with high color complexity, can lead to better user engagement. Just be mindful of the timing and context, too.

    Vamsi Kanuri works for the University of Notre Dame.

    ref. Color complexity in social media posts leads to more engagement, new research shows – https://theconversation.com/color-complexity-in-social-media-posts-leads-to-more-engagement-new-research-shows-240980


  • MIL-OSI Global: How dogs were implicated during the Salem witch trials

    Source: The Conversation – USA – By Bridget Marshall, Professor of English, UMass Lowell

    An illustration of a court scene during the late-17th century witch trials in Salem, Mass. Christine_Kohler/iStock via Getty Images Plus

    I teach a course on New England witchcraft trials, and students always arrive with varying degrees of knowledge of what happened in Salem, Massachusetts, in 1692.

    Nineteen people accused of witchcraft were executed by hanging, another was pressed to death and at least 150 were imprisoned in conditions that caused the death of at least five more innocents.

    Each semester, a few students ask me about stories they have heard about dogs.

    In 17th century Salem, dogs were part of everyday life: People kept dogs to protect themselves, their homes and their livestock, to help with hunting, and to provide companionship.

    However, a variety of folklore traditions also associated dogs with the devil – beliefs that long predated what happened in Salem. Perhaps the most famous example of such belief is the case of a poodle named Boy who belonged to Prince Rupert, an English-German cavalry commander on the Royalist side during the English Civil War. Between 1643 and 1644, stories spread across Europe that Boy the poodle had supernatural powers, including shape-shifting and prophecy, that he used to aid his master on the battlefield.

    There is no mention in the official records of Salem’s trials of any dogs being tried or killed for witchcraft. However, dogs appear several times in the testimony, typically because an accused witch was believed to have had a dog as a “familiar” who would do her bidding, or because the devil appeared in the form of a dog.

    Numerous testimonies in the Salem trial records claim that dogs were in league with the devil, adding to the paranoia of this community that was spinning out of control.

    Associating the devil with the dog

    On May 16, 1692, a 45-year-old Amesbury, Massachusetts, man named John Kimball testified against Susanna Martin, a 71-year-old widow, saying, among other things, that she had caused a “black puppy” to appear before him when he was alone in the woods. Kimball testified that he was terrified by the dog, which he thought would tear out his throat. The dog disappeared when he began to pray.

    This, among other testimony, would contribute to Martin’s conviction for witchcraft in June 1692; she was hanged on July 19, 1692.

    In several instances recorded by the courts, accused witches confessed that the devil had appeared to them in the form of a dog. In September 1692, 19-year-old Mercy Wardwell testified that she had been conversing with the devil, and that he had appeared to her in the shape of a dog. Her confession caused her to be jailed, although she was later released when the hysteria died down.

    During the same proceedings that September, 14-year-old William Barker Jr. testified that the “shape of a black dog” appeared to him and provoked anxiety; soon after this, the devil appeared. It’s hard to know if he was suggesting that the dog was the devil himself or his companion.

    Barker confessed that he had “signed the devil’s book,” meaning that he had made a covenant with the devil and was a witch. Barker was jailed, though he would later be acquitted.

    Tituba, a woman of color enslaved in the Rev. Samuel Parris’ household, also testified about a dog. When she was examined by magistrates on March 1, 1692, Tituba recounted how the devil had appeared to her at least four times, “like a great dog” and as “a black dog.” She also said she saw cats, hogs and birds, an entire menagerie of animals working for the devil.

    An accused witch was believed to have a dog or another animal as a ‘familiar’ who would do her bidding.
    © The Trustees of the British Museum, CC BY-NC-SA

    Kimball’s, Wardwell’s, Barker’s and Tituba’s testimonies certainly may have contributed to the ongoing alarm that the residents of Salem were being led astray by a devil who might appear to them in the shape of a dog.

    Sketchy evidence

    Some popular accounts of the trials also suggest that at least two dogs were killed during the trials, but there is no evidence supporting this in the official legal testimony of the time. There is certainly some local legend that supports the claim, and many accounts of Salem have included these two dog deaths as a part of the story.

    According to local historical researcher Marilynne K. Roach’s 2002 book, “The Salem Witch Trials: A Day-by-day Chronicle of a Community Under Siege,” some of the afflicted girls claimed that a man named John Bradstreet had bewitched a dog. Although the dog was itself a victim of the alleged bewitchment, it was killed. Roach’s history also notes that another dog was shot to death when a girl claimed that the dog’s specter had afflicted her.

    Witchcraft belief at the time held that witches could send their “spectres,” or spirits, out to do their bidding.

    While these are compelling stories, neither of these events can be verified in any existing official trial documents. The source that Roach cites for the Bradstreet case is Robert Calef’s book “More Wonders of the Invisible World,” which was published in 1700. Calef, who was a Boston merchant, objected to how the trials were conducted. However, he was not present at the trials, and it is not clear what his source was for the dog stories. Such stories – and Calef’s uncited retelling of them – do not have the same authority as the legal documents in the case.

    The earliest account of a dog being shot for being a witch appears in a commentary on the Salem trials, “Cases of Conscience Concerning Evil Spirits,” published in 1693, in which the clergyman Increase Mather claims that “I am told by credible persons” that a dog was shot for bewitching a person.

    But significantly, Mather did not name the human victim or the person who told him the story. Surprisingly, Mather actually defended the dog, saying that the fact that they had successfully killed it meant that “this dog was no Devil.”

    Nearly every history of Salem recounts how when Samuel Parris’ daughters were having terrible fits that led people to believe they were bewitched, Tituba, the enslaved woman who lived in the household, baked a “witch cake” using urine from the afflicted girls and fed it to the family’s dog.

    Somehow, this was supposed to cause the dog to reveal the identity of the witch. Indeed, Reverend Parris condemned the ritual, which itself seemed to be its own kind of witchcraft.

    Fear and distrust

    All around, Salem’s witch trials seem to have been bad for dogs. Although there is no official legal evidence that dogs were killed for being witches, it’s clear that there were strong associations between dogs and the devil, and that dogs were sometimes treated poorly because of superstition.

    The Salem trials are a horrifying example of what happens when people use terrible logic and leap to indefensible conclusions with shoddy evidence. In an environment of fear and distrust, even man’s best friend could be suspected of dealings with the devil.

    Bridget Marshall does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How dogs were implicated during the Salem witch trials – https://theconversation.com/how-dogs-were-implicated-during-the-salem-witch-trials-239802


  • MIL-OSI Global: Farms to fame: How China’s rural influencers are redefining country life

    Source: The Conversation – USA – By Mitchell Gallagher, Ph.D Candidate in Political Science, Wayne State University

    In the quiet backwaters of Yunnan, Dong Meihua – though her followers know her by the public alias Dianxi Xiaoge – has done something remarkable: She’s taken the pastoral simplicity of rural China and made it irresistible to millions. In her hands, a village kitchen becomes a stage, and the rhythms of farm life become a story as compelling as any novel. She is one of many rural influencers returning to their roots.

    In a digital revolution turning established narratives on their head, China’s countryside is emerging as an unlikely epicenter of viral content. Xiaoge is one of thousands of influencers redefining through social media how the countryside is perceived.

    Upending preconceptions of rural China as a hinterland of poverty and stagnation, this new breed of social media mavens is serving up a feast of bucolic bliss to millions of urbanites. It is a narrative shift encouraged by authorities; the Chinese government has given its blessing to influencers promoting picturesque rural images. Doing so helps downplay urban-rural chasms and stoke national pride. It also fits nicely with Beijing’s rural revitalization strategy.

    Hardship to revival

    To fully appreciate any phenomenon, it’s necessary to first consider the historical context. For decades, China’s countryside was synonymous with hardship and backwardness. The Great Leap Forward of the late 1950s and early 1960s – Communist China’s revered founder Mao Zedong’s disastrous attempt to industrialize a largely agrarian country – devastated rural communities and led to widespread famine that saw tens of millions die.

    The subsequent Cultural Revolution, in which Mao strengthened his grip on power through a broad purge of the nation’s intelligentsia, further disrupted customary rural life as educated youth were sent to the countryside for “reeducation.” These traumatic events inflicted deep scars on the rural psyche and economy.

    Meanwhile, the “hukou” system, which since the late 1950s has tied social benefits to a person’s birthplace and divided citizens into “agricultural” and “nonagricultural” residency status, has created a stark divide between urban and rural citizens.

    The reform era of Mao’s successor, Deng Xiaoping, beginning in 1978, brought new challenges. As China’s cities boomed, the countryside lagged behind.

    Millions of rural Chinese have migrated to cities for better opportunities, abandoning aging populations and hollowed-out communities. In 1980, 19% of China’s population lived in urban areas. By 2023, that figure had risen to 66%.

    Government policy has since shifted extensively toward rural development. The abolition of agricultural taxes in 2006 marked a major milestone, demonstrating a renewed commitment to rural prosperity. Most recently, President Xi Jinping’s “rural revitalization” strategy has put countryside development at the forefront of national policy. The launch of the Internet Plus Agriculture initiative and investment in rural e-commerce platforms such as Taobao Villages allow isolated farming communities to connect to urban markets.

    Notwithstanding these efforts, China’s urban-rural income gap remains substantial, with the average annual per capita disposable income of rural households standing at 21,691 yuan (about US$3,100), approximately 40% of the amount for urban households.

    Enter the ‘new farmer’

    Digital-savvy farmers and countryside dwellers have used nostalgia and authenticity to win over Chinese social media. Stars such as Li Ziqi and Dianxi Xiaoge have racked up huge numbers of followers as they paint rural China as both an idyllic escape and a thriving cultural hub.

    The Chinese term for this social media phenomenon is “new farmer.” This encapsulates the rise of rural celebrities who use platforms such as Douyin and Weibo to document and commercialize their way of life. Take Sister Yu: With over 23 million followers, she showcases the rustic charm of northeast China as she pickles vegetables and cooks hearty meals. Or Peng Chuanming: a farmer in Fujian whose videos on crafting traditional teas and restoring his home have captivated millions.

    Since 2016, these platforms have turned rural life into digital gold. What began as simple documentation has evolved into a phenomenon commanding enormous audiences, fueled not just by nostalgia but also economic necessity. China’s post-COVID-19 economic downturn, marked by soaring youth unemployment and diminishing urban opportunities, has driven some to seek livelihoods in the countryside.

    In China’s megacities, where the air is thick with pollution and opportunity, there’s clearly a hunger for something real – something that doesn’t come shrink-wrapped or with a QR code. And rural influencers serve slices of a life many thought lost to China’s breakneck development.

    Compared with their urban counterparts, rural influencers carve out a unique niche in China’s vast social media landscape. Although fashion bloggers, gaming streamers and lifestyle gurus dominate platforms such as Weibo and Douyin, the Chinese TikTok, rural content creators tap into a different cultural romanticism and a yearning for connection to nature. In addition, their content capitalizes on the rising popularity of short-video platforms such as Kuaishou and social commerce apps such as Pinduoduo, augmenting their reach across a wide demographic, from nostalgic retirees to eco-conscious millennials.

    But this is not simply digital escapism for the masses. Tourism is booming in once-forgotten villages. Traditional crafts are finding new markets. In 2020 alone, Taobao Villages reported a staggering 1.2 trillion yuan (around $169.36 billion) in sales.

    The Chinese government, never one to miss a PR opportunity, has spotted potential. Rural revitalization is now the buzzword among government officials. It’s a win-win: Villagers net economic opportunities, and the state polishes its reputation as a champion of traditional values. Government officials have leveraged platforms such as X to showcase China’s rural revitalization efforts to international audiences.

    Authenticity or illusion?

    As with all algorithms, there’s a catch to the new farmer movement. The more popular rural influencers become, the more pressure they face to perform “authenticity.” Or put another way: The more real it looks, the less real it might actually be.

    It raises another question: Who truly benefits? Are we witnessing rural empowerment or a commodification of rural life for urban consumption? With corporate sponsors and government initiatives piling in, the line between genuine representation and curated fantasy blurs.

    Local governments, recognizing the economic potential, have begun offering subsidies to rural content creators, causing skepticism about whether this content is truly grassroots or part of a bigger, state-led campaign to sanitize the countryside’s image.

    Yet, for all the conceivable pitfalls, the new farmer trend is an opportunity to challenge the urban-centric narrative that has dominated China’s development story for decades and rethink whether progress always means high-rises and highways, or if there’s value in preserving ways of life that have sustained communities for centuries.

    More importantly, it’s narrowing the cultural disconnect that has long separated China’s rural and urban populations. In a country where your hukou can determine your destiny, these viral videos foster understanding in ways that no government program ever could.

    Mitchell Gallagher does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Farms to fame: How China’s rural influencers are redefining country life – https://theconversation.com/farms-to-fame-how-chinas-rural-influencers-are-redefining-country-life-239540


  • MIL-OSI Global: Religious hate crimes in England and Wales are at a record high – but many still go unreported

    Source: The Conversation – UK – By Peter Hopkins, Professor of Social Geography, Newcastle University

    Shutterstock

    Religious hate crimes in England and Wales are at record levels. New Home Office statistics reveal that although hate crime overall saw an annual decrease of 5% in the year to March 2024, there was a 25% increase in religious hate crimes.

    Hate crimes against Jewish people more than doubled from the previous year, making up 33% of religion-based hate crime in the new figures. Those against Muslims rose by 13%, making up 38% of the total.

    There was a sharp increase in reported incidents against both Jewish and Muslim people after the Israel-Hamas conflict began in October 2023. While the total number of offences has since declined, it is still higher than before the conflict began.

    These figures reflect police-recorded hate crime, but other organisations also track these incidents. The organisation Tell Mama, which tracks anti-Muslim hate, recorded a 335% increase in cases in the months after October 7 2023 compared to the year before. And the Community Security Trust tracked a 147% rise in anti-Jewish hate in 2023 compared to 2022. Of these incidents, 66% were on or after October 7.

    The October 7 attacks are an example of a trigger event that usually precedes a spike in hate crime. These events can “galvanise tensions and sentiments against the suspected perpetrators and groups associated with them”.

    Trigger events can be one-off events or last only a short period of time, but the continuing high levels of hate crime that the UK has seen over the past year are still likely due to the ongoing situation in the Middle East.

    These trends have been rising worldwide, and not only since the latest conflict. A UN report in 2021 found that Islamophobia had reached “epidemic proportions”. Additionally, as my colleagues and I have found in our research, such racism is also experienced by a diverse range of ethnic groups and not only Muslims. A rise in antisemitism has been recorded around the world too.

    Unreported hate

    Not only are the latest statistics in the UK alarming, they are only the tip of the iceberg. As my work on the inquiry into Islamophobia in Scotland found, many incidents go unreported.

    We found that many did not report incidents due to concerns about institutional racism in the police and a lack of confidence in policing and in the criminal justice system. Added to this were worries about not having enough evidence, the incident not being “serious enough”, and fear of reprisal. Some even felt that it happened so often that there was “no point” in reporting it.

    Anti-Jewish hatred has risen in the UK since October 7 2023.
    Shutterstock

    The long-term impacts of hate crime are deeply concerning. Victims who experience constant discrimination are likely to experience poor health outcomes and premature ageing.

    The rising numbers also promote a culture of fear that can discourage members of ethnic or religious minority groups from participating fully in society. My colleagues and I have found in our research that Islamophobia and prejudice have stopped some Muslims from participating in politics and going out to socialise.

    Encouragingly, however, others chose to become more active in their communities in order to challenge stereotypes about Muslims.

    Making prejudice mainstream

    In addition to the trigger event of the Israel-Hamas war, there are a number of factors that contribute to rising hate crime, particularly against Muslims.

    First is the prevalence of organisations and individuals, including media outlets, online influencers, far-right think-tanks and political figures who promote anti-Muslim messaging and hatred.

    The rise of far-right politics around the world plays a role. The election of Donald Trump, as well as recent electoral gains by Marine Le Pen in France, the Freedom Party in Austria and Reform UK, shows how such politics are seeping into the mainstream.

    But even supposedly centrist politicians spread narratives that contribute to Islamophobia and racism. For example, former prime minister David Cameron decried the failure of multiculturalism and this message was repeated by Suella Braverman when she was home secretary.

    This perpetuates the idea that it is not possible for different ethnic and religious groups to live in harmony. I would argue this provides an ideal platform for the promotion of Islamophobia.

    Mainstream media outlets and social media also shape the narratives that contribute to a culture of fear around Muslims. High profile acts of religious hatred, such as the atrocities committed by Anders Breivik in Oslo in 2011 or by Brenton Tarrant in Christchurch in 2019, tend to be put down to a “lone wolf” or to be regarded as “fringe incidents”, rather than part of a wider problem to be addressed. Both Breivik and Tarrant promoted white supremacy and were explicitly anti-Muslim.

    The spread of inaccurate information on social media has stirred up Islamophobia, antisemitism and racism, and led to violence against migrants. This was seen in the far-right riots in summer 2024 following the fatal stabbing of three young girls in Southport, near Liverpool.

    According to a report by the Center for Countering Digital Hate, a false name and disinformation suggesting the attacker was Muslim reached around 1.7 billion people across several platforms.

    The long history of Islamophobia in Britain can be traced back to the response to the 9/11 terror attacks and the “war on terror”. The UK’s counter-terrorism programme Prevent has made life intolerable for Muslims by promoting the idea that all Muslims are potential terrorists and a threat to security.

    This approach persists internationally, despite the existence of several alternatives. It urgently needs to be replaced, along with the thinking that supports it.

    The result of all this is that Islamophobia has flourished in the UK without being called out by those in power. This must be challenged if we want to see a reduction in racially and religiously motivated hate crime.

    Peter Hopkins receives funding from the Leverhulme Trust.

    ref. Religious hate crimes in England and Wales are at a record high – but many still go unreported – https://theconversation.com/religious-hate-crimes-in-england-and-wales-are-at-a-record-high-but-many-still-go-unreported-241071

    MIL OSI – Global Reports

  • MIL-OSI Global: How profits from big pharma’s use of genetic information could revolutionise nature conservation

    Source: The Conversation – UK – By Eleanor Jane Milner-Gulland, Tasso Leventis Professor of Biodiversity, University of Oxford

    The blood of rare horseshoe crabs is sometimes used in the development of vaccines. Sinhyu Photographer/Shutterstock

    The blue blood of threatened horseshoe crabs contains a chemical essential for testing the safety of vaccines. So these ancient creatures are highly sought after by pharmaceutical companies worldwide, contributing to declines in their populations.

    Species are disappearing at alarming rates, and the global biodiversity financing gap stands at US$600 billion to US$800 billion (£460 billion to £610 billion) annually. Yet the genetic information of rare plants and animals is a commercially valuable resource.

    Advances in technology now allow the rapid sequencing and sharing of genetic data, bringing huge benefits (and profits) for biotechnology and medicine. However, it also opens the door to “biopiracy”: the unethical or unlawful appropriation of biological resources, typically from countries or Indigenous communities in developing countries.

    Even if genetic information is obtained and used appropriately and within the law, important ethical, legal and financial questions still arise: who owns the genetic data derived from nature, and how can we ensure fair sharing of the benefits derived from their use?

    A key debate at Cop16, the upcoming UN biodiversity conference, will be how best to channel funding into protecting valuable biological resources. If done properly, people can benefit from the genetic information that nature contains, while ensuring that those conserving these resources, particularly Indigenous people, are properly compensated financially for their efforts.

    Our recent paper argues that rules of fair allocation, which have been around since the time of Aristotle, offer a potential way forward.

    Genetic information extracted from living organisms can now be easily digitised and shared across borders. This digitised data, often referred to as digital sequence information (DSI), plays a pivotal role in advancing research in fields such as medicine, agriculture and environmental science.

    For example, the genome of the COVID-19 virus was digitally sequenced and shared globally, enabling researchers worldwide to use that DSI to develop vaccines quickly.

    Yet, this leads to ethical and legal challenges. The genetic codes of plants and animals from all over the world are stored in international databases, often without proper acknowledgement or compensation to the countries or communities where these sequences originated.

    Countries with rich biodiversity, particularly in developing countries, have raised concerns that their genetic resources are being used – and in some cases monetised and commercialised – without approval or fair compensation. Indigenous peoples and local communities have similar concerns.

    So, who owns genetic data? It depends.

    The ownership of genetic data derived from plants and animals has become a grey area. In theory, countries have sovereignty over their biodiversity, as stipulated in an international agreement adopted in 2010 called the Nagoya protocol. This mandates that countries sharing their biological resources should be compensated through access and benefit-sharing agreements.

    Genetic codes of rare plants aren’t currently owned by their country of origin.
    Polonio Video/Shutterstock

    However, the concept of DSI has complicated these agreements. When genetic data is transformed into a digital format and stored in databases, it is not always clear whether the original country still holds any rights over that data.

    Should the digital sequence information of a rare Amazonian plant, for example, belong to the country where it was found, or is it now part of a global commons available to any researcher or commercial entity? Currently, there is no universal agreement on DSI, and with companies and research institutions using genetic data freely, this opens the door to the next wave of biopiracy.

    Biopiracy was a problem long before digital data entered the picture. For decades, pharmaceutical and agricultural companies have sourced plant and animal materials from the Amazon rainforest or African savannas. They patented products based on those materials and profited without compensating the source countries or the Indigenous peoples and local communities who may have used these species for generations.

    Now this issue extends beyond physical specimens. The real treasure lies in the genetic information itself. When genetic data is digitised and shared globally, it becomes challenging to trace its origins and hold companies accountable for unauthorised use.

    In the absence of benefit-sharing mechanisms (formal ways to share the monetary and non-monetary benefits of using biodiversity with those who bear the costs of conserving it), companies can patent discoveries derived from DSI, with profits flowing to corporations and research institutions in developed countries.

    Meanwhile, low-to-middle-income nations that are home to these resources and the communities that protect them do not benefit. We argue this is unjust and contributes towards the continued undervaluation and therefore degradation of biodiversity.

    A new genetic code

    At Cop16, a potential solution is up for negotiation: a global system governing the exchange of DSI, including a multilateral fund into which companies that benefit from DSI would contribute.

    This fund would be used to pay for action to conserve biodiversity, with a specific priority given to funding for Indigenous peoples and local communities, women and youth. As well as providing compensation for stewardship of the biodiverse ecosystems that contain these genetic resources, funding can be used for training and capacity-building (such as genetic research), which could start to compensate for longstanding inequalities of opportunity that are built into today’s research and commercialisation systems.

    Many questions remain as to how this fund would work; they will be negotiated at Cop16. One particular challenge is determining how to implement distribution mechanisms that are fair and enforceable, and that do not overburden countries or companies.

    Proposed solutions are grounded in rules of fair allocation. Pharmaceutical companies using DSI could contribute in proportion to their profits or revenues. Beneficiaries could receive payment or other benefits according to criteria such as the levels of biodiversity conserved, threats to biodiversity and financial need.
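    As a purely illustrative sketch of these fair-allocation rules – every company name, revenue figure, levy rate and scoring weight below is invented, since the real mechanism is still to be negotiated at Cop16 – a pro-rata levy on DSI-derived revenues, redistributed according to weighted conservation criteria, might look like this:

    ```python
    # Illustrative only: all names and numbers are hypothetical, not part of
    # any negotiated Cop16 mechanism.

    LEVY_RATE = 0.01  # e.g. 1% of revenue attributable to DSI use

    # Hypothetical company revenues from DSI-derived products (US$ millions)
    revenues = {"PharmaA": 500.0, "PharmaB": 200.0, "AgriC": 100.0}

    # Each company contributes in proportion to its revenue
    contributions = {name: rev * LEVY_RATE for name, rev in revenues.items()}
    fund_total = sum(contributions.values())

    # Hypothetical beneficiary scores: biodiversity conserved, threat level
    # and financial need, each normalised to 0-1 and equally weighted
    scores = {
        "CountryX": (0.9 + 0.8 + 0.7) / 3,
        "CountryY": (0.6 + 0.9 + 0.9) / 3,
        "CommunityZ": (0.8 + 0.7 + 1.0) / 3,
    }
    score_total = sum(scores.values())

    # Each beneficiary receives a share of the fund proportional to its score
    payouts = {name: fund_total * s / score_total for name, s in scores.items()}

    print(f"Fund total: ${fund_total:.2f}m")
    for name, p in payouts.items():
        print(f"{name}: ${p:.2f}m")
    ```

    The point of the sketch is only that both halves of the rule are proportional: contributions scale with benefit derived, and payouts scale with conservation criteria, so the fund balances by construction.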

    This multilateral fund could be a major contributor to conservation finance, and one which is directed at those who actually conserve biodiversity on the ground. It has been described as a potentially “historic breakthrough” by the executive secretary of the convention on biological diversity.

    But there are still major hurdles to overcome. Big pharma companies are resistant due to the potential financial implications. There has been limited engagement from the conservation community, perhaps because fair sharing of the benefits from genetic materials appears much less immediately pressing than the conservation of wild species and their habitats.

    If successful, this could represent a major step towards generating the finance that is desperately needed to support nature conservation. It would set a precedent for similar mechanisms to ensure that those benefiting from using nature pay for the cost of conserving or restoring it – just like bycatch taxes in commercial fisheries or pollution taxes on large agribusinesses.

    We believe that this proposal could be revolutionary if it succeeds in channelling large amounts of biodiversity finance to where it is most needed in a fair and equitable way. Genetic data should not only be seen as a resource that generates new drugs and technologies, but as a shared asset of humanity, with the rights and sovereignty of nature’s stewards properly respected and valued.





    Eleanor Jane Milner-Gulland receives funding from UKRI, Research England Development Fund, Login5 Foundation, IKI, Defra, USFWS, Leverhulme Trust and the Leventis Foundation. She is a member of the UK government’s Defra Biodiversity Evidence Committee, chairs the Darwin Expert Committee, a member of IUCN-SSC, and the Nature Positive Initiative.

    Dale Squires was supported by an Oxford Martin School Visiting Fellowship.

    Hollie Booth receives funding from the UK Darwin Initiative. As well as University of Oxford she is affiliated with The Biodiversity Consultancy and Kebersamaan Untuk Lautan.

    ref. How profits from big pharma’s use of genetic information could revolutionise nature conservation – https://theconversation.com/how-profits-from-big-pharmas-use-of-genetic-information-could-revolutionise-nature-conservation-240565


  • MIL-OSI Global: We tend to keep away from midges and – even when in swarms – they tend to keep away from each other

    Source: The Conversation – UK – By Alex Dittrich, Senior Lecturer in Zoology, Nottingham Trent University

    Shutterstock

    We’ve all found ourselves trying to avoid the swarms of midges that are so common in late summer. But as you try to avoid them, what you may not know is that they are equally keen to avoid each other.

    It’s strange behaviour for creatures that typically move around together. But physicist Andrew Reynolds of the research centre Rothamsted Research recently investigated swarms of the non-biting midge Chironomus riparius, and found something very strange happening.

    While they may move around in swarms, they do so in a way that ensures they keep their distance from each other. And it might be why, paradoxically, they are so successful at breeding.

    Swarming, where animals form large, dense groups, is common across the animal kingdom. Many of us are familiar with murmurations of starlings as they dance in the setting sun, for example. In water, animals form shoals, pods and schools. These groups may vary in their cohesiveness and the species they contain, but all are essentially different types of swarms.

    It helps animals evade predators and gives them safety in numbers: large aggregations make it difficult for predators to single out a target. This is known as the selfish herd effect, where animals seek positions towards the centre of a herd, shoal or flock, where there’s less risk of being attacked.

    Animals sometimes behave differently as part of a bigger system, where each animal interacts with its nearest neighbour. Fish, for example, align themselves and match speed with their nearest neighbour to shoal together and avoid collisions. Birds operate in a similar way.

    Social insects such as ants often swarm in the summer, in mate-finding nuptial flights. Locusts defoliate large patches of land before moving on. Some researchers suggest that this social aggregation behaviour is linked to elevated serotonin the locusts get from close contact.

    However, in the midge C. riparius we see something different.

    Reynolds’ research showed that these midges maintain maximum distance from one another. In the lab-based models of these swarms that he studied, the midges are, in almost equal measure, attracted towards the centre of the swarm and repelled from each other.

    Birds in a flock move in the same direction, staying close to one another (positive correlation). But C. riparius midges position themselves apart: if one moves left, for example, others tend to move right (maximal anticorrelation).

    The swarms of C. riparius are predominantly for reproductive purposes, and they are made up of males. Midges maximise their potential to find a mate by gathering at the same time, in the same place. You could argue that’s how bars and pubs work for humans.

    When a female enters the swarm, however, and is pursued by a male, the swarm maintains cohesion. The other members of the swarm are still drawn towards her, but this force of attraction is weaker than the negative “impulse” for the males to stay away from each other.

    Staying evenly spaced means there is less competition between males, which means that, as a group, they spend less energy and have greater overall mating success.

    The repellent effect also has other advantages. When midges are spaced apart in an organised and distributed way, the swarm can collectively respond to disruptions, such as changes in weather or predators, without losing its structure. Because each midge’s relative position to each other is defined by the maximal anticorrelation, a disturbance to one part of the swarm can quickly be compensated by the whole group.
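    As a toy numerical illustration – this is not Reynolds’ actual model, and the step sequences below are invented – the difference between the aligned movement of flocking birds and the opposed movement of these midges can be expressed as a correlation coefficient:

    ```python
    # Toy illustration of correlated vs anticorrelated movement, written from
    # scratch for this article; it is not Reynolds' published swarm model.

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Flocking birds: a neighbour copies its neighbour's horizontal step
    bird_a = [1.0, -0.5, 0.8, -1.2, 0.3]
    bird_b = [0.9, -0.4, 0.7, -1.1, 0.4]   # moves with its neighbour

    # Midges: if one drifts left, its neighbour tends to drift right
    midge_a = [1.0, -0.5, 0.8, -1.2, 0.3]
    midge_b = [-0.9, 0.4, -0.7, 1.1, -0.4]  # moves against its neighbour

    print(pearson(bird_a, bird_b))    # close to +1: positive correlation
    print(pearson(midge_a, midge_b))  # close to -1: maximal anticorrelation
    ```

    A correlation near +1 describes the bird-like aligned case, while a value near -1 captures the midges’ mutually opposed positioning described above.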

    We might learn a thing or two from the midge. In social situations, let’s take a step back, wait our turn, and give each other some space. Don’t interrupt your friend in conversation, don’t barge in at the self-service checkouts in the supermarket… and certainly don’t flirt with your friend’s partner.

    Alex Dittrich does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. We tend to keep away from midges and – even when in swarms – they tend to keep away from each other – https://theconversation.com/we-tend-to-keep-away-from-midges-and-even-when-in-swarms-they-tend-to-keep-away-from-each-other-241055


  • MIL-OSI Global: Salem’s Lot: a faithful but shallow adaptation of Stephen King’s classic vampire novel

    Source: The Conversation – UK – By Andrew Dix, Senior Lecturer in American Literature and Film, Loughborough University

    The vampire story dwells among the undead of literary and cinematic genres, ever available for reanimation. This year alone has seen the publication of more than 30 vampire novels in the US (from Rachel Harrison’s So Thirsty to K. M. Enright’s Mistress of Lies), alongside the release of several vampire movies, including Abigail (with Nosferatu, rebooting the silent German classic, due at Christmas).

    Now comes Salem’s Lot. Written and directed by Gary Dauberman, it’s the first feature-film adaptation of the 1975 novel in which Stephen King set himself the thought experiment of transposing Bram Stoker’s Dracula to contemporary New England. The book has been adapted twice before, in 1979 and 2004, but each time as a TV miniseries.

    Of these precursors, the more interesting is the first, directed by Tobe Hooper. Made five years after The Texas Chain Saw Massacre, it signified Hooper’s move towards the mainstream, while retaining some gory scenes and choppy editing reminiscent of his old grindhouse aesthetic.

    The new Salem’s Lot begins with a series of maps that trace how the master vampire, concealed in a chest, has reached Maine. The film’s own passage, stalled for years by the calculations of marketers and schedulers, has been equally arduous. It arrives now rather belatedly and without blockbuster flourish. While UK King fans can enjoy it on the big screen, it is consumable in most other locations only via the streaming service Max.

    The trailer for Salem’s Lot.

    Literary and film scholar Robert Stam offers a profusion of terms to describe the work undertaken by screen adaptations. They may, for example, “rewrite”, “transmute” or even “critique” their source-texts. Indicating a gentler kind of process, however, Stam also allows that an adaptation can offer an “incarnation” or “performance” of the material it is adapting. Performing Salem’s Lot in this sense, responding in audio-visual form to King’s prompts and refusing major reinventions, appears to be Dauberman’s goal.

    King is a successor not only to Stoker and other horror writers such as H. P. Lovecraft, but to the late-19th century “local colorists” in New England, who attentively documented the sights and sounds of their region. On the page, Salem’s Lot is visually abundant. The new adaptation attempts to be similarly conscientious.

    Dauberman takes care in matters of colour and lighting. A church’s doors, shut against the vampiric menace, glow a vivid red. Two boys walk through a wood silhouetted at sunset, their bodies ominously already lacking substance against a sky that is turning from pink to black. There are other visual pleasures, too, representing a shift away from Hooper’s version, where the shots are rougher-edged and decidedly non-pictorial.

    The cast of this Salem’s Lot is likeable and struggles gamely, in the face of regular jump scares, to solicit audience engagement. Unlike Hammer’s Dracula adaptations, say, in which the monster has all the charisma, this is something of a democratic vampire film and devolves interest to members of the opposing force.

    A pleasing modification is also made to the overbearing whiteness of King’s narrative world, with two of the pluckiest vampire hunters reimagined as African American.

    Beyond the scare

    But if this latest adaptation of Salem’s Lot is easy enough on the eye, intellectually it is shallow. This matters, because the best vampire fictions prompt us not merely to be terrified, but to start interpreting – they generate meanings as well as scares.

    What, precisely, is signified by their monstrous protagonists? As the Victorian literature scholar Nina Auerbach wrote in her still-valuable book Our Vampires, Ourselves (1995): “No fear is only personal: it must steep itself in its political and ideological ambience, without which our solitary terrors have no contagious resonance.”

    Writing his novel in 1975, as the progressive dreams of the 1960s faded, King found in the vampire an apt image of power and cruelty in America. In his own words, from the afterword to Salem’s Lot: “I saw a metaphor for everything that was wrong with the society around me, where the rich got richer and the poor got welfare … if they were lucky.” When vampires strike in the book, there is therefore the sense of a nation at risk, not merely a few families or a handful of individuals.

    The new adaptation, by contrast, represses rather than invites such interpretive effort on our part. It carries across the novel’s mid-1970s setting, but is interested more in accurate period detailing – the right model of car, the appropriate hairstyle – than in substantive historical exploration. It also doesn’t use the category of the vampire movie to say something insightful about our own time: the post-COVID moment, for example, or the era of Donald Trump (a figure with rich vampiric possibilities).

    Dauberman’s version of Salem’s Lot is certainly respectful of its source-text (unsurprising, perhaps, with King himself listed among its executive producers). And it functions perfectly well as a showcase for the varied skills of props designer, prosthetic artist and special effects engineer. But, as a work of cultural and social inquiry, this latest vampire story is disappointingly de-fanged.





    Andrew Dix does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Salem’s Lot: a faithful but shallow adaptation of Stephen King’s classic vampire novel – https://theconversation.com/salems-lot-a-faithful-but-shallow-adaptation-of-stephen-kings-classic-vampire-novel-241278


  • MIL-OSI Global: Music and dementia: researchers are still making discoveries about how songs can help sufferers

    Source: The Conversation – UK – By Rebecca Atkinson, Researcher in Music Therapy, Anglia Ruskin University

    Numerous studies have shown music therapy has many benefits for dementia patients. Unai Huizi Photography/ Shutterstock

    Music is woven into the fabric of our everyday lives. Whether it’s lifting our spirits, pushing us to run faster or soothing us to sleep, we can all recognise its power. So it’s no wonder it is increasingly being used in medical treatment.

    Music has already proved very useful in cancer treatment, in managing chronic pain and even in helping the brain recover after a stroke. Researchers have also been making great strides in using it to help patients with dementia.

    It reduces patients’ anxiety and depression, and improves wellbeing both for them and their carers by enhancing everyone’s ability to adapt and cope with adversity or stress.

    Music therapy in the form of playing, singing or listening to music can also have a positive effect on cognitive function – particularly for older adults either with dementia or memory issues.

    So why does music appear to have such a powerful effect for people with dementia?

    Music and the brain

    About a decade ago, researchers discovered that when people listened to music, multiple areas of the brain were involved in processing it. These included the limbic (which processes emotions and memory), cognitive (involved with perception, learning and reaction) and motor areas (responsible for voluntary movement). This challenged preconceptions that music was processed more narrowly in the brain – and helped explain why it has such a unique neurological impact.

    Not only that, research has shown that music might help regenerate the brain and its connections. Many causes of dementia centre around cell death in the brain, raising the possibility that music could help people with dementia by mending or strengthening damaged neural connections and cells.

    Many brain areas are activated when we listen to music.
    Toa55/ Shutterstock

    It’s not just any music that has a regenerative effect on the brain, though. Familiar and favourite music has been shown to have the biggest impact on the way we feel, and is closely linked with memory and emotions. This is because listening to our favourite songs releases feel-good hormones that give us a sense of pleasure. Curated music playlists of favourite music could be the key in helping us deal with the stress of everyday life.

    This is relevant to Alzheimer’s and other forms of dementia because researchers have discovered that parts of the brain linked with musical memories are less affected by these conditions than other areas of the brain. This explains why memories and experiences that are linked to favourite music are often preserved for people with such conditions.

    Listening to music can also help manage their experiences of distress, agitation and “sundowning” – where a person is more confused in the afternoon and evening.

    In a small study conducted by us and our colleagues at the Cambridge Institute for Music Therapy Research, we showed just how great an effect listening to music can have for people with dementia. We found that when people with dementia repeatedly listened to their favourite music, their heart rate and movements changed in direct response.

    This showed that people’s physical responses were affected by musical features like rhythm and arrangement. Their heart rate also changed when they sang along to music, or when they began reminiscing about old memories or stories while listening to a song or thinking about the music. These changes are important because they show how music affects movement, emotions and memory recall.

    Studies have also shown that during and after listening to music, people with dementia experienced less agitation, aggression and anxiety, and their general mood was improved. They even needed less medication when they had regular music sessions.




    Read more:
    Why researchers are turning to music as a possible treatment for stroke, brain injuries and even Parkinson’s


    Other researchers have even begun testing the effects of music training programmes to support cognition for people with dementia. Results have been promising so far – with adults in the study showing improved executive functioning (problem solving, emotion regulation and attention) compared to those who took part in just physical exercise.

    So, music is likely to continue to be a useful medical treatment for people with dementia. But based on what we know so far, it’s important that it comes from the patient’s own music collection – and that it is used alongside other management techniques, such as drugs that can slow the progression of dementia or help manage symptoms, to support self-care and wellbeing.

    Dr Rebecca Atkinson is affiliated with Chiltern Music Therapy, a non-profit organisation.

    Ming-Hung Hsu receives funding from the National Institute for Health and Care Research and Innovate UK.

    ref. Music and dementia: researchers are still making discoveries about how songs can help sufferers – https://theconversation.com/music-and-dementia-researchers-are-still-making-discoveries-about-how-songs-can-help-sufferers-239446


  • MIL-OSI Global: People displaced by hurricanes face anxiety and a long road to recovery, US census surveys show − smarter, targeted policies could help

    Source: The Conversation – USA – By Trevor Memmott, Assistant Professor of Policy and Public Affairs, UMass Boston

    Hurricane Helene flooded homes with water and mud in Marshall, N.C. Many people will be out of their homes for months or longer. AP Photo/Jeff Roberson

    The trauma of natural disasters doesn’t end when the storm or wildfire is gone, or even when communities are being put back together and homes have been rebuilt.

    For many people, being displaced by a disaster has long-term consequences that often aren’t obvious or considered in disaster aid decisions.

    We study public policy and disaster response. To get a better understanding of the ongoing challenges disaster victims face – and how officials can respond more effectively – we analyzed U.S. Census Bureau surveys that ask people nationwide about their disaster displacement experiences, as well as their stress and anxiety.

    The results show how recovery from disasters such as hurricanes, wildfires, tornadoes and flooding involves more than rebuilding, and how already vulnerable groups are at the greatest risk of harm.

    Millions are displaced every year

    The Census Bureau’s Household Pulse Survey has been continually collecting data on people’s social and economic experiences since 2020. Since late 2022, it has specifically asked respondents whether they had been displaced from their homes because of natural disasters.

    Nearly 1.4% of the U.S. adult population reported being displaced in the previous year, equating to more than 3 million Americans. The most common cause of those displacements was hurricanes, responsible for nearly one-third of the displacements.

    Some groups faced a higher chance of being displaced by a natural disaster than others.

    The likelihood of displacement was above average for people with incomes of less than $50,000 (1.9% of that population was displaced), disabled people (2.7%), African Americans (2.3%) and Latinos/Hispanics (1.8%), as well as for those who identified their sexual orientation as gay/lesbian, bisexual, something else, or said that they don’t know (2.2%).

    The problems of displacement go beyond immediate evacuation. People may have to stay in temporary shelters such as stadiums, churches or disaster relief areas. During this time, they are likely unable to work and earn income. Others with nowhere else to go may return to still-damaged homes after the storm passes.

    Many people who were displaced by a hurricane faced weeks without power or lacked access to enough food, clean water or other basic necessities. After being displaced, 64% of adults said they lacked electricity some or all of the time, 37% lacked enough food, 29% lacked drinkable water, and 25% indicated that they experienced unsanitary conditions some or all of the time.

    Going without enough clean water or electricity can expose people to diseases and other health risks, on top of the stress of dealing with the damage, displacement and uncertainty about the future.

    About 36% of those displaced were out of their homes for more than one month. Nearly 16% of them indicated that they never were able to return. Vulnerable groups, especially people of color and disabled people, were least likely to return home quickly.

    Impacts on health

    Being displaced also piles on stress and creates instability. People displaced by storms may bounce among family members’ houses, hotel rooms or even vehicles as they wait to return to a home that has been damaged. They may have lost jobs or be unable to find temporary housing nearby, creating feelings of uncertainty about the future.

    People who feel that their safety or security is threatened are more likely to experience mental stress and, potentially, post-traumatic stress disorder. The effects can accumulate over time and have long-term health consequences. Chronic stress can contribute to hypertension and heart disease and make rebuilding lives even harder as people struggle with more than just the damage around them.

    The Household Pulse Survey also collects information on the symptoms of anxiety and depression that individuals experience.

    Among those who have been displaced by a hurricane, 38% indicated experiencing generalized anxiety, a much higher percentage than the 23% of the population who did not experience displacement.

    Similarly, 33% of those who were displaced experienced symptoms of major depressive disorder compared with 18% of the population who did not face displacement.

    Better policies for long-term recovery

    The survey results highlight the need to restore water and power to homes quickly after disasters. The results also point to prioritizing communities that are least able to afford being displaced.

    Studies have shown that low-income communities often wait longest for power to be restored after hurricanes. The survey shows that these communities and other disadvantaged groups also face higher levels of displacement after disasters.

    Beyond the immediate responses to a disaster, the survey suggests that federal, state and local policymakers will have to consider long-term assistance for both housing recovery and for health care.

    A young man stares at what is left of his family’s homes after Hurricane Helene flooded parts of Hendersonville, N.C., in September 2024.
    AP Photo/Brittany Peterson

    Currently, the Federal Emergency Management Agency primarily focuses on providing short-term disaster relief. The large majority of its disaster funding goes toward evacuation, temporary shelter for people displaced, emergency supplies, insurance and rebuilding community infrastructure. While other federal programs provide rebuilding assistance for individuals, they don’t sufficiently address the long-term challenges, in our view.

    Some ways government could help include providing targeted cash transfers to ensure vulnerable households can rebuild, investing in affordable and climate-resilient housing that can limit losses in future disasters, and funding long-term mental health services for disaster survivors at free or reduced cost.

    As the climate warms, extreme storms are becoming more common in every region of the country. That’s raising the risks and the need for policymakers to prepare communities to limit harm from disasters and recover afterward. We believe rebuilding lives will require support long term, both for building more resilient homes and infrastructure and for recovering from the trauma.

    Christian Weller is affiliated with the Center for American Progress as a Senior Fellow.

    Trevor Memmott does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. People displaced by hurricanes face anxiety and a long road to recovery, US census surveys show − smarter, targeted policies could help – https://theconversation.com/people-displaced-by-hurricanes-face-anxiety-and-a-long-road-to-recovery-us-census-surveys-show-smarter-targeted-policies-could-help-241189

    MIL OSI – Global Reports

  • MIL-OSI Global: Kenya’s presidents have a long history of falling out with their deputies – Rigathi Gachagua’s impeachment would be no surprise

    Source: The Conversation – Africa – By Gabrielle Lynch, Professor of Comparative Politics, University of Warwick

    The process of removing Kenya’s deputy president Rigathi Gachagua is part of a long history, dating back to independence, of fallouts between the president and his deputy. The difference this time around is the process.

    Historically, presidents have fired their deputies. But the adoption of a new constitution in 2010 introduced an impeachment process – for both the president and the deputy – that is run by the legislature. This is the first time it’s been used.

    On 8 October 2024, members of Kenya’s national assembly voted to impeach Gachagua on grounds that included corruption, insubordination and ethnically divisive politics. The case now moves to the senate where members will hear the charges – and Gachagua’s defence – and vote.

    If at least two-thirds of the senate accepts the charges, and Gachagua’s legal challenges fail, he will make history as Kenya’s first deputy president to be impeached.

    So far, President William Ruto has stayed silent on the matter, but the process would not be proceeding without his blessing.

    Amid the novelty of the impeachment process, it’s easy to forget that it is the norm for Kenyan presidents to fall out with their deputies. As a political scientist interested in Kenya’s ethnic politics and democratisation, I argue that this is because of how deputies are selected in the first place.

    Deputies are initially selected largely on pragmatic grounds as people who bring something useful to a political alliance. This could be resources, a support base or a reputation for being a good technocrat or administrator.

    They’re not usually people with whom the president has a strong and continuous personal relationship or someone with whom they share a clear political ideology. Neither are they usually someone who has made their way up through a political party.

    This has brought about a long history of tensions and fallout between Kenya’s presidents and their deputies.

    History of fallouts

    Independent Kenya’s first vice president, Oginga Odinga, saw his ministerial portfolio gradually reduced by President Jomo Kenyatta. Kenyatta then replaced Odinga as vice president of the ruling Kenya African National Union (Kanu) in 1966, further undermining his powers. Soon after, Odinga joined the opposition Kenya People’s Union.

    His successor, Joseph Murumbi, resigned within months. The official reason given was ill health, but it is widely believed that Murumbi was troubled by corruption and authoritarianism within the Kenyatta regime.

    Kenya’s second president, Daniel arap Moi, appointed Mwai Kibaki as his first vice president. Kibaki was dropped after a decade and went on to form an opposition party as soon as Kenya shifted to multi-party politics in 1992.

    Moi’s second vice president, Josephat Karanja, resigned after a year to avoid a vote of no confidence for allegedly plotting to overthrow the government.

    Moi’s third deputy, George Saitoti, was sidelined to pave the way for Uhuru Kenyatta’s nomination as the party flagbearer in 2002. Moi’s final deputy, Musalia Mudavadi, fell with the rest of the Kanu government in the 2002 elections.

    As Kenya’s third president, Kibaki similarly oversaw a regular change of guard. His first deputy, Michael Wamalwa, died after a few months in office. His second, Moody Awori, lost his seat in the 2007 election.

    Kibaki’s third deputy, Kalonzo Musyoka, joined the president’s government during Kenya’s post-election violence of 2007-08. He left at the end of his term in 2013 to run with Raila Odinga in the 2013, 2017 and 2022 presidential elections.

    Kenya’s fourth president, Uhuru Kenyatta, was the only leader to have the same deputy, William Ruto, for his full term as president – from 2013 to 2022. However, relations between Kenyatta and Ruto were hardly rosy. The two fell out after the 2017 elections as Kenyatta teamed up with long-standing opposition leader, Raila Odinga. Ruto beat Odinga, Kenyatta’s favoured candidate in the 2022 elections.

    Lessons to learn

    First, because deputies are selected for their practical value, a person who made a good deputy at one point in time can come to be seen as a liability or threat as the political context changes.

    For example, at independence, Oginga Odinga made an excellent ally for Jomo Kenyatta. He had some resources and was a proven mobiliser. He brought a support base. However, within a few years, Odinga became a problem for the president as a more radical faction within the ruling party coalesced around him.

    Similarly, Ruto made an excellent ally for Uhuru Kenyatta when they both faced charges for crimes against humanity at the International Criminal Court. The two fell out once Kenyatta had won his second and final term, and Kenyatta turned to his succession.

    Gachagua was useful to Ruto in 2022. He had personal wealth, was an effective mobiliser and hailed from central Kenya where the election looked to be won or lost. However, once elected, Gachagua’s populist statements and reputation for ethnic bias became more of a liability.

    Second, as contexts change, someone else can soon come to be seen as more useful as second in command.

    For Jomo Kenyatta, Moi had shown his utility and loyalty during the “little general elections” of 1966, which effectively sidelined the Kenya People’s Union and Oginga Odinga.

    Kithure Kindiki, Kenya’s interior cabinet secretary, is the current frontrunner to replace Gachagua. He is seen as better able to negotiate with the international community, especially during a critical economic period for Kenya as it seeks new International Monetary Fund loans.

    Third, being the country’s vice or deputy president comes with a lot of opportunities to network. These interactions have often led individuals to be seen as a growing threat, or as actively plotting against the president. They may also be seen as a future challenger.

    History has shown that there is no ideal way of dealing with such a potential challenger, leading subsequent presidents to try different approaches.

    Current context

    Ruto and Gachagua have clearly fallen out. Their differences became apparent soon after the 2022 elections. However, they came into sharp relief in the face of anti-tax protests in June 2024. There were subsequent allegations that Gachagua and some of his allies had helped to finance the protests.

    The question, therefore, isn’t why they have fallen out but why Gachagua is being impeached now.

    Ultimately, the answer can only be known by a few individuals. But a clue may lie in what the timing achieves: distracting the public, showing that the government is taking action amid Kenya’s ongoing economic crisis, and undercutting Gachagua before he can build national networks.

    Ruto has the numbers in the senate to see the impeachment process through. But this is a dangerous game. Those sidelined have a habit of coming back to haunt their former allies.

    At the moment, most Kenyans support the impeachment process, but many also feel that Gachagua is being unfairly targeted – especially in central Kenya, where a majority oppose it.

    While a successful impeachment might see Gachagua barred from holding public office, this wouldn’t necessarily mean an end to his career as an effective political mobiliser.

    The next few months – and the narratives that emerge about why Ruto and Gachagua fell out – will be critical in determining both their futures.

    Gabrielle Lynch does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Kenya’s presidents have a long history of falling out with their deputies – Rigathi Gachagua’s impeachment would be no surprise – https://theconversation.com/kenyas-presidents-have-a-long-history-of-falling-out-with-their-deputies-rigathi-gachaguas-impeachment-would-be-no-surprise-241139


  • MIL-OSI Global: Scientists around the world report millions of new discoveries every year − but this explosive research growth wasn’t what experts predicted

    Source: The Conversation – USA – By David P. Baker, Professor of Sociology, Education and Demography, Penn State

    The number of research studies published globally has risen exponentially in the past decades. AP Photo/Frank Augstein, file

    Millions of scientific papers are published globally every year. These papers in science, technology, engineering, mathematics and medicine present discoveries that range from the mundane to the profound.

    Since 1900, the number of published scientific articles has doubled about every 10 to 15 years; since 1980, it has grown about 8% to 9% annually. This acceleration reflects the immense and ever-growing scope of research across countless topics, from the farthest reaches of the cosmos to the intricacies of life on Earth and human nature.
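    For readers who want to sanity-check those figures, the standard exponential-growth relationship connects an annual growth rate to a doubling time. The sketch below (plain Python, our arithmetic rather than numbers stated in the article) shows that 8% to 9% annual growth implies doubling roughly every 8 to 9 years – a faster pace than the earlier 10-to-15-year doubling:

```python
import math

def doubling_time(annual_rate: float) -> float:
    """Years for an exponentially growing quantity to double."""
    return math.log(2) / math.log(1 + annual_rate)

def annual_rate(doubling_years: float) -> float:
    """Annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_years) - 1

print(round(doubling_time(0.08), 1))    # ~9.0 years
print(round(doubling_time(0.09), 1))    # ~8.0 years
print(round(annual_rate(15) * 100, 1))  # ~4.7% per year
print(round(annual_rate(10) * 100, 1))  # ~7.2% per year
```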

    Derek de Solla Price wrote an influential book about the growth rate of science.
    The de Solla Price family/Wikimedia Commons

    Yet, this extraordinary expansion was once thought to be unsustainable. In his influential 1963 book, “Little Science, Big Science… And Beyond,” the founder of scientometrics – or data informetrics related to scientific publications – Derek de Solla Price famously predicted limits to scientific growth.

    He warned that the world would soon deplete its resources and talent pool for research. He imagined this would lead to a decline in new discoveries and potential crises in medicine, technology and the economy. At the time, scholars widely accepted his prediction of an impending slowdown in scientific progress.

    Faulty predictions

    In fact, science has spectacularly defied Price’s dire forecast. Instead of stagnation, the world now experiences “global mega-science” – a vast, ever-growing network of scientific discovery. This explosion of scientific production made Price’s prediction of collapse perhaps the most stunningly incorrect forecast in the study of science.

    Unfortunately, Price died in 1983, too early to realize his mistake.

    So, what explains the world’s sustained and dramatically increasing capacity for scientific research?

    We are sociologists who study higher education and science. Our new book, “Global Mega-Science: Universities, Research Collaborations, and Knowledge Production,” published on the 60th anniversary of Price’s fateful prediction, offers explanations for this rapid and sustained scientific growth. It traces the history of scientific discovery globally.

    Factors such as economic growth, warfare, space races and geopolitical competition have undoubtedly spurred research capacity. But these factors alone cannot account for the immense scale of today’s scientific enterprise.

    The education revolution: Science’s secret engine

    In many ways, the world’s scientific capacity has been built upon the educational aspirations of young adults pursuing higher education.

    Funding from higher education supports a large part of the modern scientific enterprise.
    AP Photo/Paul Sancya

    Over the past 125 years, increasing demand for and access to higher education has sparked a global education revolution. Today, more than two-fifths of the world’s young people ages 19-23 are enrolled in higher education, though with huge regional differences. This revolution is the engine driving scientific research capacity.

    Today, more than 38,000 universities and other higher-education institutions worldwide play a crucial role in scientific discovery. The educational mission, both publicly and privately funded, subsidizes the research mission, with a big part of students’ tuition money going toward supporting faculty.

    These faculty scientists balance their teaching with conducting extensive research. University-based scientists contribute 80% to 90% of the discoveries published each year in millions of papers.

    External research funding is still essential for specialized equipment, supplies and additional support for research time. But the day-to-day research capacity of universities, especially academics working in teams, forms the foundation of global scientific progress.

    Even the most generous national science and commercial research and development budgets cannot fully sustain the basic infrastructure and staffing needed for ongoing scientific discovery.

    Likewise, government labs and independent research institutes, such as the U.S. National Institutes of Health or Germany’s Max Planck Institutes, could not replace the production capacity that universities provide.

    Collaboration benefits science and society

    The past few decades have also seen a surge in global scientific collaborations. These arrangements leverage diverse talent from around the world to enhance the quality of research.

    International collaborations have led to millions of co-authored papers. International research partnerships were relatively rare before 1980, accounting for just over 7,000 papers, or about 2% of the global output that year. But by 2010 that number had surged to 440,000 papers, meaning 22% of the world’s scientific publications resulted from international collaborations.

    This growth, building on the “collaboration dividend,” continues today and has been shown to produce the highest-impact research.
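    Those shares also let one back out the implied total output in each year. A minimal check (our arithmetic, not figures stated directly in the article):

```python
def implied_total(collab_count: int, collab_share: float) -> int:
    """Back out total annual output from a collaboration count and its share."""
    return round(collab_count / collab_share)

# ~7,000 internationally co-authored papers were ~2% of 1980's global output:
print(implied_total(7_000, 0.02))    # 350000 papers
# 440,000 such papers were 22% of 2010's global output:
print(implied_total(440_000, 0.22))  # 2000000 papers
```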

    Universities tend to share academic goals with other universities and have wide networks and a culture of openness, which makes these collaborations relatively easy.

    Today, universities also play a key role in international supercollaborations involving teams of hundreds or even thousands of scientists. In these huge collaborations, researchers can tackle major questions they wouldn’t be able to in smaller groups with fewer resources.

    Supercollaborations have facilitated breakthroughs in understanding the intricate physics of the universe and the synthesis of evolution and genetics that scientists in a single country could never achieve alone.

    The IceCube collaboration, a prime example of a global megacollaboration, has made big strides in understanding neutrinos, which are ghostly particles from space that pass through Earth.
    Martin Wolf, IceCube/NSF

    The role of global hubs

    Hubs made up of universities from around the world have made scientific research thoroughly global. The first of these global hubs, consisting of dozens of North American research universities, began in the 1970s. They expanded to Europe in the 1980s and most recently to Southeast Asia.

    These regional hubs and alliances of universities link scientists from hundreds of universities to pursue collaborative research projects.

    Scientists at these universities have often transcended geopolitical boundaries, with Iranian researchers publishing papers with Americans, Germans collaborating with Russians and Ukrainians, and Chinese scientists working with their Japanese and Korean counterparts.

    The COVID-19 pandemic clearly demonstrated the immense scale of international collaboration in global megascience. Within just six months of the start of the pandemic, the world’s scientists had already published 23,000 scientific studies on the virus. These studies contributed to the rapid development of effective vaccines.

    With universities’ expanding global networks, the collaborations can spread through key research hubs to every part of the world.

    Is global megascience sustainable?

    Despite the impressive growth of scientific output, this brand of highly collaborative and transnational megascience does face challenges.

    On the one hand, birthrates in many countries that produce a lot of science are declining. On the other, many youth around the world, particularly those in low-income countries, have less access to higher education, although there is some recent progress in the Global South.

    Sustaining these global collaborations and this high rate of scientific output will mean expanding access to higher education. That’s because the funds from higher education subsidize research costs, and higher education trains the next generation of scientists.

    De Solla Price couldn’t have predicted how integral universities would be in driving global science. For better or worse, the future of scientific production is linked to the future of these institutions.

    David Baker receives funding from the U.S. National Science Foundation, U.S. National Institutes of Health, Fulbright, FNR Luxembourg, and the Qatar National Research Fund.

    Justin J.W. Powell has received funding for research on higher education and science from Germany’s BMBF, DFG, and VolkswagenStiftung; Luxembourg’s FNR; and Qatar’s QNRF.

    ref. Scientists around the world report millions of new discoveries every year − but this explosive research growth wasn’t what experts predicted – https://theconversation.com/scientists-around-the-world-report-millions-of-new-discoveries-every-year-but-this-explosive-research-growth-wasnt-what-experts-predicted-237274


  • MIL-OSI Global: No country still uses an electoral college − except the US

    Source: The Conversation – USA – By Joshua Holzer, Associate Professor of Political Science, Westminster College

    Every four years, Congress gathers to count electoral votes. AP Photo/J. Scott Applewhite

    The United States is the only democracy in the world where a presidential candidate can get the most popular votes and still lose the election. Thanks to the Electoral College, that has happened five times in the country’s history. The most recent examples are from 2000, when Al Gore won the popular vote but George W. Bush won the Electoral College after a U.S. Supreme Court ruling, and 2016, when Hillary Clinton got more votes nationwide than Donald Trump but lost in the Electoral College.

    The Founding Fathers did not invent the idea of an electoral college. Rather, they borrowed the concept from Europe, where it had been used to pick emperors for hundreds of years.

    As a scholar of presidential democracies around the world, I have studied how countries have used electoral colleges. None have been satisfied with the results. And except for the U.S., all have found other ways to choose their leaders.

    The Holy Roman Empire had seven electors: Three were members of the Catholic Church and four were significant members of the nobility. This image depicts, from left, the archbishop of Cologne, the archbishop of Mainz, the archbishop of Trier, the count palatine of the Rhine, the duke of Saxony, the margrave of Brandenburg and the king of Bohemia.
    Codex Balduini Trevirorum, c. 1340, Landeshauptarchiv Koblenz via Wikimedia Commons

    The origins of the US Electoral College

    The Holy Roman Empire was a loose confederation of territories that existed in central Europe from 962 to 1806. Unlike in most other monarchies, the emperor was not chosen by heredity. Instead, emperors were chosen by electors, who represented both secular and religious interests.

    As of 1356, there were seven electors: Four were hereditary nobles and three were chosen by the Catholic Church. By 1803, the total number of electors had increased to 10. Three years later, the empire fell.

    When the Founding Fathers were drafting the U.S. Constitution in 1787, the initial draft proposal called for the “National Executive,” which we now call the president, to be elected by the “National Legislature,” which we now call Congress. However, Virginia delegate George Mason viewed “making the Executive the mere creature of the Legislature as a violation of the fundamental principle of good Government,” and so the idea was rejected.

    Pennsylvania delegate James Wilson proposed that the president be elected by popular vote. However, many other delegates were adamant that there be an indirect way of electing the president to provide a buffer against what Thomas Jefferson called “well-meaning, but uninformed people.” Mason, for instance, suggested that allowing voters to pick the president would be akin to “refer(ring) a trial of colours to a blind man.”

    For 21 days, the founders debated how to elect the president, and they held more than 30 separate votes on the topic – more than on any other issue they discussed. Eventually, the complicated solution they agreed to was an early version of the electoral college system that exists today, a method in which neither Congress nor the people directly elect the president. Instead, each state gets a number of electoral votes equal to the number of U.S. House and Senate seats it is apportioned. When the states’ electoral votes are tallied, the candidate with the majority wins.

    James Madison, who was not fond of the Holy Roman Empire’s use of an electoral college, later recalled that the final decision on how to elect a U.S. president “was produced by fatigue and impatience.”

    After just two elections, in 1796 and 1800, problems with this system had become obvious. Chief among them was that electoral votes were cast only for president. The person who got the most electoral votes became president, and the person who came in second place – usually their leading opponent – became vice president. The current process of electing the president and vice president on a single ticket but with separate electoral votes was adopted in 1804 with the passage of the 12th Amendment.

    Some other questions about how the Electoral College system should work were clarified by federal laws through the years, including in 1887 and 1948.

    After the 2020 presidential election exposed additional flaws with the system, Congress further tweaked the process by passing legislation that sought to clarify how electoral votes are counted.

    James Madison disliked the idea of an electoral college.
    Chester Harding, via National Portrait Gallery

    Other electoral colleges

    After the U.S. Constitution went into effect, the idea of using an electoral college to indirectly elect a president spread to other republics.

    For example, in the Americas, Colombia adopted an electoral college in 1821. Chile adopted one in 1828. Argentina adopted one in 1853.

    In Europe, Finland adopted an electoral college to elect its president in 1925, and France adopted an electoral college in 1958.

    Over time, however, these countries changed their minds. All of them abandoned their electoral colleges and switched to directly electing their presidents by votes of the people. Colombia did so in 1910, Chile in 1925, France in 1965, Finland in 1994, and Argentina in 1995.

    The U.S. is the only democratic presidential system left that still uses an electoral college.

    A ‘popular’ alternative?

    There is an effort underway in the U.S. to replace the Electoral College. It may not even require amending the Constitution.

    The National Popular Vote Interstate Compact, currently agreed to by 17 U.S. states, including small states such as Delaware and big ones such as California, as well as the District of Columbia, is an agreement to award all of their electoral votes to whichever presidential candidate gets the most votes nationwide. It would take effect once enough states sign on that they would represent the 270-vote majority of electoral votes. The current list reaches 209 electoral votes.
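    The compact’s trigger is simple arithmetic: sum the signatories’ electoral votes and compare against 270. In the sketch below, the named states are actual compact members with their current electoral-vote counts, but the list is abbreviated – the “Others” entry is a hypothetical placeholder standing in for the remaining signatories:

```python
# Illustrative tally only: "Others" is a placeholder for the remaining
# signatories, not a real jurisdiction.
signatory_votes = {
    "California": 54,
    "New York": 28,
    "Illinois": 19,
    "New Jersey": 14,
    "Washington": 12,
    "Others": 82,
}

TRIGGER = 270  # a majority of the 538 total electoral votes

total = sum(signatory_votes.values())
print(total, total >= TRIGGER)  # 209 False -> the compact is not yet in force
```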

    A key problem with the interstate compact is that in races with more than two candidates, it could produce a winner who did not get a majority of the popular vote – that is, more than half of all voters could have chosen someone else.

    When Argentina, Chile, Colombia, Finland and France got rid of their electoral colleges, they did not replace them with a direct popular vote in which the person with the most votes wins. Instead, they all adopted a version of runoff voting. In those systems, winners are declared only when they receive support from more than half of those who cast ballots.

    Notably, neither the U.S. Electoral College nor the interstate compact that seeks to replace it ensures that presidents are supported by a majority of voters.

    Editor’s note: This story includes material from a story published on May 20, 2020.

    Joshua Holzer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. No country still uses an electoral college − except the US – https://theconversation.com/no-country-still-uses-an-electoral-college-except-the-us-240281


  • MIL-OSI Global: What is a communist, and what do communists believe?

    Source: The Conversation – USA – By Aminda Smith, Associate Professor of History, Michigan State University

    Seeking social change often requires collective action. champc/iStock / Getty Images Plus

    Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


    What is a communist? – Artie, age 10, Astoria, New York


    Simply put, a communist is someone who supports communism. I study the history of communism, which is a political and economic view.

    Communism has long been controversial, and in the U.S. today, reputable sources disagree about it. Some experts argue that communist views are well supported by historical evidence about the way societies have developed over time. Others suggest that history has shown communism not to work.

    Many of those appraisals are based on examples of people who tried to establish communism. Communists have launched revolutions in many places, including Russia and China. In five countries – China, North Korea, Laos, Cuba and Vietnam – communist parties control the current governments. The economic and political systems in those countries are not fully communist, but some might be working to transition from capitalism to communism.

    In part because the U.S. has difficult relationships with these countries, many Americans have negative views of communists and communism. To evaluate those countries and to decide your own opinions about communism in general, it is important to first be clear about what the principles of communism are.

    Communists believe that people should share wealth so that no one is too poor, no one is too rich, and everyone has enough to survive and have a good life.

    A communist might be a member of a Communist party, which is a political party – a group of people who want to play a role in government.

    The opening of the 2014 convention of the Communist Party of the United States of America.

    In communism, people work together to produce and distribute the things they need to live, such as food, clothing and entertainment. That does not mean that everything is shared at all times.

    In a communist society, individuals might still live in their own homes and have their own food, clothing and personal items such as televisions and cellphones. However, the places where these items were produced, such as factories and farms, would be owned by everyone.

    Similarly, a person might still create artistic products, such as works of literature or craftsmanship, on their own. The goal would not be to make money, though, but to share the work for everyone to enjoy.

    Communists support some form of collective ownership. Ownership by everyone would ensure that all members of society have equal rights to the products from the factories and farms because they would all be part owners of the enterprises.

    In such a society, everyone would also have equal political rights and would participate in governance together. Theoretically, communism should entail some form of democracy.

    What is Marxism?

    German philosopher Karl Marx.
    John Jabez Edwin Mayal via Wikimedia Commons

    Throughout history, there have been many different views on what communism is, how it should be organized and how it might be achieved. The most famous theories about communism are probably the ones that were developed by a German philosopher named Karl Marx. His ideas are often called Marxism.

    Marx studied history and observed that the way people produced goods and services was closely related to who held power. For example, in farming societies, those who owned the land had more power than those who did not.

    Marx also noticed that people with less power had often risen up, usually violently, to overthrow the powerful people. He called this concept class struggle. He believed this process was how societies developed from one system of government and economy to another. He claimed that class struggle led societies through a progression toward greater efficiency in the production of goods and services, higher levels of technology and wider distribution of social and political power.

    When Marx was alive in the 1800s, an economic and political system called capitalism had developed in many countries. In capitalist societies, the economy centered on factories. Factory owners had significant political and economic influence.

    Marx observed that in countries such as Germany, England and the United States, factory owners hired laborers who worked long hours producing goods such as shirts or tables. While the factory owners sold these products at high prices, they paid the workers very little. As a result, the factory owners became richer, while many workers struggled to afford the goods they produced or even to provide food for their families.

    Marx believed that this inequality would eventually lead to a worker uprising. During their revolution, Marx predicted, the workers would seize control of the factories and begin running them more fairly, leading to a new political system, known as socialism.

    Where does socialism fit in?

    A campaign poster from 1976, spotlighting the candidates from the Communist Party of the United States of America.
    Library of Congress

    Of course, if the workers staged a revolution, the factory owners would fight back. Marx thought that, immediately after the revolution, the workers would first need to create a strong government to prevent the owners from reestablishing capitalism. During that phase, which Marx called socialism, the workers would run the government while they continued moving away from capitalism and trying to create a more equal society.

    Marx thought people would eventually see that socialism was much better than capitalism because socialism would end exploitation while still allowing a society to continue moving toward better economic and political practices, but without inequality. Once that happened, a government would no longer be necessary.

    The society would become communist. There would still be governance, but not a government that was separated from the people. Rather, in a communist society, the people would govern together, and everyone would do some of the work and receive what they needed.

    There are Communist parties in many places, and many are currently working to move their countries toward communism. At this time, no country has yet made the transition to full communism, but many people still hope that transition will happen somewhere, sometime. Those people are communists. Communists are optimistic that humans can one day create a more fair and equal society.


    Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

    And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

    Aminda Smith does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What is a communist, and what do communists believe? – https://theconversation.com/what-is-a-communist-and-what-do-communists-believe-234255

    MIL OSI – Global Reports

  • MIL-OSI Global: A devastating hurricane doesn’t dramatically change how people vote – but in a close election, it can matter

    Source: The Conversation – USA – By Boris Heersink, Associate Professor of Political Science, Fordham University

    Residents walk on a damaged street in Sarasota, Fla., on Oct. 10, 2024. Eva Marie Uzcategui for The Washington Post via Getty Images

    North Carolina and Florida are changing administrative rules and, in some cases, issuing emergency funding that is intended to make it easier for people in areas damaged by Hurricanes Helene and Milton to vote.

    The recovery in both states is expected to extend far beyond the November 2024 election period. The majority of the people in the affected communities in North Carolina and Florida voted for Republican presidential nominee Donald Trump in 2020, making some election analysts wonder if some Trump supporters will be able to cast their ballots.

    Amy Lieberman, a politics and society editor at The Conversation U.S., spoke with Boris Heersink, a scholar of voters’ behavior after a natural disaster, to better understand if and how the recent hurricanes could shift the results of the 2024 presidential election.

    How can hurricanes create complications ahead of an election?

    A massive hurricane disrupts people’s lives in many important ways, including affecting people’s personal safety and where they can live. Ahead of an election, there are a lot of practical limitations about how an election can be executed – like if a person can still receive mail-in ballots at home or elsewhere, or if it is possible to still vote in person at their polling location if that building was destroyed or damaged.

    Another issue is whether people who have just lived through a natural disaster and will likely be dealing with the aftermath for weeks to come are focused on politics right now. Some might sit out the election because they simply have more important things to worry about.

    Beyond practical concerns, how else can a natural disaster influence an election?

    The other side of the equation, which is what political scientists like myself are mostly focusing on, is whether people take the fact that a natural disaster happened into consideration when they vote.

    Two scholars, Christopher Achen and Larry Bartels, have argued that sometimes voters are not great at figuring out how to incorporate bad things that happened to them into a voting position. In some cases, it is entirely fair to hold an elected official responsible for bad outcomes that affect people’s lives. But at other moments, bad things can happen to us without that being the fault of an incumbent president or governor. And voters should ideally be able to balance out these different types of bad things – those it is fair to punish elected officials for, and those for which it isn’t fair to hold them responsible.

    After all, a devastating hurricane is terrible, but it is not Kamala Harris’ fault that it happened. But Achen and Bartels argue that voters frequently still punish elected officials for random bad events like this.

    Their most famous example is the consequences of a series of shark attacks off the New Jersey coast in the summer of 1916. As a result of those attacks, the New Jersey tourism industry saw a major decline. While these findings are still being debated, Achen and Bartels argue that Jersey shore voters subsequently voted against Woodrow Wilson in the 1916 presidential election at a higher rate than they would have had the shark attacks not happened. They argue that voters did this even though Wilson had no involvement in the shark attacks.

    Kamala Harris visits a Hurricane Helene donation drop-off site for emergency supplies in Charlotte, N.C., on Oct. 5, 2024.
    Mario Tama/Getty Images

    How else do voters consider bad events when they vote?

    Scholars like John Gasper and Andrew Reeves argue that voters mostly care whether elected officials respond appropriately to a disaster. So, if the president does a good job reacting, voters do not actually punish them at all in the next election. However, voters can punish elected officials if they feel like the response is not correct.

    The fact that Hurricane Katrina hit Louisiana in 2005 was not the fault of then-President George W. Bush. But the perceived slowness of the government response is something a voter could have held him responsible for.

    How do voters’ political affiliations affect where and how they lay the blame?

    Colleagues and I have shown that how people interpret the combination of a disaster and the government response is likely colored by their own partisanship.

    We looked at both the effects of Superstorm Sandy on the 2012 presidential election and natural disasters’ impact on elections more broadly from 1972 through 2004. One core finding is that when presidents reject state officials’ disaster declaration requests, they lose votes in affected counties – but only if those counties were already more supportive of the opposite party.

    If there is a strong, positive government response, the incumbent president or their party can actually gain votes among voters affected by a disaster. So, Republicans affected by the hurricanes could become more inclined to vote against Harris if they feel like they are not getting the help they need. But it could also help Harris if affected Democrats feel like they are getting enough aid.

    The major takeaway is that if the government responds really effectively to a natural disaster or other emergency, there is not a huge electoral penalty – and there could even be a small reward.

    That is not irrelevant in a close election. If Republicans in affected areas in North Carolina feel the government response has been poor and it inspires them to turn out in higher numbers to punish Harris, that could matter. But if they feel like the response has been adequate, research suggests either no real effect on their support for Harris – or possibly even an increase in Harris voters.

    Donald Trump speaks with owners of a furniture store that was damaged during Hurricane Helene on Sept. 30, 2024, in Valdosta, Ga.
    Michael M. Santiago/Getty Images

    How much influence can a politician have on people assessing a government response?

    Scholars mostly assume that people affected can tell whether the government response was good or not. Trump and other Republicans are falsely saying that the response is slow and falsely claiming that Federal Emergency Management Agency money is being spent on immigrants who are not living in the country legally. There does not appear to be a slow government response to the hurricane in North Carolina, and there’s no evidence the response is insufficient in Florida, either.

    So, the question now is whether voters affected by these hurricanes will respond based on their actual lived experiences, or on how they are told to interpret those experiences.

    Boris Heersink receives funding from the Russell Sage Foundation.

    ref. A devastating hurricane doesn’t dramatically change how people vote – but in a close election, it can matter – https://theconversation.com/a-devastating-hurricane-doesnt-dramatically-change-how-people-vote-but-in-a-close-election-it-can-matter-241179


  • MIL-OSI Global: What does Springfield, Illinois, in 1908 tell us about Springfield, Ohio, in 2024?

    Source: The Conversation – USA – By Joseph Patrick Kelly, Professor of Literature and Director of Irish and Irish American Studies, College of Charleston

    Supporters gather at a campaign rally for Donald Trump in Butler, Pa., on Oct. 5, 2024. Jeff Swensen/Getty Images

    Lying about Black people is nothing new in political campaigning.

    Despite the thorough debunking of false rumors that Haitian immigrants were eating cats and dogs in Springfield, Ohio, former President Donald Trump and his GOP allies insist on repeating the lies.

    “If I have to create stories,” admitted JD Vance, Trump’s running mate, “that’s what I’m going to do.”

    While many political observers believe that these lies have, as The New York Times columnist Lydia Polgreen described, finally “crossed a truly unacceptable line,” in fact, white politicians have told brazen, fearmongering, racist lies about Black people for more than 100 years.

    One of the more notorious lies occurred in 1908 in another Springfield, this one in Illinois. As a historian who studies the impact of racism on democracy, I believe that what happened there and in other cities helps to clarify what Trump and Vance are trying to do in Springfield, Ohio, today.

    Lying when everyone knows you’re lying seems to be the point.

    New target, old message

    Springfield, Illinois, Abraham Lincoln’s home town, was, in 1908, a working-class city of just under 50,000 people – about the same size as its modern counterpart in Ohio.

    Because of the city’s manufacturing industries, Springfield was also an attractive place to live and work for Black men and women escaping the social oppression of the Deep South.

    The Black population of Springfield had been growing by about 4% annually, and by 1908, roughly 2,500 Black people were living there to work in the city’s manufacturing plants. As the wealth of some Black families rose, so too did racist fears among whites that Black migrants were taking their jobs.

    Rumors, spread among white residents through false newspaper reports, held that a Black man had raped a white woman.

    As the story went, a Black man broke through the screen door of a modest house in a white neighborhood. He supposedly dragged a 21-year-old white woman by her throat into the backyard, where he raped her. Or so the woman said.

    A couple of weeks after the incident, the woman admitted she lied. There was no Black man. There was no rape. But by then, telling the truth was too late. The rumor had triggered a wave of anti-Black violence.

    William English Walling, a white, liberal journalist from Kentucky, reported that Springfield’s white folks launched “deadly assaults on every negro they could lay their hands on, to sack and plunder their houses and stores, and to burn and murder.”

    For two days, the violence raged, while white “prosperous businessmen looked on” in complicit approval, Walling wrote. Several blocks in Black neighborhoods were burned, and at least eight Black men were killed.

    One of the men killed was William K. Donnegan. The 84-year-old died after his white attackers slit his throat and then hanged him with a clothesline from a tree near his home.

    As a dozen different rioters told Walling: “Why, the n—–s came to think they were as good as we are!”

    Telling the truth about racist tropes

    At the turn of the 20th century, racial tensions were most often expressed in sexual terms – Black men having sex with white women.

    That sexual anxiety was part of what cultural historians call a “master narrative,” a symbolic story that dramatizes white nationalism and the belief that citizenship and its benefits were reserved for one racial group at the expense of all others.

    One of the first to debunk this rape fantasy was Ida B. Wells, the Black editor and owner of the weekly “Memphis Free Speech.”

    In 1892, a white mob lynched one of her good friends, Thomas Moss, and two others associated with his cooperative People’s Grocery store. The Appeal Avalanche, a white Memphis newspaper, wrote that the lynching “was done decently and in order.”

    Ida B. Wells was among the NAACP’s founders.
    Library of Congress

    In her May 21, 1892, editorial about Moss’ death, Wells told a different story about “the same old racket – the new alarm about raping a white woman.”

    Wells explained that she worried that people who lived outside of the Deep South might believe the lies about Black people.

    “Nobody in this section of the country,” she wrote, not even the demagogues spreading rumors, “believes the old threadbare lie that Negro men rape white women.”

    Political fearmongering

    What happened in Wilmington, North Carolina, in 1898 was based on a deliberate, cynical election strategy of lies.

    At the turn of the 20th century, North Carolina’s disaffected, poor working-class white Populists joined forces with Black Republicans to form what were known as the Fusionists.

    In Wilmington, then the largest city in North Carolina, the Fusionists were able to vote out the white-nationalist Democratic Party in the early 1890s and became a symbol of hope for a democratic South and racial equality.

    They also became a target for Democrats seeking to regain power and restore white nationalism.

    A political cartoon from the Raleigh News & Observer, Aug. 13, 1898.
    North Carolina Collection, UNC Chapel Hill

    The spark came in the summer of 1898 when Rebecca Felton, the wife of a Georgia congressman and a leading women’s rights advocate, gave an address to Georgia’s Agricultural Society on Aug. 11 that sought to protect the virtue of white women.

    “If it needs lynching,” she said, “to protect a woman’s dearest possession from the ravening of beasts – then I say lynch; a thousand times a week if necessary.”

    In response, Alexander Manly, the Black editor of The Daily Record, in Wilmington, followed the lead of Ida B. Wells and attacked the myths of Black men. Manly pointed out in his August 1898 editorial that poor white women “are not any more particular in the matter of clandestine meetings with colored men than are the white men with colored women.”

    Democrats bent on stoking racial fears circulated Manly’s editorial throughout North Carolina before the November 1898 elections, decrying the “Outrageous Attack on White Women!” by “the scurrilous negro editor.”

    If that wasn’t enough to stir up North Carolina Democrats, party officials sent the Red Shirts, their white nationalist militia, to Wilmington to overthrow the city’s biracial government, install all white officials and restore white rule.

    To that end, a white mob destroyed Manly’s newspaper office, chased him and other Black leaders into exile, rampaged through Black neighborhoods and killed an untold number of Black men.

    It was a white nationalist coup d’etat.

    The great white protector

    In his modern-day attempt to divide working-class white people from working-class Black people, Vance has urged his supporters to ignore “the crybabies” in the mainstream media.

    “Keep the cat memes flowing!” he posted on X.

    An estimated 67 million people watched the U.S. presidential debate on ABC and heard Trump angrily proclaim: “They’re eating the dogs. They’re eating the cats. They’re eating … the pets of the people that live there.”

    Once again, the old narrative is resurrected.

    Joseph Kelly is not affiliated with any political party. In the past, he has been a volunteer with the Charleston County (SC) Democratic Party.

    ref. What does Springfield, Illinois, in 1908 tell us about Springfield, Ohio, in 2024? – https://theconversation.com/what-does-springfield-illinois-in-1908-tell-us-about-springfield-ohio-in-2024-239074


  • MIL-OSI Global: From Swift to Springsteen to Al Jolson, candidates keep trying to use celebrities to change voters’ songs

    Source: The Conversation – USA – By Matt Harris, Associate Professor of Political Science, Park University

    It’s 2016 all over again. And 2020, for that matter. Democrats are staring at what looks to be another coin flip election between their party’s nominee and Donald Trump.

    In an election that could come down to a few hundred thousand votes in a handful of states, every voter matters – no matter how you reach them. With that in mind, Democrats are communicating not just on matters of policy, but matters of pop culture.

    Specifically, Democrats are embracing football and Taylor Swift. The Harris-Walz campaign trotted out endorsements from 15 Pro Football Hall of Famers and sells Swiftie-style friendship bracelets on its campaign website, among other overtures. Swift herself has endorsed Kamala Harris.

    Tim Walz cited his experience as a football coach and mentioned Swift in the vice presidential debate.

    Democratic challenger and former NFLer Colin Allred, who is running to unseat GOP Sen. Ted Cruz of Texas, has put out ads in which he appears moments from taking to the gridiron.

    But how much does pop culture campaigning, if you will, matter? Does trying to link a campaign to a sport, or a culture, or a style of music actually influence elections? Looking to five different election campaigns in the past can give a sense of the effects, or lack thereof, of such campaigning.

    An ad for Texas Democrat Rep. Colin Allred, a former NFL player, stresses his football past in his bid to unseat GOP Sen. Ted Cruz.

    Reagan and Springsteen

    Any discussion of the embrace of pop culture by candidates should probably start with Ronald Reagan’s Bruce Springsteen era.

    Reagan, attempting to reach beyond his base, viewed 1984 as a vibes-based election and cited Springsteen as an exemplar of the hope his campaign wished to inspire. Springsteen rejected a request from Reagan’s camp to use his often-misunderstood “Born in the U.S.A.” on the campaign trail. The song’s lyrics describe a down-on-his-luck Vietnam War veteran, but if you don’t listen carefully to the lyrics, the song can sound like a celebration of veterans and being American.

    While Reagan went on to win 49 states in that year’s election, perhaps the biggest long-term impact of his courtship of Springsteen fans was to turn Springsteen from a relatively apolitical performer to a staunch supporter of the Democratic Party.

    In this way, Springsteen’s transformation mirrors that of Taylor Swift, with Marsha Blackburn, the Tennessee Republican senator, serving as her Reagan – the person who pushed the performer into the political arena after years on the sidelines.

    Springsteen and Kerry

    Springsteen’s foray into politics eventually led him to back Democratic presidential nominee John Kerry in 2004 with a series of concerts called the “Vote for Change” tour.

    Democratic presidential candidate John Kerry greets the crowd with musician Bruce Springsteen while campaigning in Columbus, Ohio, on Oct. 28, 2004.
    AP Photo/Laura Rauch

    Kerry, meanwhile, undertook his own efforts at cultural turf claiming. His attempts to demonstrate his bona fides as a sports-loving everyman went awry at times, when he flubbed the name of “Lambeau Field,” home of Wisconsin’s Green Bay Packers, and referred to a nonexistent Boston Red Sox player, “Manny Ortez.” The ill-fated sports references arguably didn’t hurt his campaign – he won Wisconsin and Massachusetts – but he was ridiculed for a photo-op hunting trip late in the campaign and went on to lose rural Midwestern voters decisively – as well as the election.

    Kerry’s dabbling with hunting imagery was perhaps an attempt to dull President George W. Bush’s advantage in perceived strength of leadership, which was in part burnished by his adoption of a cowboy persona.

    Harding, Jolson and the Cubs

    While Reagan’s attempt to woo 1980s rock fans is one of the best-known attempts to campaign through popular culture, it was far from the first.

    Sen. Warren Harding’s 1920 front porch campaign for president was given a jolt of enthusiasm by a visit from singer and actor Al Jolson. Harding was also visited in his hometown, Marion, Ohio, by other actors and celebrities and the Chicago Cubs.

    Harding’s strategy probably serves better as a template for things to come than as a decisive move in the 1920 election: His victory with over 60% of the popular vote suggests no celebrity could have saved Democrat James Cox.

    Bill Clinton and MTV

    As the Harris-Walz campaign tries to draw votes from Swift’s young fans, parallels can be drawn to Democratic Arkansas Gov. Bill Clinton’s attempts to embrace youth culture in the 1992 presidential election. Among other appearances, Clinton took questions from young voters on MTV and played saxophone on “The Arsenio Hall Show.”

    While the direct effect of Clinton’s forays into youth culture is difficult to measure, he did surge among young voters relative to Democrat Michael Dukakis’ 1988 presidential campaign.

    In his 1992 campaign, Bill Clinton went on MTV to answer young people’s questions, which included ‘If you had it to do over again, would you inhale?’

    Ford and football

    Any discussion of politicians embracing football culture would be incomplete without a discussion of the American president best at playing football, Gerald Ford, the vice president who became the nation’s 38th president in 1974, when Richard Nixon resigned during the Watergate scandal.

    Ford played center on two national championship teams at the University of Michigan. While not using his football player background to the same level as former football coach Walz did at the Democratic National Convention, Ford did make use of his football credentials on the stump during the 1976 presidential campaign and was joined on the campaign trail by Alabama football coach Paul “Bear” Bryant.

    But the votes of football fans were apparently not enough to keep Ford in the White House for long. He lost the 1976 election to Democrat Jimmy Carter.

    Potentially fruitful pickups

    Will the Harris-Walz strategy of recruiting voters through pop culture be successful? Swift’s fans are largely young, suburban women, and NFL fans are strewn across the political spectrum. There are potentially fruitful pickups in both camps. The candidates certainly think it matters: Walz said he “took football back” from Republicans, a claim disputed by Trump.

    Stressing pop culture credentials can also draw attention to a campaign, regardless of its persuasive effect. Clinton’s pop culture appearances generated coverage beyond the appearances themselves and were cost-effective for a campaign short on funds.

    This type of pop culture campaigning generates coverage, then, even if voters aren’t moved by thinking a candidate shares their love of football or pop music.

    Matt Harris does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. From Swift to Springsteen to Al Jolson, candidates keep trying to use celebrities to change voters’ songs – https://theconversation.com/from-swift-to-springsteen-to-al-jolson-candidates-keep-trying-to-use-celebrities-to-change-voters-songs-239381


  • MIL-OSI Global: As OpenAI attracts billions in new investment, its goal of balancing profit with purpose is getting more challenging to pull off

    Source: The Conversation – USA – By Alnoor Ebrahim, Thomas Schmidheiny Professor of International Business, Tufts University

    What’s in store for OpenAI is the subject of many anonymously sourced reports. AP Photo/Michael Dwyer

    OpenAI, the artificial intelligence company that developed the popular ChatGPT chatbot and the text-to-art program Dall-E, is at a crossroads. On Oct. 2, 2024, it announced that it had obtained US$6.6 billion in new funding from investors and that the business was worth an estimated $157 billion – making it only the second startup ever to be valued at over $100 billion.

    Unlike other big tech companies, OpenAI is a nonprofit with a for-profit subsidiary that is overseen by a nonprofit board of directors. Since its founding in 2015, OpenAI’s official mission has been “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”

    By late September 2024, The Associated Press, Reuters, The Wall Street Journal and many other media outlets were reporting that OpenAI plans to discard its nonprofit status and become a for-profit tech company managed by investors. These stories have all cited anonymous sources. The New York Times, referencing documents from the recent funding round, reported that unless this change happens within two years, the $6.6 billion in equity would become debt owed to the investors who provided that funding.

    The Conversation U.S. asked Alnoor Ebrahim, a Tufts University management scholar, to explain why OpenAI’s leaders’ reported plans to change its structure would be significant and potentially problematic.

    How have its top executives and board members responded?

    There has been a lot of leadership turmoil at OpenAI. The disagreements boiled over in November 2023, when its board briefly ousted Sam Altman, its CEO. He got his job back in less than a week, and then three board members resigned. The departing directors were advocates for building stronger guardrails and encouraging regulation to protect humanity from potential harms posed by AI.

    Over a dozen senior staff members have quit since then, including several other co-founders and executives responsible for overseeing OpenAI’s safety policies and practices. At least two of them have joined Anthropic, a rival founded by a former OpenAI executive responsible for AI safety. Some of the departing executives say that Altman has pushed the company to launch products prematurely.

    Safety “has taken a backseat to shiny products,” said OpenAI’s former safety team leader Jan Leike, who quit in May 2024.

    Open AI CEO Sam Altman, center, speaks at an event in September 2024.
    Bryan R. Smith/Pool Photo via AP

    Why would OpenAI’s structure change?

    OpenAI’s deep-pocketed investors cannot own shares in the organization under its existing nonprofit governance structure, nor can they get a seat on its board of directors. That’s because OpenAI is incorporated as a nonprofit whose purpose is to benefit society rather than private interests. Until now, all rounds of investments, including a reported total of $13 billion from Microsoft, have been channeled through a for-profit subsidiary that belongs to the nonprofit.

    The current structure allows OpenAI to accept money from private investors in exchange for a future portion of its profits. But those investors do not get a voting seat on the board, and their profits are “capped.” According to information previously made public, OpenAI’s original investors can’t earn more than 100 times the money they provided. The goal of this hybrid governance model is to balance profits with OpenAI’s safety-focused mission.

    Becoming a for-profit enterprise would make it possible for its investors to acquire ownership stakes in OpenAI and no longer have to face a cap on their potential profits. Down the road, OpenAI could also go public and raise capital on the stock market.

    Altman reportedly seeks to personally acquire a 7% equity stake in OpenAI, according to a Bloomberg article that cited unnamed sources.

    That arrangement is not allowed for nonprofit executives, according to BoardSource, an association of nonprofit board members and executives. Instead, the association explains, nonprofits “must reinvest surpluses back into the organization and its tax-exempt purpose.”

    What kind of company might OpenAI become?

    The Washington Post and other media outlets have reported, also citing unnamed sources, that OpenAI might become a “public benefit corporation” – a business that aims to benefit society and earn profits.

    Examples of businesses with this status, known as B Corps, include outdoor clothing and gear company Patagonia and eyewear maker Warby Parker.

    It’s more typical that a for-profit business – not a nonprofit – becomes a benefit corporation, according to the B Lab, a network that sets standards and offers certification for B Corps. It is unusual for a nonprofit to do this because nonprofit governance already requires those groups to benefit society.

    Boards of companies with this legal status are free to consider the interests of society, the environment and people who aren’t its shareholders, but that is not required. The board may still choose to make profits a top priority and can drop its benefit status to satisfy its investors. That is what online craft marketplace Etsy did in 2017, two years after becoming a publicly traded company.

    In my view, any attempt to convert a nonprofit into a public benefit corporation is a clear move away from focusing on the nonprofit’s mission. And there will be a risk that becoming a benefit corporation would just be a ploy to mask a shift toward focusing on revenue growth and investors’ profits.

    Many legal scholars and other experts are predicting that OpenAI will not do away with its hybrid ownership model entirely because of legal restrictions on the placement of nonprofit assets in private hands.

    But I think OpenAI has a possible workaround: It could try to dilute the nonprofit’s control by making it a minority shareholder in a new for-profit structure. This would effectively eliminate the nonprofit board’s power to hold the company accountable. Such a move could lead to an investigation by the office of the relevant state attorney general and potentially by the Internal Revenue Service.

    What could happen if OpenAI turns into a for-profit company?

    The stakes for society are high.

    AI’s potential harms are wide-ranging, and some are already apparent, such as deceptive political campaigns and bias in health care.

    If OpenAI, an industry leader, begins to focus more on earning profits than ensuring AI’s safety, I believe that these dangers could get worse. Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his artificial intelligence research, has cautioned that AI may exacerbate inequality by replacing “lots of mundane jobs.” He believes that there’s a 50% probability “that we’ll have to confront the problem of AI trying to take over” from humanity.

    And even if OpenAI did retain board members for whom safety is a top concern, the only common denominator for the members of its new corporate board would be their obligation to protect the interests of the company’s shareholders, who would expect to earn a profit. While such expectations are common on a for-profit board, they constitute a conflict of interest on a nonprofit board where mission must come first and board members cannot benefit financially from the organization’s work.

    The arrangement would, no doubt, please OpenAI’s investors. But would it be good for society? The purpose of nonprofit control over a for-profit subsidiary is to ensure that profit does not interfere with the nonprofit’s mission. Without guardrails to ensure that the board seeks to limit harm to humanity from AI, there would be little reason for it to prevent the company from maximizing profit, even if its chatbots and other AI products endanger society.

    Regardless of what OpenAI does, most artificial intelligence companies are already for-profit businesses. So, in my view, the only way to manage the potential harms is through better industry standards and regulations that are starting to take shape.

    California’s governor vetoed such a bill in September 2024 on the grounds it would slow innovation – but I believe slowing it down is exactly what is needed, given the dangers AI already poses to society.

    Alnoor Ebrahim does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. As OpenAI attracts billions in new investment, its goal of balancing profit with purpose is getting more challenging to pull off – https://theconversation.com/as-openai-attracts-billions-in-new-investment-its-goal-of-balancing-profit-with-purpose-is-getting-more-challenging-to-pull-off-240602

    MIL OSI – Global Reports

  • MIL-OSI Global: Godzilla at 70: The monster’s warning to humanity is still urgent

    Source: The Conversation – USA – By Amanda Kennell, Assistant Professor of East Asian Languages and Cultures, University of Notre Dame

    The monster in the 2023 movie “Godzilla Minus One.” Toho Co. Ltd., CC BY-ND

    The 2024 Nobel Peace Prize has been awarded to Nihon Hidankyo, the Japan Confederation of A- and H-bomb Sufferers Organizations. Many of these witnesses have spent their lives warning of the dangers of nuclear war – but initially, much of the world didn’t want to hear it.

    “The fates of those who survived the infernos of Hiroshima and Nagasaki were long concealed and neglected,” the Nobel committee noted in its announcement. Local groups of nuclear survivors created Nihon Hidankyo in 1956 to fight back against this erasure.

    Atomic bomb survivor Masao Ito, 82, speaks at the park across from the Atomic Bomb Dome in Hiroshima on May 15, 2023.
    Richard A. Brooks/AFP via Getty Images

    Around the same time that Nihon Hidankyo was formed, Japan produced another warning: a towering monster who topples Tokyo with blasts of irradiated breath. The 1954 film “Godzilla” launched a franchise that has been warning viewers to take better care of the Earth for the past 70 years.

    We study popular Japanese media and business ethics and sustainability, but we found a common interest in Godzilla after the 2011 earthquake, tsunami and meltdown at Japan’s Fukushima Daiichi nuclear plant. In our view, these films convey a vital message about Earth’s creeping environmental catastrophe. Few survivors are left to warn humanity about the effects of nuclear weapons, but Godzilla remains eternal.

    Into the atomic age

    By 1954, Japan had survived almost a decade of nuclear exposure. In addition to the bombings of Hiroshima and Nagasaki, the Japanese people were affected by a series of U.S. nuclear tests in the Bikini Atoll.

    When the U.S. tested its first deliverable hydrogen bomb in 1954, the devastation reached far outside the expected damage zone. Though it was outside the restricted zone, the Japanese fishing boat Lucky Dragon No. 5 and its crew were doused with irradiated ash. All fell ill, and one fisherman died within the year. Their tragedy was widely covered in the Japanese press as it unfolded.

    The Castle Bravo hydrogen bomb test on March 1, 1954, produced an explosion equivalent to 15 megatons of TNT, more than 2.5 times what scientists had expected. It released large quantities of radioactive debris into the atmosphere.

    This event is echoed in a scene at the beginning of “Godzilla” in which helpless Japanese boats are destroyed by an invisible force.

    “Godzilla” is full of deep social debates, complex characters and cutting-edge special effects for its time. Much of the film involves characters discussing their responsibilities – to each other, to society and to the environment.

    This seriousness, like the film itself, was practically buried outside of Japan by an alter ego, 1956’s “Godzilla, King of the Monsters!” American licensors cut the 1954 film apart, removed slow scenes, shot new footage featuring Canadian actor Raymond Burr, spliced it all together and dubbed their creation in English with an action-oriented script they wrote themselves.

    This version was what people outside of Japan knew as “Godzilla” until the Japanese film was released internationally for its 50th anniversary in 2004.

    From radiation to pollution

    While “King of the Monsters!” traveled the world, “Godzilla” spawned dozens of Japanese sequels and spinoffs. Godzilla slowly morphed from a murderous monster into a monstrous defender of humanity in the Japanese films, which was also reflected in the later U.S.-made films.

    In 1971, a new, younger creative team tried to define Godzilla for a new era with “Godzilla vs. Hedorah.” Director Yoshimitsu Banno joined the movie’s crew while he was promoting a recently completed documentary about natural disasters. That experience inspired him to redirect Godzilla from nuclear issues to pollution.

    World War II was fading from public memory. So were the massive Anpo protests of 1959 and 1960, which had mobilized up to one-third of the Japanese people to oppose renewal of the U.S.-Japan security treaty. Participants included housewives concerned by the news that fish caught by the Lucky Dragon No. 5 had been sold in Japanese grocery stores.

    At the same time, pollution was soaring. In 1969, Michiko Ishimure published “Paradise in the Sea of Sorrow: Our Minamata Disease,” a book that’s often viewed as a Japanese counterpart to “Silent Spring,” Rachel Carson’s environmental classic. Ishimure’s poetic descriptions of lives ruined by the Chisso Corp.’s dumping of methyl mercury into the Shiranui Sea awoke many in Japan to their government’s numerous failures to protect the public from industrial pollution.

    The Chisso Corp. released toxic methylmercury into Minamata Bay from 1932 to 1968, poisoning tens of thousands of people who ate local seafood.

    “Godzilla vs. Hedorah” is about Godzilla’s battles against Hedorah, a crash-landed alien that grows to monstrous size by feeding on toxic sludge and other forms of pollution. The film opens with a woman singing jazzily about environmental apocalypse as young people dance with abandon in an underground club.

    This combination of hopelessness and hedonism continues in an uneven film that includes everything from an extended shot of an oil slick-covered kitten to an animated sequence to Godzilla awkwardly levitating itself with its irradiated breath.

    After Godzilla defeats Hedorah at the end of the film, it pulls a handful of toxic sludge out of Hedorah’s torso, gazes at the sludge, then turns to stare at its human spectators – both those onscreen and the film’s audience. The message is clear: Don’t just lazily sing about imminent doom – shape up and do something.

    Official Japanese trailer for ‘Godzilla vs. Hedorah’

    “Godzilla vs. Hedorah” bombed at the box office but became a cult hit over time. Its positioning of Godzilla between Earth and those who would harm it resonates today in two separate Godzilla franchises.

    One line of movies comes from the original Japanese studio that produced “Godzilla.” The other line is produced by U.S. licensors making eco-blockbusters that merge the environmentalism of “Godzilla” with the spectacle of “King of the Monsters.”

    A meltdown of public trust

    The 2011 Fukushima disaster has now become part of the Japanese people’s collective memory. Cleanup and decommissioning of the damaged nuclear plant continues, amid controversies around ongoing releases of radioactive water used to cool the plant. Some residents are allowed to visit their homes but can’t move back there while thousands of workers remove topsoil, branches and other materials to decontaminate these areas.

    Before Fukushima, Japan derived one-third of its electricity from nuclear power. Public attitudes toward nuclear energy hardened after the disaster, especially as investigations showed that regulators had underestimated risks at the site. Although Japan needs to import about 90% of the energy it uses, today over 70% of the public opposes nuclear power.

    The first Japanese “Godzilla” film released after the Fukushima disaster, “Shin Godzilla” (2016), reboots the franchise in contemporary Japan with a new type of Godzilla, eerily echoing both the damage from Fukushima’s triple disaster and the government’s response to it. When the Japanese government is left leaderless and in disarray after the initial counterattacks on Godzilla, a government official teams up with an American special envoy to freeze the newly named Godzilla in its tracks, before a fearful world unleashes its nuclear weapons once again.

    Their success suggests that while national governments have an important role to play in major disasters, successful recovery requires people who are empowered to act as individuals.

    The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Godzilla at 70: The monster’s warning to humanity is still urgent – https://theconversation.com/godzilla-at-70-the-monsters-warning-to-humanity-is-still-urgent-237934


  • MIL-OSI Global: The return of 90s culture echoes a backlash to feminism that we’ve seen throughout history

    Source: The Conversation – UK – By Julie Whiteman, Lecturer in Marketing, University of Birmingham

    I came of age in the 1990s and lived through the heavily gendered pop culture of Spice Girls and All Saints, Oasis and Blur, of lads and ladettes outdoing each other in heavy drinking and sexual exploits.

    Now in my 40s, I thought this brash and overtly sexist culture had faded out. It appeared to have been replaced by a socially progressive and inclusive generation focused on body and sex positivity, gender and sexual fluidity. And so I was surprised to see my generation Z research participants romanticise the 1990s as a belle époque.

    First it was Sex and the City, then lads’ mag Loaded and now Oasis. Popular culture from the 1990s is having a moment in the mid-2020s. The 90s have been a stylistic and cultural influence on youth culture for the best part of a decade, with large amounts of money invested in big-name reboots and reunions.

    I began researching young adults’ sexual politics and their relationship to popular culture back in 2016. It was clear from my observations of the clothing, social media and references back then that the 90s were a major cultural influence. I remember being surprised by the popularity of the TV shows like Friends and musicians including Shaggy, Oasis and Suede from my own youth.

    Every generation holds a romanticised nostalgia for the fashion, music and attitudes of the one before. When I was a teenager, my friends and I romanticised the music, fashion and sense of freedom we believed characterised the 60s and 70s. This view, however, did not align with my parents’ and their peers’ recollections of that time.




    Read more:
    Sick of reboots? How ‘nostalgia bait’ profits off Millennial and Gen Z’s childhood memories


    What is most interesting here is the apparent contradiction in values. The objectification of women at the heart of 90s pop culture does not gel with what we think of as the sexually open, progressive politics of generation Z. But having studied the intersection of pop culture and gender, I see this current resurgence as part of a misogynistic backlash to feminist progress – something that feminist scholars have highlighted as a typical pattern for years.

    Much of 90s popular culture is inherently misogynistic. Loaded and other now-defunct lads’ mags were infamous for their brutal objectification of women, including advice on how to get women into bed by almost any means. The celebrated lad culture epitomised by the likes of Oasis encouraged “men to be men”, with all the macho aggression and limited emotional range that implied.

    A damning 2012 National Union of Students report on sexual harassment and assault on university campuses made explicit links to the prevalence of lad culture in UK higher education. It argued lad culture at best objectifies and is dismissive of women, and at worst glamorises sexual assault.




    Read more:
    Sexual strangulation has become popular – but that doesn’t mean it’s wanted


    Gen Z is widely considered a generation of social activists, having grown up in the shadow of movements like #MeToo and the Women’s March that emerged in protest of the election of Donald Trump as US president. These cultural touchpoints in this generation’s upbringing highlight intersections of sex and power.

    Some young consumers have acknowledged this mismatch, describing Sex and the City as “outdated” and “cringey”. And incoming Loaded editor Danni Levy seems aware of it too, saying the relaunch is necessary because of the “world gone PC mad”.

    Why is 90s culture popular now?

    I argue the resurgence of 90s popular culture is actually part of a backlash against the progressive understandings of gender and sexuality associated with generation Z.

    Research indicates that gen Z men are less likely to support feminism than baby boomers. Young men and boys are increasingly being influenced by figures like self-proclaimed misogynist Andrew Tate, who faces charges of rape and human trafficking among other offences.

    While enjoying 90s television of course doesn’t mean you hold the same misogynistic views as Tate, I believe some popular culture is central to a continuum of backlash against feminist progress.

    To explain this, I suggest turning to feminist scholars – including one of my own 90s favourites, Susan Faludi, whose excellent 1992 book Backlash: The undeclared war against women details multiple periods of backlash against women’s liberation dating as far back as 195 BC. Each of these is linked to repeated “crises of masculinity”.

    Much feminist writing details how the very notion of masculinity depends on a subordinate femininity. And so, Faludi argues, advances in feminism equal a crisis of masculinity. Progress begets backlash, and popular culture is a key site where this takes place.

    Through my research I work to detail the subtle and nuanced ways this happens. I am currently researching how popular culture interprets and remixes progressive ideas like sex and gender positivity.

    At first glance, songs, films and shows may seem to be supportive of women’s sexual liberation, but on closer inspection they can reinforce traditional ideas of what it is to be a woman, or what it is to be attractive. Katy Perry’s recent music video Woman’s World is a classic example of this. Its lyrical appropriation of feminist messages of empowerment is delivered in an outdated visual style that adheres to the male gaze.

    Perry and her dancers strut around in swimwear costumes adapted to mimic various “masculine” professions. Critiqued for its lack of authenticity, Perry’s video represents a male sexual dreamworld that is inconsistent with the feminist politics it links itself to.

    There is often, in examples like this, a blurring of feminist and anti-feminist ideas – where it seems as though feminism is so commonsense it is no longer necessary, and is therefore neutralised.

    A multitude of literature on female sexual desire has emerged in the last few years. It is wide-ranging and imaginative. And yet, much of 90s popular culture flattens this complexity, painting female desire as only a desire to be desired by men.

    It prioritises male pleasure and advocates for their sexual dominance over women, reverting to understandings of “acceptable” sex as heterosexual, monogamous and male-led. Despite years of feminist progress, popular culture continues to teach us that women are objects of male sexual fantasy.

    Julie Whiteman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. The return of 90s culture echoes a backlash to feminism that we’ve seen throughout history – https://theconversation.com/the-return-of-90s-culture-echoes-a-backlash-to-feminism-that-weve-seen-throughout-history-238162


  • MIL-OSI Global: How Mpox anti-vaxx conspiracies target and stigmatise LGBTQ+ people

    Source: The Conversation – UK – By Helen McCarthy, Doctoral Researcher in Criminology and Sociology, York St John University

    According to some conspiracy theorists posting on alternative, uncensored social media networks, Mpox is another “scamdemic”, created by a powerful elite to cull populations and generate profit for “big pharma”. According to these social media users, anyone who takes the Mpox vaccine inevitably faces heart attack and death.

    Other Mpox conspiracies target hate at LGBTQ+ people.

    Through my PhD research into anti-vaccination misinformation, I’ve collected thousands of social media posts, videos, images and links from anti-vaccination Telegram channels, Substack newsletters and Gab groups. Gab Social is a social networking site known for hosting right-wing political content. These platforms are unique in their permissive approach to moderation. Users can post virtually anything they want without restraint.

    According to 2023 research, platforms like Gab have become the home of many “alt-right” content creators who have been de-platformed from mainstream social media channels like Facebook and Instagram. Mpox misinformation is thriving in these online locations.

    Sexuality and stigma

    In the early days of the COVID pandemic, a study identified that misinformation on social media platforms like Facebook, X (formerly known as Twitter) and YouTube frequently blamed specific social groups for infection surges. Now, it’s Mpox’s turn.

    One Substack creator, for example, considers gay and bisexual men engaging in “high-risk sexual behaviour” a threat to the heterosexual population. He argues abstinence is the only solution – but only for men who have sex with men.

    As well as accusing gay and bisexual men of having a “perverted lifestyle that goes against nature and God’s laws”, some anti-vaxx content creators stigmatise people with Mpox as a hidden enemy, who could be “teaching in schools and indoctrinating children”.

    One common anti-vaxx conspiracy theory is “vaccine shedding”. This is the idea that vaccinated people can harm the unvaccinated through any kind of contact. One online conspiracy states the Mpox vaccine is particularly prone to shedding. Gay and bisexual men, then, are portrayed as dangerous whether they’re vaccinated or not.

    Mpox is routinely characterised by conspiracy theorists as a virus for immoral people. As a result, some anti-vaxx perspectives are shockingly callous – one commenter claims they wouldn’t care at all if “the gays and communists” died from the Mpox vaccine.

    Misinformation surrounding Mpox and the vaccine is peppered with such homophobic narratives of infection and contamination – and it’s familiar territory. People suffering from HIV and Aids in the 1980s and 1990s were relentlessly stigmatised as a dangerous other.

    While online conspiracy theories present those with Mpox as a menace, in reality, there have only been a small number of mild Mpox cases identified in the UK since 2022. Though the majority of confirmed cases of Mpox in the UK have been in gay and bisexual men – and Mpox can be transmitted through close sexual contact – people can also become infected if they’re exposed to coughing and sneezing, or share clothing, bedding and towels with an infected person.

    Moderation and misinformation

    In August 2024, a new strain of Mpox was identified in the Democratic Republic of the Congo and some neighbouring countries. An estimated 10 million vaccines are needed to meet demand in affected African nations. In September 2024, the UK government ordered 150,000 doses of an Mpox vaccine to be distributed among gay and bisexual men and healthcare and humanitarian workers who may be exposed.

    Just as many of us might check a reliable, verified medical source to find out more about Mpox, so alternative social media users look to the sources they trust. This commonly includes doctors blowing the whistle on alleged vaccine injury, conspiracy theory “news” sites and prominent right wing figures like Tucker Carlson. People selling alternative remedies and products promising miraculous detox are never far away to profit from vaccine misinformation.

    Users share these sources across Gab groups, comment threads and Telegram channels, layering their own beliefs on top. This generates even more views and shares, which is one of the reasons why social media is such a good incubator for conspiracy theories and misinformation.

    Another reason is the lack of content moderation on alternative social media sites. Substack describes itself as “a place for independent writing”. Users are not supposed to share content that incites violence, contains sex or nudity, or promotes illegal activity. Telegram takes a similar approach. Gab also draws the line at illegal content, but mainly encourages users to hide or ignore content they don’t want to see.

    The arguments for or against unrestrained free speech on the internet are complex. But sites like Gab reveal what an unmoderated internet can look like – hate of every variety can find a home here if that’s what the users choose to post. Mpox is just another topic to generate even more shareable content.

    Helen McCarthy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How Mpox anti-vaxx conspiracies target and stigmatise LGBTQ+ people – https://theconversation.com/how-mpox-anti-vaxx-conspiracies-target-and-stigmatise-lgbtq-people-239981


  • MIL-OSI Global: How Sally Rooney came to be dubbed the ‘voice of a generation’

    Source: The Conversation – UK – By Ellen Wiles, Senior Lecturer in Creative Writing, University of Exeter

    Sally Rooney’s new novel, Intermezzo, is finally here – and nearly everyone I know seems to be reading it. It’s almost like the pre-streaming days, when everyone would settle on the sofa at the same time to watch the new hit TV series. The sense that we were all part of the same unfolding experience of a story was part of the joy.

    Not many authors can achieve that in this era of the digital kaleidoscope, when myriad creative experiences can be accessed at the touch of a button. Rooney’s cult status has led to her being described as the “voice of a generation”. The label generally refers to an author whose work particularly resonates with people in their 20s and 30s. But why have Rooney’s books had this effect? And who were the literary voices of previous generations?

    Logically, of course, the phrase is inaccurate when applied to any single writer. Generations include vastly different cohorts and people from diverse backgrounds, and no authorial voice can actually represent them all. Rooney couldn’t, even if everyone on the planet were reading Intermezzo right now – which they’re not. At least, not quite. And yet, as a phrase used to describe a writer whose work has had a notably greater impact than most others, it is worth interrogating.




    To be the person crowned with this label – to have to embody “the voice of a generation” – must feel simultaneously like an honour and a burden. Rooney herself has outwardly rejected it. In 2018 she told the Guardian: “I certainly never intended to speak for anyone other than myself. Even myself I find it difficult to speak for.”

    And yet she invariably speaks persuasively and cogently in public events about her books: an ability which no doubt stems from her background as a champion debater.

    Rooney speaks about Palestine during the launch of her new novel, Intermezzo.

    This ability also brings a rare clarity to her writing. Rooney has a knack for describing with precision, and also with lyricism, the textural experience of being a young person in the world, particularly an intelligent yet lonely young person. Her characters feel almost as strongly about big ideas as they do about their animal desires.




    Read more:
    How does someone become the ‘voice of a generation’? A brief history of the concept


    It’s a hard time to be young. Rooney understands and engages with the high cost of living, precarious jobs, stark social inequality and the climate crisis in her novels. Yet these ideas and political concerns never subsume the specific human characters, in specific Irish settings, that lie at the heart of each story. These are surely some of the intersecting reasons why her fiction has resonated so widely with the under 30s.

    Intermezzo is distinguished from Rooney’s previous two novels by its interrogation of intimate relationships that are perceived as highly unconventional, and its exploration of how the characters negotiate that social tension. I like to think that’s why it has sparked so much interest – but I may well be biased, since my forthcoming novel, The Unexpected, does the same thing, albeit with a co-parenting angle.

    Voices of generations past

    Looking back a generation, Zadie Smith’s novel White Teeth, published in 2000 when she was in her early 20s, sparked a comparable reading fever, and prompted the same “voice of a generation” label.

    Smith broke new ground back then with the fresh, funny and profound quality of her writing about the multicultural community of north-west London, particularly through her sparkling dialogue. Like Rooney’s fiction, Smith’s addresses pressing political issues, notably relating to race, class and migration, and yet those concerns never overpower the vivid individuality of her characters.

    Like Rooney, Smith is a compelling public speaker, articulating her ideas with directness and wit. Her clear public “voice” surely helped the “voice of a generation” label to adhere. Yet Smith similarly rejected the idea that she had ever sought to represent any generation or group through her fiction. Conversely, she has denied even having a singular “voice” that might be linked to arbitrary aspects of her autobiography. Instead, she describes always having had multiple voices in her head, arguing that good fiction actually stems from a productive self-doubt, combined with a sense of compassion and curiosity about other people and the world.

    Turning the dial back further, into the 20th century, the so-called “voices of a generation” that come to mind are mostly white men: Bret Easton Ellis and J.D. Salinger, for instance, in the US; and Martin Amis and Ian McEwan in the UK.

    It is heartening that fiction is no longer so dominated by male writers, especially when fiction readers remain predominantly female. And over the last two decades, it has been great to witness the championing of more diverse authors in the publishing industry: a shift which has been long overdue.

    Still, as the real world appears to become increasingly divided through social media bubbles and extremist politics, it seems more important than ever to hold onto the vital role of fiction. Not as a loudspeaker for authorial “voices” that are assumed to represent neatly defined groups of people, but as a portal to imagined voices that reveal how unique yet interconnected we all are. Fiction is a force that can draw us together, regardless of our backgrounds, and increase our empathy for one another.

    If a single writer can, as Rooney has, spark so many people to engage collectively in deep appreciation of her fiction, then it seems important to find a shorthand to capture that. If “the voice of a generation” is too exclusive, perhaps “a voice for a generation” is a more nuanced alternative.





    Ellen Wiles is the author of the new novel, The Unexpected – out on 21 November 2024 from HQ (HarperCollins).

    ref. How Sally Rooney came to be dubbed the ‘voice of a generation’ – https://theconversation.com/how-sally-rooney-came-to-be-dubbed-the-voice-of-a-generation-240063


  • MIL-OSI Global: How does someone become the ‘voice of a generation’? A brief history of the concept

    Source: The Conversation – UK – By Helen Kingstone, Senior Lecturer in English Literature, Royal Holloway University of London

    Sally Rooney, author of Normal People and now Intermezzo, keeps being called “the voice of a generation”. And she’s just the latest in a sequence of authors to get this accolade.

    In 1991, Douglas Coupland’s novel Generation X supposedly made him the “voice of” that generation. Looking further back, J.D. Salinger’s first and only novel, The Catcher in the Rye (1951), seemed to capture the voice of a generation at the time, and has resonated with successive generations of awkward and disaffected teenagers ever since.

    What’s behind this phenomenon is generational thinking. It seems to be everywhere at the moment, providing the media with easy taglines, spreading cliches and unnecessarily sowing division. But its history goes back far beyond even the baby-boomers.

    In the 19th century, after the radical upheavals of the Enlightenment, the “age of revolutions” and the Industrial Revolution, some people wondered if perhaps they could reject tradition completely. Groups of young artists began to rebel against a model of discipleship that required them to learn from their elders.

    Instead of following the art world’s top-down, paternalistic apprenticeship model, these fraternities and brotherhoods (yes, they were mainly men) declared that they were ushering in a new dawn in art.

    The Pre-Raphaelite Brotherhood, for example, though now viewed as quaint, were radicals in their Victorian moment, as were the impressionists 25 years later. These tight-knit groups of artists had a strong sense of generational identity, rebelling against their predecessors.

    In one important way, however, they were different from the modern “voice of a generation” figures because these groups also saw themselves as rebelling against their own peers. We now might see them as iconic of their generation, but at the time, they were rejects, though elite ones – bohemian in the original sense. Crucially, they were honest about their oddity. They knew they were unusual, so they didn’t claim to be speaking for everyone.

    This paradox highlights one of the challenges of history: that we’re understandably most captivated by people who were “ahead of their time”, but these people are therefore probably not representative of their time.




    Read more:
    How Sally Rooney came to be dubbed the ‘voice of a generation’


    The origins of generational thinking

    The idea of generations as self-conscious group identities came into being with the trauma and upheaval of the first world war. Over the next couple of decades, writers who had come of age during the war narrated how it had decimated and traumatised their generation.

    Examples include Erich Maria Remarque’s novel All Quiet on the Western Front (1928), R.C. Sherriff’s play Journey’s End (1928) and Vera Brittain’s autobiography, Testament of Youth (1933).

    These stories all express an angry sense of having been “lions led by donkeys”. They envisage an unbridgeable divide between their own front-line generation, sacrificing its youth, and an older generation of complacent army commanders.

    They also trace a second divide between themselves and the slightly younger generation who came of age after the war’s end and didn’t want to think about it. Brittain poignantly describes how this new fresh-faced generation experienced her grief as passé.

    These first world war writers did consciously speak as the voice of a specific “lost generation”. But like any such label, this also obscures a more complex reality.

    Not all first world war soldiers were in the first flush of youth like Wilfred Owen, Robert Graves, Remarque and Sherriff. In fact, men were recruited up to the age of 41 in Britain, 43 in Russia, 48 in France and 50 in Austria-Hungary.

    As a result, between 3 million and 4 million women were widowed by the war, and between 6 million and 8 million children were left fatherless. On this reckoning, there is probably more than one first world war generation.

    This complexity highlights one of the tricky things about the generations concept. It refers both to relationships within families (parents and children) and to commonalities beyond the family, among contemporaries across society. Sometimes these two dimensions align neatly, as in the “lost generation”, but sometimes they don’t, as with those older soldiers who don’t fit that label.

    Why generational labels matter

    My research has shown that generational ideas are real and do matter – but need to be handled with care.

    Generation talk all too often slips into generalisation, which can then be used to sow division. The word “generationalism” has been coined by researchers to highlight this issue.

    To counteract this, a network of researchers and third-sector colleagues, led by me and sociologist Jennie Bristow, has produced a guide entitled Talking About Generations: 5 Questions to Ask Yourself, which encourages people working with the concept of generation to pause and check their motivations and meaning before using the term.

    Labels like “the voice of a generation” always depend on speculating about what other people are thinking and feeling. This risks flattening and homogenising generational experience – not all millennials are Sally Rooneys, after all.

    Rooney herself has said in an interview: “I certainly never intended to speak for anyone other than myself.” Any “voice of a generation” needs, in practice, to be plural “voices”.





    Helen Kingstone has received funding from Wellcome: it funded the research behind the guide for ‘Talking about Generations’.

    ref. How does someone become the ‘voice of a generation’? A brief history of the concept – https://theconversation.com/how-does-someone-become-the-voice-of-a-generation-a-brief-history-of-the-concept-240495


  • MIL-OSI Global: When AI plays favourites: How algorithmic bias shapes the hiring process

    Source: The Conversation – Canada – By Mehnaz Rafi, PhD Candidate, Haskayne School of Business, University of Calgary

    Given the rapid integration of AI into human resource management across many organizations, it’s important to raise awareness about the complex ethical challenges it presents. (Shutterstock)

    In 2019, a public interest group filed a U.S. federal complaint against HireVue, an artificial intelligence hiring tool, alleging deceptive hiring practices. The software, which has been adopted by hundreds of companies, favoured certain facial expressions, speaking styles and tones of voice, disproportionately disadvantaging minority candidates.

    The Electronic Privacy Information Center argued HireVue’s results were “biased, unprovable and not replicable.” Though the company has since stopped using facial recognition, concerns remain about biases in other biometric data, such as speech patterns.

    Similarly, Amazon stopped using its AI recruitment tool, as reported in 2018, after discovering it was biased against women. The algorithm, trained on male-dominated resumes submitted over 10 years, favoured male candidates by downgrading applications that included the word “women’s” and penalizing graduates of women’s colleges. Engineers tried to address these biases, but could not guarantee neutrality, leading to the project’s cancellation.

    These examples highlight a growing concern in recruitment and selection: while some companies are using AI to remove human bias from hiring, it can often reinforce and amplify existing inequalities. Given the rapid integration of AI into human resource management across many organizations, it’s important to raise awareness about the complex ethical challenges it presents.

    Ways AI can create bias

    As companies increasingly rely on algorithms to make critical hiring decisions, it’s crucial to be aware of the following ways AI can create bias in hiring:

    1. Bias in training data. AI systems rely on large datasets — referred to as training data — to learn patterns and make decisions, but their accuracy and fairness are only as good as the data they are trained on. If this data contains historical hiring biases that favour specific demographics, the AI will adopt and reproduce those same biases. Amazon’s AI tool, for example, was trained on resumes from a male-dominated industry, which led to gender bias.

    2. Flawed data sampling. Flawed data sampling occurs when the dataset used to train an algorithm is not representative of the broader population it’s meant to serve. In the context of hiring, this can happen if training data over-represents certain groups — typically white men — while under-representing marginalized candidates.

    As a result, the AI may learn to favour the characteristics and experiences of the over-represented group while penalizing or overlooking those from underrepresented groups. For example, facial analysis technologies have been shown to have higher error rates for racialized individuals, particularly racialized women, because they are underrepresented in the data used to train these systems.




    Read more:
    Artificial intelligence can discriminate on the basis of race and gender, and also age


    3. Bias in feature selection. When designing AI systems, developers choose certain features, attributes or characteristics to be prioritized or weighed more heavily when the AI is making decisions. But these selected features can lead to unfair, biased outcomes and perpetuate pre-existing inequalities.

    For example, AI might disproportionately value graduates from prestigious universities, which have historically been attended by people from privileged backgrounds. Or, it might prioritize work experiences that are more common among certain demographics.

    This problem is compounded when the features selected are proxies for protected characteristics, such as zip code, which can be strongly related to race and socioeconomic status due to historical housing segregation.

    Bias in hiring algorithms raises serious ethical concerns and demands greater attention toward the mindful, responsible and inclusive use of AI.
    (Shutterstock)

    4. Lack of transparency. Many AI systems function as “black boxes,” meaning their decision-making processes are opaque. This lack of transparency makes it difficult for organizations to identify where bias might exist and how it affects hiring decisions.

    Without insight into how an AI tool makes decisions, it’s difficult to correct biased outcomes or ensure fairness. Both Amazon and HireVue faced this issue; users and developers struggled to understand how the systems assessed candidates and why certain groups were excluded.

    5. Lack of human oversight. While AI plays an important role in many decision-making processes, it should augment, rather than replace, human judgment. Over-reliance on AI without adequate human oversight can lead to unchecked biases. This problem is exacerbated when hiring professionals trust AI more than their own judgment, believing in the technology’s infallibility.
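    The first two failure modes above can be made concrete with a small sketch. The data and the deliberately naive "model" below are entirely hypothetical, chosen only to show how a system that learns from biased historical decisions reproduces their disparities:

    ```python
    # Illustrative sketch (hypothetical data): a model trained on biased
    # historical hiring decisions reproduces those decisions' group disparities.

    # Each record is (group, hired) from a fictional historical dataset in
    # which group "A" candidates were hired far more often than group "B".
    historical = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 20 + [("B", 0)] * 80

    def hire_rate(records, group):
        """Fraction of candidates in `group` who were hired."""
        in_group = [hired for g, hired in records if g == group]
        return sum(in_group) / len(in_group)

    # A deliberately naive "model" that simply learns each group's historical
    # hire rate and uses it as the probability of recommending a candidate.
    # It will recommend group A candidates 3.5x as often as group B candidates,
    # even though group membership was never an explicit input feature.
    learned = {g: hire_rate(historical, g) for g in ("A", "B")}
    print(learned)  # {'A': 0.7, 'B': 0.2}
    ```

    Real systems are far more complex, but the mechanism is the same: whatever disparity is baked into the historical labels becomes the pattern the model optimizes toward.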

    Overcoming algorithmic bias in hiring

    To mitigate these issues, companies must adopt strategies that prioritize inclusivity and transparency in AI-driven hiring processes. Below are some key solutions for overcoming AI bias:

    1. Diversify training data. One of the most effective ways to combat AI bias is to ensure training data is inclusive, diverse and representative of a wide range of candidates. This means including data from diverse racial, ethnic, gender, socioeconomic and educational backgrounds.

    2. Conduct regular bias audits. Frequent and thorough audits of AI systems should be conducted to identify patterns of bias and discrimination. This includes examining the algorithm’s outputs, decision-making processes and its impact on different demographic groups.

    It is important to actively involve human judgment in AI-driven decisions, particularly when making final hiring choices.
    (Shutterstock)

    3. Implement fairness-aware algorithms. Use AI software that incorporates fairness constraints and is designed to consider and mitigate bias by balancing outcomes for underrepresented groups. This can include integrating fairness metrics such as equal opportunity, modifying training data to show less bias and adjusting model predictions based on fairness criteria to increase equity.

    4. Increase transparency. Seek AI solutions that offer insight into their algorithms and decision-making processes to make it easier to identify and address potential biases. Additionally, make sure to disclose any use of AI in the hiring process to candidates to maintain transparency with your job applicants and other stakeholders.

    5. Maintain human oversight. To maintain control over hiring algorithms, managers and leaders must actively review AI-driven decisions, especially when making final hiring choices. Emerging research highlights the critical role of human oversight in safeguarding against the risks posed by AI applications. However, for this oversight to be effective and meaningful, leaders must ensure that ethical considerations are part of the hiring process and promote the responsible, inclusive and ethical use of AI.
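    A basic bias audit of the kind described above can be sketched in a few lines. The predictions and labels here are hypothetical, and the two checks — the "four-fifths" adverse-impact ratio used in U.S. employment-selection guidance, and the equal-opportunity gap mentioned under fairness metrics — are simplified versions of what audit tooling would compute:

    ```python
    # Minimal bias-audit sketch (hypothetical predictions and labels).
    # Each record is (group, qualified_label, model_recommended).
    records = [
        ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
        ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
    ]

    def selection_rate(recs, group):
        """Fraction of candidates in `group` the model recommended."""
        recommended = [r for g, _, r in recs if g == group]
        return sum(recommended) / len(recommended)

    def true_positive_rate(recs, group):
        """Fraction of *qualified* candidates in `group` the model recommended."""
        qualified = [r for g, y, r in recs if g == group and y == 1]
        return sum(qualified) / len(qualified)

    sr_a, sr_b = selection_rate(records, "A"), selection_rate(records, "B")
    tpr_a, tpr_b = true_positive_rate(records, "A"), true_positive_rate(records, "B")

    # Adverse-impact ratio: below 0.8 ("four-fifths rule") is a common red flag.
    impact_ratio = min(sr_a, sr_b) / max(sr_a, sr_b)
    # Equal-opportunity gap: difference in true-positive rates between groups.
    eo_gap = abs(tpr_a - tpr_b)

    print(f"selection rates: A={sr_a:.2f}, B={sr_b:.2f}")  # A=0.80, B=0.20
    print(f"adverse-impact ratio: {impact_ratio:.2f}")     # 0.25 -> flag
    print(f"equal-opportunity gap: {eo_gap:.2f}")          # 0.67
    ```

    In practice an audit would run over real decision logs, slice by every protected attribute and intersection available, and feed flagged results back to the human reviewers described in point 5.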

    Bias in hiring algorithms raises serious ethical concerns and demands greater attention toward the mindful, responsible and inclusive use of AI. Understanding and addressing the ethical considerations and biases of AI-driven hiring is essential to ensuring fairer hiring outcomes and preventing technology from reinforcing systemic bias.

    Mehnaz Rafi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. When AI plays favourites: How algorithmic bias shapes the hiring process – https://theconversation.com/when-ai-plays-favourites-how-algorithmic-bias-shapes-the-hiring-process-239471


  • MIL-OSI Global: Transparency and trust: How news consumers in Canada want AI to be used in journalism

    Source: The Conversation – Canada – By Nicole Blanchett, Associate Professor, Journalism, Toronto Metropolitan University

    Developing clear policies and principles that are communicated with audiences should be an essential part of any newsroom’s AI practice. (Shutterstock)

    When it comes to artificial intelligence (AI) and news production, Canadian news consumers want to know when, how and why AI is part of journalistic work. And if they don’t get that transparency, they could lose trust in news organizations.

    News consumers are so concerned about how the use of AI could impact the accuracy of stories and the spread of misinformation that a majority favour government regulation of how AI is used in journalism.

    These are some of our preliminary findings after surveying a representative sample of 1,042 Canadian news consumers, most of whom accessed news daily.

    This research is part of the Global Journalism Innovation Lab which researches new approaches to journalism. Those of us on the team at Toronto Metropolitan University are particularly interested in looking at news from an audience perspective in order to develop strategies for best practice.

    The industry has high hopes that the use of AI could lead to better journalism, but there is still a lot of work to be done in terms of figuring out how to use it ethically.

    Not everyone, for example, is sure the promise of time saved on tasks that AI can do faster will actually translate into more time for better reporting.

    We hope our research will help newsrooms understand audience priorities as they develop standards of practice surrounding AI, and prevent further erosion of trust in journalism.

    AI and transparency

    We found that a lack of transparency could have serious consequences for news outlets that use AI. Almost 60 per cent of those surveyed said they would lose trust in a news organization if they found out a story was generated by AI that they thought was written by a human, something also reflected in international studies.

    The overwhelming majority of respondents in our study, more than 85 per cent, want newsrooms to be transparent about how AI is being used. Three quarters want that to include labelling of content created by AI. And more than 70 per cent want the government to regulate the use of AI by news outlets.

    Organizations like Trusting News, which helps journalists build trust with audiences, now offer advice on what AI transparency should look like and say it’s more than just labelling a story — people want to know why news organizations are using AI.

    Audience trust

    Our survey also showed a significant contrast in confidence in news depending on the level of AI used. For example, more than half of respondents said they had high to very high trust in news produced just by humans. However, that level of trust dropped incrementally the more AI was involved in the process, to just over 10 per cent for news content that was generated by AI only.

    In questions where news consumers had to choose a preference between humans and AI to make journalistic decisions, humans were far preferred. For example, more than 70 per cent of respondents felt humans were better at determining what was newsworthy, compared to less than six per cent who felt AI would have better news judgement. Eighty-six per cent of respondents felt humans should always be part of the journalistic process.

    As newsrooms struggle to retain fractured audiences with fewer resources, the use of AI also has to be considered in terms of the value of the products they’re creating. More than half of our survey respondents perceived news produced mostly by AI with some human oversight as less worth paying for, which isn’t encouraging considering the existing reluctance to pay for news in Canada.

    This result echoes a recent Reuters study, where an average of 41 per cent of people across six countries saw less value in AI-generated news.

    Concerns about accuracy

    In terms of negative impacts of AI in a newsroom, about 70 per cent of respondents were concerned about accuracy in news stories and job losses for journalists. Two-thirds of respondents felt the use of AI might lead to reduced exposure to a variety of information. An increased spread of mis- and disinformation, something recognized widely as a serious threat to democracy, was of concern for 78 per cent of news consumers.

    Using AI to replace journalists was what made respondents most uncomfortable, and there was also less comfort with using it for editorial functions such as writing articles and deciding what stories to develop in the first place.

    There was far more comfort with using it for non-editorial tasks such as transcription and copy editing, echoing findings in previous research in Canada and other markets.

    We also gathered a lot of data unrelated to AI to get a sense of how Canadians are tapping into news and the news they’re tapping into. Politics and local news were the two most popular types of news, chosen by 67 per cent of respondents, even though there is less local news to consume due to extensive cuts, mergers and closures.

    A lot of people in our sample of Canadians, around 30 per cent, don’t actively look for news. They let it find them, something called passive consumption. And although this is proportionally higher in news consumers under 35, this isn’t just a phenomenon seen in the younger demographic. More than half of those who reported letting news find them were over 35 years old.

    Although smartphones are increasingly the primary access point for news for many consumers, including almost 70 per cent of those 34 and under and about 60 per cent of those between 35 and 44, television is where most news consumers in our study reported getting their journalism.

    Respondents in our survey were asked to select all of their points of news access. More than 80 per cent of participants chose some form of TV, with some respondents picking two TV formats, for example, cable TV and smart TV. Surprisingly to us, half of 18-24 year olds reported TV as an access point for news. For those 44 and under, it was more often through a smart TV, though. As shown in other Canadian studies, TV news still plays an important role in the media landscape.

    This is just a broad look at the data we have collected. Our analysis is just beginning. We’re going to dig deeper into how different demographics feel about the use of AI in journalism and how the use of AI might impact audience trust.

    We will also soon be launching our survey with research partners in the United Kingdom and Australia to find out if there are differences in perceptions of AI in the three countries.

    Even these initial results provide a lot of evidence that, as newsrooms work to survive in a destabilized market, using AI could have detrimental effects on the perceived value of their journalism. Developing clear policies and principles that are communicated with audiences should be an essential part of any newsroom’s AI practice in Canada.

    Nicole Blanchett receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and The Creative School at Toronto Metropolitan University.

    Charles H. Davis receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and has received funding from Toronto Metropolitan University.

    Mariia Sozoniuk works with the Explanatory Journalism Project which is supported by funding from The Creative School at Toronto Metropolitan University and the Social Sciences and Humanities Research Council of Canada (SSHRC).

    Sibo Chen receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and The Creative School at Toronto Metropolitan University.

    ref. Transparency and trust: How news consumers in Canada want AI to be used in journalism – https://theconversation.com/transparency-and-trust-how-news-consumers-in-canada-want-ai-to-be-used-in-journalism-240527


  • MIL-OSI Global: Too many kids face bullying rooted in social power imbalances — and educators can help prevent this

    Source: The Conversation – Canada – By Deinera Exner-Cortens, Associate Professor of Psychology and Tier 2 Canada Research Chair (Childhood Health Promotion), University of Calgary

    Educators can help kids understand the difference between using power negatively and positively, and encourage its positive use to build respectful environments. (Allison Shelley/The Verbatim Agency for EDUimages), CC BY-NC

    Being at school among peers and friends can be exciting and positive for many children and youth. But, too many kids in Canada face the reality of being bullied because of some aspect of who they are.

    This type of bullying — known as identity-based or bias-based bullying — is extremely harmful to kids’ sense of belonging at school, and has negative effects on their physical and mental health, their academic achievement and their social well-being.

    As psychology researchers and directors of the Promoting Relationships and Eliminating Violence Network (PREVNet), we developed accessible learning modules for educators so they can learn to recognize identity-based bullying, and intervene to stop it.

    While explicitly developed with education settings in mind, these modules may also be helpful for parents and other caring adults who influence children’s peer relationships. They will be available in French by the end of the year.

    Harmful to kids’ well-being

    Bullying has several key elements that make it so harmful to kids’ well-being.

    Bullying is unwanted, aggressive behaviour that is often repeated over time. These behaviours can be verbal, social, physical, sexual and/or cyber in nature.

    It happens in relationships where there is a power imbalance. In other words, the child who bullies holds more power than the child who experiences the bullying. In the case of identity-based bullying, this power imbalance is rooted in the types of power differences we see at a larger societal level.

    Bullying behaviours can be verbal, social, physical or sexual, and can take place in person or online.
    (Shutterstock)

    Social power dynamics, identity-based bullying

    It is well-documented that Indigenous youth, Black youth, 2SLGBTQIA+ youth and youth with disabilities experience discrimination in Canada.

    But why? Put simply, these experiences of discrimination are rooted in Canada’s settler-colonial history, which institutionalized racialized, class-based and colonial norms and forms of social privilege. These institutionalized forms of privilege resulted in greater political, social and economic power being granted to groups as they more closely aligned with these norms, with the greatest power allotted to those at the top of this “civilized” ideal: people who are white (western European), Christian, wealthy, cisgender, heterosexual, settler men.




    Read more:
    Rethinking masculinity: Teaching men how to love and be loved


    Groups who have been granted unearned power and privilege through these systems work to maintain their power through things like stigma, discrimination and other forms of oppression, while groups marginalized as “other” — less aligned with these dominant norms — continue to experience and hold less power across the socio-political-economic spectrum.

    And, youth who hold more than one socially marginalized identity often experience even greater discrimination.

    Schools as societal institutions

    Since schools are societal institutions, the discrimination and other forms of oppression that are used by dominant groups to maintain power in larger society are mirrored within schools through identity-based bullying.

    With identity-based bullying, the power imbalance that is a key feature of bullying behaviour is rooted in these larger social power imbalances.

    Because we all hold multiple social identities, a social power perspective also explains how these identities interact. Take, for example, a situation where a white, queer student is bullying a Black, queer student. Although both students are marginalized based on their queer identities, the white student still benefits from the power and privilege afforded to whiteness. So, this situation still reflects a power dynamic based on social identities.




    Read more:
    Racism contributes to poor attendance of Indigenous students in Alberta schools: New study


    Educator interventions

    Identity-based bullying is likely an issue in your neighbourhood school. In data we collected from 1,200 youth across Canada in 2023, one in three reported identity-based bullying because of their body weight, race or skin colour, disability, religion, sexual orientation and/or gender identity.

    Identity-based bullying also impacts kids’ experiences at school. For example, a recent study from the United States found that youth who experienced multiple forms of identity-based bullying were the most likely to report avoiding class or activities. This study also found that if these same students felt more supported by adults at their school, they reported less school avoidance. This means caring educators are a protective factor for youth experiencing identity-based bullying.

    Our research has proposed ways educators specifically can prevent identity-based bullying in their schools:

    1) Educators (or other adults engaged in a school community) could examine their school board policy on bullying, and make sure it specifically mentions the role of social identities. If it doesn’t, educators can work to change it. A great example of naming identities when defining bullying can be seen in the Northwest Territories’ Education Act.

    2) Be self-reflective and aware. As a first step, educators can explore their own unconscious biases and reflect on how they may be influencing the classroom climate.

    3) Be a positive role model. Students look to adults about how to behave. Celebrate the strengths of all students and role model how to be respectful and inclusive. Also role model how to helpfully intervene when harmful behaviour occurs.

    4) Actively create opportunities for positive peer dynamics in the classroom. Be intentional about creating groups to ensure that students who are excluded are given the opportunity to interact and work with students who are kind and prosocial, and who may have similar interests and abilities.

    Educators can teach strategies that help all students learn how to be positive allies.
    (Shutterstock)

    5) Empower all students to intervene safely and effectively. Actively educate students on how to recognize identity-based bullying and provide strategies that will help all students to be positive allies.

    6) Work at classroom, school and community levels to create a welcoming, inclusive environment for all children. For educators, this can include things like conducting curriculum review, actively incorporating learning about power, privilege and oppression, creating and supporting clubs like gay-straight alliances and working to create a trauma-informed classroom.

    These strategies can be consolidated and deepened through engaging with our new anti-bullying training modules, which focus specifically on identity-based bullying.

    In these ways, educators and other caring adults can help kids understand the difference between using power negatively and positively, and encourage its positive use to build inclusive, respectful and safe environments for all.

    Deinera Exner-Cortens receives funding from the Public Health Agency of Canada and the Canada Research Chairs program. She is also the director of PREVNet Inc, a registered charitable organization in Canada.

    Elizabeth (Liz) Baker receives funding from the Public Health Agency of Canada and Alberta’s Children Services. She is affiliated with PREVNet as Executive Director.

    Wendy Craig receives funding from the Public Health Agency of Canada. She is the Scientific Co-Director of PREVNet (Promoting Relationships and Eliminating Violence Network).

    ref. Too many kids face bullying rooted in social power imbalances — and educators can help prevent this – https://theconversation.com/too-many-kids-face-bullying-rooted-in-social-power-imbalances-and-educators-can-help-prevent-this-237613


  • MIL-OSI Global: What you need to know about cold and flu season

    Source: The Conversation – Canada – By Jennifer Guthrie, Assistant Professor of Microbiology and Immunology, Western University

    Flu shots are recommended for most Canadians over six months old. (Shutterstock)

    As the fall months settle in, Canadians are being urged to take precautions against the upcoming flu season.

    Flu season in Canada typically peaks between December and February, but the virus can circulate much earlier. Public health officials are advocating for early vaccination, emphasizing that the annual flu vaccine is the most effective way to protect against infection and reduce the severity of illness.

    Clinics across Canada offer flu shots free of charge.

    Influenza

    Influenza, commonly known as the flu, is a respiratory illness caused by influenza viruses that spread easily from person to person. These viruses mainly affect the nose, throat and lungs. Flu symptoms typically include fever, chills, muscle aches, cough, congestion, runny nose, headaches and fatigue.

    Unlike the common cold, which often develops slowly, the flu tends to hit suddenly and can lead to severe complications like pneumonia, bronchitis and even death, particularly in high-risk groups such as young children, seniors over 65, pregnant individuals, and those with chronic conditions like asthma, diabetes or heart disease.

    Influenza spreads mainly through droplets when an infected person coughs, sneezes or talks. These droplets can land in the mouths or noses of people nearby, or they can linger on surfaces where the virus can survive for up to 48 hours. Preventive measures such as handwashing, mask-wearing and staying home when symptomatic help reduce the spread of the virus.

    How the flu vaccine works

    Each year, flu vaccines are updated to protect against the influenza viruses that research indicates will be most common during the upcoming season. The flu shot contains inactivated or weakened influenza viruses, which cannot cause the flu but help the immune system develop antibodies. These antibodies protect against infection when exposed to live flu viruses.

    The vaccine typically takes about two weeks after administration for immunity to build up, which is why public health officials recommend getting vaccinated in the fall, before flu rates start to rise. This gives individuals enough time to develop immunity before influenza becomes more widespread.

    Can you get flu and COVID-19 vaccines together?

    Each year, flu vaccines are updated to protect against the influenza viruses that research indicates will be most common during the upcoming season.
    (Shutterstock)

    Public health experts have confirmed that it is safe to receive the flu vaccine and the COVID-19 vaccine at the same time. Doing so can provide protection against both illnesses and reduce the chances of severe complications from either virus. Administering both vaccines during the same visit is a convenient way to ensure you’re protected for the season, especially as COVID-19 continues to circulate alongside influenza.

    Benefits of the flu shot

    One of the key benefits of flu vaccination is that it significantly reduces the risk of severe illness, hospitalization and death from the flu. While flu vaccines aren’t 100 per cent effective at preventing infection, they greatly lessen the severity of the illness and reduce the spread of the virus in the community. This is especially important for protecting high-risk groups like seniors, children, pregnant people and individuals with chronic health conditions.

    Additionally, widespread flu vaccination helps prevent the health-care system from becoming overwhelmed, especially in a year when other respiratory viruses like respiratory syncytial virus (RSV) and COVID-19 are still circulating. By reducing the overall number of flu-related hospitalizations, vaccines also free up health-care resources for other urgent needs.

    Why get vaccinated every year?

    One of the unique challenges of influenza is that the virus mutates constantly. Because of these frequent changes, immunity from last year’s vaccine won’t provide full protection this season. This is why the flu vaccine is updated annually to match the most prevalent strains of the virus.

    Even if a person received a flu shot the previous year, it’s important to get vaccinated again to stay protected against new viral strains circulating in the population. Flu vaccines are reformulated each year based on global surveillance data collected by organizations like the World Health Organization (WHO) and the U.S. Centers for Disease Control and Prevention (CDC).

    Misconceptions about the flu vaccine

    Despite clear benefits, misconceptions about the flu shot continue to contribute to low vaccination rates.

    Some people believe that the flu vaccine can cause the flu, but this is a myth. The inactivated viruses in the flu vaccine cannot cause illness. After receiving the vaccine, some people may experience mild side-effects like soreness at the injection site or a low-grade fever, but these symptoms are short-lived and far less severe than a full-blown flu infection.

    Another misconception is that the flu shot is not necessary for healthy adults. While healthy people may have a lower risk of severe flu complications, they can still spread the virus to more vulnerable individuals, such as young children, seniors or immunocompromised family members. Getting vaccinated helps protect both the individual and the community through herd immunity.

    Jennifer Guthrie does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What you need to know about cold and flu season – https://theconversation.com/what-you-need-to-know-about-cold-and-flu-season-240962

    MIL OSI – Global Reports

  • MIL-OSI Global: Evacuating in disasters like Hurricane Milton isn’t simple – there are reasons people stay in harm’s way

    Source: The Conversation – USA – By Carson MacPherson-Krutsky, Research Associate, Natural Hazards Center, University of Colorado Boulder

    Evacuation is more difficult for people with health and mobility issues. Ted Richardson/For The Washington Post via Getty Images

    As Hurricane Milton roared ashore near Sarasota, Florida, tens of thousands of people were in evacuation shelters. Hundreds of thousands more had fled coastal regions ahead of the storm, crowding highways headed north and south as their counties issued evacuation orders.

    But not everyone left, despite dire warnings about a hurricane that had been one of the strongest on record two days earlier.

As Milton’s rain and storm surge flooded neighborhoods late on Oct. 9, 2024, 911 calls poured in. In Tampa’s Hillsborough County, more than 500 people had to be rescued, including residents of an assisted living community and families trapped in a flooding home after a tree crashed through the roof at the height of the storm.

    In Plant City, 20 miles inland from Tampa, at least 35 people had been rescued by dawn, City Manager Bill McDaniel said. While the storm wasn’t as extreme as feared, McDaniel said his city had flooded in places and to levels he had never seen. Traffic signals were out. Power lines and trees were down. The sewage plant had been inundated, affecting the public water supply.

    Evacuating might seem like the obvious move when a major hurricane is bearing down on your region, but that choice is not always as easy as it may seem.

    Evacuating from a hurricane requires money, planning, the ability to leave and, importantly, a belief that evacuating is better than staying put.

    I recently examined years of research on what motivates people to leave or seek shelter during hurricanes as part of a project with the Federal Emergency Management Agency and the Natural Hazards Center. I found three main reasons that people didn’t leave.

    Evacuating can be expensive

    Evacuating requires transportation, money, a place to stay, the ability to take off work days ahead of a storm and other resources that many people do not have.

    With 1 in 9 Americans facing poverty today, many have limited evacuation options. During Hurricane Katrina in 2005, for example, many residents did not own vehicles and couldn’t reach evacuation buses. That left them stranded in the face of a deadly hurricane. Nearly 1,400 people died in the storm, many of them in flooded homes.

    When millions of people are under evacuation orders, logistical issues also arise.

    Two days ahead of landfall, Milton was a Category 5 hurricane. About 5 million people were under evacuation orders, and highways were crowded.

    Gas shortages and traffic jams can leave people stranded on highways and unable to find shelter before the storm hits. This happened during Hurricane Floyd in 1999 as 2 million Floridians tried to evacuate.

    People who experienced past evacuations or saw news video of congested highways ahead of Hurricane Milton might not leave for fear of getting stuck.

    Health, pets and being physically able to leave

    The logistics of evacuating are even more challenging for people who are disabled or in nursing homes. Additionally, people who are incarcerated may have no choice in the matter – and the justice system may have few options for moving them.

    Evacuating nursing homes, people with disabilities or prison populations is complex. Many shelters are not set up to accommodate their needs. In one example during Hurricane Floyd, a disabled person arrived at a shelter, but the hallways were too narrow for their wheelchair, so they were restricted to a cot for the duration of their stay. Moving people whose health is fragile, and doing so under stressful conditions, can also worsen health problems, leaving nursing home staff to make difficult decisions.

    At least 700 people stayed in chairs or on air mattresses at River Ridge Middle/High School in New Port Richey, Fla., during Hurricane Milton.
    AP Photo/Mike Carlson

    But failing to evacuate can also be deadly. During Hurricane Irma in 2017, seven nursing home residents died in the rising heat after their facility lost power near Fort Lauderdale, Florida. In some cases, public water systems are shut down or become contaminated. And flooding can create several health hazards, including the risk of infectious diseases.

    In a study of 291 long-term care facilities in Florida, 81% sheltered residents in place during the 2004 hurricane season because they had limited transportation options and faced issues finding places for residents to go.

    Some shelters allow small pets, but many don’t. This high school-turned-shelter in New Port Richey, Fla., had 283 registered pets.
    AP Photo/Mike Carlson

    People with pets face another difficult choice – some choose to stay at home for fear of leaving their pet behind. Studies have found that pet owners are significantly less likely to evacuate than others because of difficulties transporting pets and finding shelters that will take them. In destructive storms, it can be days to weeks before people can return home.

    Risk perception can also get in the way

    People’s perceptions of risk can also prevent them from leaving.

A series of studies show that women and minorities take hurricane risks more seriously than other groups and are more likely to evacuate or go to shelters. One study found that women are almost twice as likely as men to evacuate when given a mandatory evacuation order.

    If people have experienced a hurricane before that didn’t do significant damage, they may perceive the risks of a coming storm to be lower and not leave.

    Video from across Florida after Hurricane Milton shows flooding around homes, trees down and other damage. At least 12 people died in the storm, and more than 3 million homes lost power.

    In my review of research, I found that many people who didn’t evacuate had reservations about going to shelters and preferred to stay home or with family or friends. Shelter conditions were sometimes poor, overcrowded or lacked privacy.

    People had fears about safety and whether shelter environments could meet their needs. For example, religious minorities were not sure whether shelters would be clean, safe, have private places for religious practice, and food options consistent with faith practices. Diabetics and people with young children also had concerns about finding appropriate food in shelters.

    How to improve evacuations for the future

    There are ways leaders can reduce the barriers to evacuation and shelter use. For example:

    • Building more shelters able to withstand hurricane force winds can create safe havens for people without transportation or who are unable to leave their jobs in time to evacuate.

    • Arranging more shelters and transportation able to accommodate people with disabilities and those with special needs, such as nursing home residents, can help protect vulnerable populations.

    • Opening shelters to accommodate pets with their owners can also increase the likelihood that pet owners will evacuate.

• Public education can be improved so people know their options. Clearer risk communication on how these storms differ from past ones and what people are likely to experience can also help people make informed decisions.

    • Being prepared saves lives. Many areas would benefit from better advance planning that takes into account the needs of large, diverse populations and can ensure populations have ways to evacuate to safety.

    This article has been updated with additional details about Hurricane Milton’s damage.

    Carson MacPherson-Krutsky works for the Natural Hazards Center (NHC) at the University of Colorado Boulder. She receives grant and contract funding for her work at NHC through the National Science Foundation, the U.S. Army Corps of Engineers, the Federal Emergency Management Agency, and other funders.

    ref. Evacuating in disasters like Hurricane Milton isn’t simple – there are reasons people stay in harm’s way – https://theconversation.com/evacuating-in-disasters-like-hurricane-milton-isnt-simple-there-are-reasons-people-stay-in-harms-way-240869

    MIL OSI – Global Reports

• MIL-OSI Global: Gangs’ stories: Soraya, the ‘real’ Queen of the South in Nicaragua

    Source: The Conversation – France – By Dennis Rodgers, Research Professor, Anthropology and Sociology, Graduate Institute – Institut de hautes études internationales et du développement (IHEID)

    For the past five years, the GANGS project, a European Research Council-funded project led by Dennis Rodgers, has been studying global gang dynamics in a comparative perspective. When understood in a nuanced manner that goes beyond the usual stereotypes and Manichean representations, gangs and gangsters arguably constitute fundamental lenses through which to think about and understand the world we live in.

Dennis Rodgers describes how “Soraya” became involved in drug trafficking in Luis Fanor Hernández, a poor neighbourhood in Managua, the capital of Nicaragua. Known locally as “la Reina del Sur” – the “Queen of the South” – her story shows how drug trafficking is a highly gendered activity that reinforces macho violence and patriarchal dynamics of domination.


    Seated on a slightly tatty, overstuffed sofa, I watch as Soraya meticulously manicures Wanda’s fingernails. Her face a picture of tense concentration, she begins by carefully tracing red and white stripes along the distal bands of four out of five fingers on each hand, before then delicately dotting small flowers on each index.

    Wanda’s nails.
D. Rodgers, Provided by the author

    We are in the barrio Luis Fanor Hernández, a poor urban neighbourhood in Managua, the capital city of Nicaragua, in Central America. I’ve been carrying out longitudinal ethnographic research on gang dynamics there since 1996. I returned in February 2020 to, among other things, interview Wanda about the way that the local drug trade – and particularly her husband Bismarck’s involvement – had impacted her life over the years. Wanda is one of my key research interlocutors in the barrio, whom I’ve known for over 25 years.

    “I can come back to do our interview later,” I say to Wanda.

    “No, no, it’s fine, Dennis,” she replies. “Soraya’s almost finished, and in any case, she’s de confianza, so why don’t we just get started? It’s not as if she doesn’t know about Bismarck and his drug dealing… But you know what? If you want a female perspective on drugs, you should really interview her, not me – I’m just the wife of an ex-drug dealer, but she’s la Reina del Sur!”

    “The Queen of the South?”, I ask, throwing Soraya a querying glance. Looking up from her manicuring labours, she smirks sardonically before saying, “You know, Dennis, like in the telenovela, about that Mexican woman who becomes a narcotraficante (drug dealer).”

    “Yes, I get that, I know the series, but she became a powerful drug dealer, and from what I know you’re not a big-time narco, are you?”

    “Nah, I was just a mulera (street dealer), but people call me la Reina del Sur, because I’m strong-willed and independent, just like the real Reina.”

    Chuckling, I reply, “You do know the Reina isn’t real, yes?”, before then asking her more earnestly, “but would you be willing to do an interview with me about all that, though?” Soraya ponders my request for a few seconds before replying brusquely, “dale, but not today, I’ve got an errand to run. I’ll meet you here at the same time tomorrow”.

    Without waiting for an answer, Soraya then dots a final petal on Wanda’s left index nail, packs up her files and polish, and leaves Wanda and me to our interview.

    The gendered nature of drug dealing in Latin America

Drug trafficking has become a burning issue in Latin America over the last two decades.

    Every year, this criminal activity results in thousands of violent deaths and tens of thousands of health-related mortalities. Drug trafficking also has profoundly negative effects on economies, political systems, and ecologies in the region.

    Numerous studies have traced the forms of production, the actors involved, the routes and flows, the nature of local and international markets, and the profound but variable social impact that drugs can have.

    One point on which most studies agree is that drug trafficking is a predominantly male activity. Fewer women than men are involved, and they are generally seen through the prism of particular categorisations: either as victims, suffering direct and indirect forms of violence as a result of being the mothers, wives or girlfriends of drug traffickers, or as emancipated and liberated individuals whose involvement in trafficking challenges gender-based structures of power and inequality.

    These kinds of binary representations have long seemed simplistic to me. The interviews conducted with Wanda during the course of my years of research in barrio Luis Fanor Hernández have highlighted how the image of the drug dealer’s wife as a victim of her husband’s trafficking is a caricature. The same was also true of the interview I conducted with Soraya about her career as a drug dealer, which challenged the notion that drug dealing could be emancipating for a woman.

    “Pac-Man” in the barrio

Soraya was born in barrio Luis Fanor Hernández in 1987. Her mother, Gladys, was from the neighbourhood, while her father, Jorge, was from Villa Cuba, a neighbourhood in the north-east of Managua. They had an on-and-off relationship for the first decade of Soraya’s life, meaning that she moved several times between her father’s home in Villa Cuba and her maternal family home in barrio Luis Fanor Hernández. Gladys and Jorge split up for good when Soraya was 13 years old, after Gladys stabbed Jorge with a kitchen knife while defending Soraya, whom he was beating.

    “My mother and I moved back [to barrio Luis Fanor Hernández] after we left my father. There were five of us in the house – me, my mother, my aunt, my cousin, and my cousin’s husband. You know him, Dennis, he’s the one they call ‘Pac-man’ [because of his great appetite], so you know he’s a narcotraficante [drug dealer]. My aunt and my cousin would help him from time to time with his bisnes, but this was when the drug trade was increasing, and he had lots to do, and they started asking me to ‘do them a favour’, to help them. At first it was small things, you know, moving drugs or money from one place to another, or helping them ‘cook’ cocaine into crack, but after a while I started selling for him as a mulera, in the streets, which I could do well because the police were less suspicious of me, as a young girl, you know.”

    Crack doses ready for sale.
Dennis Rodgers, Provided by the author

    Neither the way nor the reasons why Soraya became involved in trafficking can be described as particularly emancipatory. Rather, they highlight the way in which drug trafficking in fact responds to very gendered and “intimate” logics. On the one hand, Soraya’s status as a young woman made her useful to her cousin’s husband in carrying out certain drug trafficking operations without attracting suspicion in a wider macho Nicaraguan context, but on the other hand, her family ties to “Pac-Man” also made it difficult for her to refuse to help him.

    Enduring gendered oppression

    Soraya’s involvement in drug trafficking was also profoundly affected by her relationship with Elvis Gomez, with whom she became involved at the age of 15 (when Elvis was 23). Elvis was a failed drug dealer. He had tried unsuccessfully to become involved in drug trafficking several times in the past, and once he was in a relationship with Soraya, he forced her to let him work with her so that he could benefit from the financial windfall that this activity generated for those involved in barrio Luis Fanor Hernández.

    The kind of house a successful drug dealer such as Soraya might have lived in in barrio Luis Fanor Hernández in 2003 (not her real house).
Dennis Rodgers, Provided by the author

    One of the reasons Elvis had failed to establish himself as a drug dealer was that he was a drug user, and Soraya often had to cover for him when he consumed the drugs that “Pac-Man” gave him instead of selling them, repaying his loss of earnings through the profits of her own drug dealing.

    In 2010, Elvis used the savings that Soraya had accumulated from her drug dealing to finance his emigration to the United States. He told her he would bring her over later, but he left with another woman, Yulissa, with whom he had been involved simultaneously, along with their daughter. He also took Ramses, the son he had with Soraya in 2007, and cut off all contact with Soraya. She told me poignantly, “I was going crazy, texting him every day, telling him to let me talk to my son, and telling him to bring him back to Nicaragua, that I wanted him to live with me”. He only got back in touch in 2016, to insist that Soraya divorce him and formally transfer legal custody of Ramses to him, which she eventually did, in exchange for being able to be in regular contact with her son.

    This episode clearly illustrates how Soraya’s trafficking activities inscribed themselves within wider structures and practices of gender inequality and male domination. Nicaragua remains a country marked by patriarchy and machismo, something that was strikingly reflected in the law banning abortion under all circumstances passed in 2008, or the adoption of law 779 on gender violence in 2012, which defines all such instances as “domestic violence” that must be resolved through mediation rather than the penal system.

In the end, although she was known as la Reina del Sur, this nickname had nothing to do with Soraya holding a position of dominance in the drug trade in barrio Luis Fanor Hernández. Indeed, the vast majority of the (few) women drug dealers in the neighbourhood were at the bottom of the business pyramid.

    Beautician

    Soraya says she stopped selling drugs in 2012, and that she is now a full-time beautician. Several current drug dealers in barrio Luis Fanor Hernández have, however, told me that she continues to deal and that her manicure business provides a convenient cover.

    The fact that Soraya earns no more than 15 to 20 dollars a week from her manicure business could clearly be interpreted as suggesting that this might be the case. Soraya firmly denies it, however, and I believe her. Not only does she take on various odd jobs to make ends meet for herself and her ageing mother, she also lives in very humble conditions. Her current home, in particular, is much less flamboyant than any of those in which she lived in the past.

    The type of house that Soraya lives in today (not her real house).
Dennis Rodgers, Provided by the author

    When compared to the trajectories of male traffickers in the barrio – many of whom have greatly benefited, and continue to benefit, from their involvement in trafficking even after they have stopped dealing – it can be argued that Soraya’s involvement in drug trafficking has enhanced patriarchal and macho constraints, contributing to her current situation.

    At the same time, while Soraya’s life has unquestionably been marked by a constant struggle in the face of different forms of domination and oppression, she also frequently and persistently seeks to confront and challenge her predicament. This is perhaps partly linked to her involvement in the drug trade, as the WhatsApp exchange I had with Soraya on 8 March 2021 clearly suggested. She had uploaded a picture of herself drinking at a nightclub, overlaying it with the following text:

    “Today is International Women’s Day, and we celebrate the power of independent and autonomous women! We are beautiful, we are strong, and we can do whatever we want!”.

    I wrote to Soraya to wish her a happy International Women’s Day, and also to tell her that I’d started to write her biography “about when she was la Reina del Sur”. A few minutes later she replied – “por siempre La Reina!” (“forever the Queen!”).

Dennis Rodgers received an ERC Advanced Grant (no. 787935) from the European Research Council (https://erc.europa.eu) for a project entitled “Gangs, Gangsters, and Ganglands: Towards a Global Comparative Ethnography” (GANGS).

    ref. Gangs’stories: Soraya, the ‘real’ Queen of the South in Nicaragua – https://theconversation.com/gangsstories-soraya-the-real-queen-of-the-south-in-nicaragua-233837

    MIL OSI – Global Reports

• MIL-OSI Global: Neanderthal remains found in France reveal there were not one, but at least two lineages of late Neanderthals in Europe

    Source: The Conversation – France – By Ludovic Slimak, Archéologue, penseur et chercheur au CNRS, Université de Toulouse III – Paul Sabatier

31 of Thorin’s 34 teeth were found, making his the most complete dentition ever recovered from a Neanderthal. Ludovic Slimak, Provided by the author

    The prevailing narrative of how humanity came about seemed straightforward enough: in Europe, the last Neanderthals bowed out as Homo sapiens began arriving on the continent around 40,000 to 45,000 years ago. Neanderthals were thought to be part of a single, genetically homogeneous population, spread across Spain, France, Croatia, Belgium, and Germany. Genetic studies supported this view, suggesting a uniform population that would eventually give way to the newcomers, Homo sapiens. In just a few millennia — between 45,000 and 42,000 years ago — the brief cohabitation of these two species in Europe ended with the replacement of Neanderthals. The explanation was elegant and simple – perhaps a little too simple.

    A new lineage of Neanderthals

Our research published in Cell Genomics on 11 September complicates this picture, revealing that there were not one, but at least two lineages of Neanderthals, following genetic analysis of body remains found in the Mandrin Cave in southeastern France. The study in Cell Genomics, which I co-led with Tharsika Vimala and Martin Sikora, population geneticists at the University of Copenhagen in Denmark, as well as Andaine Seguin-Orlando, a paleogenomicist at the University of Toulouse, is the culmination of nearly ten years of research leading to the discovery of France’s first Neanderthal body since 1978. We have chosen to call him Thorin after the writings of J.R.R. Tolkien, since Thorin was one of the last dwarf kings in Tolkien’s lore. Fittingly, the Thorin of the Mandrin Cave is believed to be one of the last Neanderthals.

His remains come from one of the most recent occupation layers of the Mandrin Cave. We discovered his first teeth in 2015, lying on the ground at the cave’s entrance, barely covered by a few leaves. Although the teeth were initially exposed, they were embedded in fragile sand, making the excavation delicate. The slightest brush stroke risked displacing the precious remains, making it difficult to determine their precise position in the ground. As the head of research at Mandrin Cave, I decided we would excavate the body with tweezers. Grain by grain, our team worked painstakingly for two to three months each year – a process that has lasted for nine years and is still ongoing. This Herculean field effort allowed the recovery of the tiniest remains, which were carefully documented in their original positions. Through three-dimensional mapping, the team has reconstructed the exact location of the remains in the ground.

    Meet Thorin

So far, 31 teeth (Thorin had 34, making him the first Neanderthal ever found with supernumerary molars), along with the jawbone, fragments of the skull, phalanges and thousands of tiny bones have been discovered. The excavation process here requires remarkable patience; after nine years of effort, we have only managed to clear a small window of about 50 cm by 30 cm. Numerous remains of this body are likely to gradually emerge in the coming years.

    Our study shows that Thorin’s population diverged significantly from other Neanderthals in Europe over more than 50,000 years. Unlike most late Neanderthals, who display genetic homogeneity, Thorin’s lineage remained genetically distinct from 105,000 years ago until their extinction.

This raises the question: How could human populations remain isolated for tens of thousands of years, despite living within a two-week walking distance of each other? This is the challenge Thorin presents us with. It implies evolutionary, cultural, and social processes that seem unimaginable if we try to apply them to Sapiens populations as we understand them through cultural anthropology, history, and archaeology. Something appears to profoundly differentiate the ways of being in the world of Neanderthals and Sapiens, something far deeper than mere cultural or territorial issues. It confronts us directly with the enigma of Neanderthals and, quite possibly, with our own inability to understand these ancient species.

    Thorin’s peers and other ghosts

Strikingly, we found that Thorin is not the only one in his lineage: genetic analysis reveals links to another Neanderthal discovered over 1,700 kilometers away, in Gibraltar. This Neanderthal, nicknamed Nana, was thought to be an ancient individual who lived 80,000 to 100,000 years ago. However, the study in Cell Genomics reveals that Nana and Thorin lived during the same period – within the last millennia of Neanderthal existence. This close genetic proximity suggests that Nana and Thorin belonged to the same population of late Neanderthals, a population that had no further exchanges with the classic European Neanderthals from around 105,000 years ago until their astonishing extinction 42,000 years ago.

Our study also suggests the existence of a “ghost” Neanderthal lineage – another population that roamed Europe at the same time, yet remains unknown. This implies that there were other Neanderthal populations in Europe in relatively recent periods that belonged neither to the classic Neanderthals nor to Thorin’s population. Genetics can nevertheless identify moments when Thorin’s ancestors episodically exchanged genes with these ghost populations, which remain largely unknown to archaeology. A fascinating story then slowly begins to emerge, in which Neanderthals were not a monolithic block but a set of distinct populations that developed only rare – and sometimes no – exchanges among themselves.

    Rewriting everything we know about early humanity

The revelation of additional Neanderthal lineages is the latest discovery to prompt us to radically rethink our understanding of early humanity. In 2022, after 32 years of archaeological research, our team had already revealed the existence of a first Sapiens migration into European territory 10,000 to 12,000 years before the first migrations previously recognized. In the following year, we released three papers questioning our conceptions of this singular moment in human history: redefining the timing of these populations’ arrival, showing that they had mastered advanced technology such as the bow and arrow, tracing their steps back to the Mediterranean Levant, and proposing a profound redefinition of the entire historical structure of this pivotal moment in European history.

The latest discovery of Thorin’s remains, which I began to unveil in The Naked Neanderthal, poses countless questions. Did Neanderthals die out like the dinosaurs, following a natural upheaval that swept away their entire world? Theories invoking climate change, volcanic eruptions, cosmic radiation and devastating epidemics have flourished in recent years. To understand how Sapiens replaced Neanderthals, we must, above all, understand what Neanderthals were. And what Sapiens is. It is my conviction that the nature of the two creatures deeply eludes us.

    The research continues, and, as more discoveries are made, the story only deepens.

Ludovic Slimak does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.

    ref. Neanderthal remains found in France reveals there were not one, but at least two lineages of late Neanderthals in Europe – https://theconversation.com/neanderthal-remains-found-in-france-reveals-there-were-not-one-but-at-least-two-lineages-of-late-neanderthals-in-europe-238606

    MIL OSI – Global Reports