Category: Universities

  • MIL-OSI Submissions: AI policies in Africa: lessons from Ghana and Rwanda

    Source: The Conversation – Africa (2) – By Thompson Gyedu Kwarkye, Postdoctoral Researcher, University College Dublin

    Artificial intelligence (AI) is increasing productivity and pushing the boundaries of what’s possible. It powers self-driving cars, social media feeds, fraud detection and medical diagnoses. Touted as a game changer, it is projected to add nearly US$15.7 trillion to the global economy by the end of the decade.

    Africa is positioned to use this technology in several sectors. In Ghana, Kenya and South Africa, AI-led digital tools in use include drones for farm management, X-ray screening for tuberculosis diagnosis, and real-time tracking systems for packages and shipments. All these are helping to fill gaps in accessibility, efficiency and decision-making.

    However, AI also introduces risks. These include biased algorithms, resource and labour exploitation, and e-waste disposal. The lack of a robust regulatory framework in many parts of the continent compounds these challenges, leaving vulnerable populations exposed to exploitation. Limited public awareness and infrastructure further complicate the continent’s ability to harness AI responsibly.

    What are African countries doing about it?
    To answer this, my research mapped the AI policies Ghana and Rwanda have in place and investigated how those policies were developed. I looked for shared principles and for differences in approach to governance and implementation.

    The research shows that AI policy development is not a neutral or technical process but a profoundly political one. Power dynamics, institutional interests and competing visions of technological futures shape AI regulation.

    I conclude from my findings that AI’s potential to bring great change in Africa is undeniable. But its benefits are not automatic. Rwanda and Ghana show that effective policy-making requires balancing innovation with equity, global standards with local needs, and state oversight with public trust.

    The question is not whether Africa can harness AI, but how and on whose terms.

    How they did it

    Rwanda’s National AI Policy emerged from consultations with local and global actors. These included the Ministry of ICT and Innovation, the Rwanda Space Agency, and NGOs such as The Future Society and GIZ’s FAIR Forward initiative. The resulting policy framework is in line with Rwanda’s goals for digital transformation, economic diversification and social development. It incorporates international best practices such as ethical AI, data protection and inclusive AI adoption.

    Ghana’s Ministry of Communication, Digital Technology and Innovations conducted multi-stakeholder workshops to develop a national strategy for digital transformation and innovation. Start-ups, academics, telecom companies and public-sector institutions came together and the result is Ghana’s National Artificial Intelligence Strategy 2023–2033.

    Both countries have set up, or plan to set up, Responsible AI offices, in line with global best practices for ethical AI. Rwanda focuses on local capacity building and data sovereignty, reflecting the country’s post-genocide emphasis on national control and social cohesion. Ghana’s proposed office, meanwhile, focuses on accountability, though its structure is still under legislative review.

    Ghana and Rwanda have adopted globally recognised ethical principles like privacy protection, bias mitigation and human rights safeguards. Rwanda’s policy reflects Unesco’s AI ethics recommendations and Ghana emphasises “trustworthy AI”.

    Both policies frame AI as a way to reach the UN’s Sustainable Development Goals. Rwanda’s policy targets applications in healthcare, agriculture, poverty reduction and rural service delivery. Similarly, Ghana’s strategy highlights the potential to advance economic growth, environmental sustainability and inclusive digital transformation.

    Key policy differences

    Rwanda’s policy ties data control to national security. This is rooted in its traumatic history of identity-based violence. Ghana, by contrast, frames AI as a tool for attracting foreign investment rather than a safeguard against state fragility.

    The policies also differ in how they manage foreign influence. Rwanda has a “defensive” stance towards global tech powers; Ghana’s is “accommodative”. Rwanda works with partners that allow it to follow its own policy. Ghana, on the other hand, embraces partnerships, viewing them as the start of innovation.

    While Rwanda’s approach is targeted and problem-solving, Ghana’s strategy is expansive, aiming for large-scale modernisation and private-sector growth. Through state-led efforts, Rwanda focuses on using AI to solve immediate challenges such as rural healthcare access and food security. In contrast, Ghana looks at using AI more widely – in finance, transport, education and governance – to become a regional tech hub.

    Constraints and solutions

    The effectiveness of these AI policies is held back by broader systemic challenges. The US and China dominate in setting global standards, so local priorities get sidelined. For example, while Rwanda and Ghana advocate for ethical AI, it’s hard for them to hold multinational corporations accountable for breaches.

    Energy shortages further complicate large-scale AI adoption. Training AI models requires reliable electricity – a scarce resource in many parts of the continent.

    To address these gaps, I propose the following:

    • Invest in digital infrastructure, education and local start-ups to reduce dependency on foreign tech giants.

    • Shape international AI governance forums so that policies reflect continental realities, not just western or Chinese ones. This will mean using collective bargaining power through the African Union to bring Africa’s development needs to the fore. It could also help with digital sovereignty issues and equitable access to AI technologies.

    • Embed African ethical principles – including communal rights and post-colonial sensitivities – in AI policies.

    Thompson Gyedu Kwarkye does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. AI policies in Africa: lessons from Ghana and Rwanda – https://theconversation.com/ai-policies-in-africa-lessons-from-ghana-and-rwanda-253642

    MIL OSI

  • MIL-OSI Submissions: How the UK became dependent on asylum hotels

    Source: The Conversation – UK – By Jonathan Darling, Professor in Human Geography, Durham University

    Hotels housing asylum seekers have become hotspots of protest. Jory Mundy/Shutterstock

    Chancellor Rachel Reeves’s pledge to “end the costly use of asylum hotels in this parliament” is a rare thing in British politics: a policy supported by all major political parties and a range of refugee charities.

    Reeves says ending the use of asylum hotels will save the Treasury £1 billion a year. But for a government rapidly losing support, ending “hotel Britain” is also central to its public pledge to regain control over the asylum system.

    At a time of financial instability and declining living standards, the use of hotels to house asylum seekers has increased substantially. Hotels are associated with escape, luxury or business – which helps explain why their use has become such a flashpoint for political controversy and has fuelled resentment and tensions in some communities.

    How did we get here?

    Under the UN refugee convention, Britain has a legal obligation to house people while they are waiting for a decision on their claim to refugee status. Responsibility for housing asylum seekers lies with the Home Office, which has contracts with three private companies to offer accommodation. Hotels have historically been a small part of this housing, only used for short-term emergency cover when housing in the private rental sector is unavailable.

    Hotel use rose sharply during the COVID-19 pandemic. Private contractors responsible for housing asylum seekers were unable to find enough space in more routine “dispersal accommodation”.

    Dispersal accommodation involves housing asylum seekers in shared properties across the country. These are usually shared houses or flats that private providers procure from the private rental sector, or from subcontracted housing associations. Local authority properties are not used. Asylum seekers have no choice where they are housed.

    Once someone receives a decision on their asylum application (granted or refused refugee status), the Home Office stops providing them with housing and support. During the pandemic, however, the Home Office temporarily suspended this practice to avoid making people homeless during lockdown. This meant more people were staying longer in asylum housing. Hotels provided emergency housing during this period.

    Following the pandemic, the number of asylum applications to the UK increased, peaking at 108,138 in 2024. Decision making on asylum claims had slowed dramatically since 2016, leaving people in the asylum process and in accommodation for longer periods of time. This increased pressure on housing and made it difficult for contractors to move people out of hotels.

    At the height of hotel use, in June 2023, 51,000 asylum seekers were housed in more than 400 hotels across the UK, costing the Home Office £8 million a day. By March 2025, this had fallen to 32,345 asylum seekers in 218 hotels.

    The use of hotels on this scale indicates that the system for housing asylum seekers in Britain is failing. While hotels can provide adaptable emergency accommodation, they are not sustainable housing solutions, nor do they offer the security of a home.

    The costs of ‘hotel Britain’

    In 2024, hotel accommodation for asylum seekers cost on average £158 per night. Dispersal accommodation, on the other hand, cost on average £20 per night. The total asylum accommodation system cost £4.7 billion, £3.1 billion of which went on hotels.
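    These figures can be roughly cross-checked against the peak-period numbers above. A back-of-envelope calculation in Python – illustrative only, since the per-night averages are 2024 figures while the 51,000 occupants and £8 million a day refer to the June 2023 peak:

        # Rough cross-check of the article's figures (illustrative only: the
        # per-night averages are for 2024; occupancy is the June 2023 peak).
        hotel_per_night = 158          # average cost in £ per person per night in a hotel (2024)
        dispersal_per_night = 20       # average cost in £ per person per night in dispersal housing (2024)
        peak_hotel_occupants = 51_000  # people housed in hotels at the June 2023 peak

        daily_hotel_cost = peak_hotel_occupants * hotel_per_night
        daily_dispersal_cost = peak_hotel_occupants * dispersal_per_night
        print(f"hotels: £{daily_hotel_cost / 1e6:.1f}m per day")         # ~£8.1m, close to the £8m a day quoted
        print(f"dispersal: £{daily_dispersal_cost / 1e6:.1f}m per day")  # ~£1.0m for the same population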

    While costly to taxpayers, this was highly profitable for those offering accommodation.

    In May 2025, the three providers contracted by the government to deliver housing were reported to have made £380 million in profit from their accommodation contracts. The Britannia Hotels chain alone reportedly made over £150 million in profit since first accommodating asylum seekers in 2014.

    Read more:
    The UK spent a third of its international aid budget on refugees in the UK – what it’s paying for, and why it’s a problem

    The costs have been more than financial. Asylum seekers have repeatedly raised the negative effects on mental and physical health associated with confinement and isolation in hotels, a lack of privacy and personal space and the limited access to support services.

    Reports of hotels infested with insects, collapsing ceilings, and rude and abusive staff reflect a model of accommodation that is ill-suited to supporting the needs of vulnerable residents. It is a far cry from the “luxury” conditions often described in media coverage.

    Hotels have also become focal points for community tensions. Local residents were rarely informed of the use of a hotel in advance, and hotels were often closed to other guests at short notice, with reports of weddings and other events being cancelled.

    These cases created a damaging sense of community powerlessness. Following a decade of austerity, the use of a town’s hotel to indefinitely accommodate asylum seekers was often described as another resource being “taken away” from communities. Far-right groups were quick to exploit these tensions, circulating details of hotels accommodating asylum seekers and organising protests.

    Communities not camps

    To end the use of hotels, government proposals have focused on expanding the use of large-scale accommodation sites. This suggests that lessons from the last government have been ignored in the rush to end hotel accommodation.

    Mass accommodation sites, such as Wethersfield camp in Essex, are not able to provide sustainable and dignified accommodation. Using former military sites has been found to be more expensive than hotels and can further isolate and stigmatise asylum seekers.

    Sustainable accommodation that meets the needs of asylum seekers and the public requires long-term strategy to replace short-term profiteering. Part of that strategy should involve using local authority expertise to provide dispersal housing in communities. Experience shows that this is the best way to reduce the costs of asylum while supporting those seeking refuge. The government’s resettlement scheme for refugees fleeing the conflict in Syria shows that engaging local authorities in housing and support is key to the success of integration.

    Any changes to asylum housing will create pressures for a UK housing sector in crisis. Yet the financial and social costs of the current system cannot be ignored. Supporting local authorities in the development and delivery of social housing must be a priority for the government, and housing asylum seekers should not be seen as an issue separate to that commitment.

    Jonathan Darling has received funding from the Economic and Social Research Council. He is affiliated with the No Accommodation Network as a trustee.

    ref. How the UK became dependent on asylum hotels – https://theconversation.com/how-the-uk-became-dependent-on-asylum-hotels-258767

    MIL OSI

  • MIL-OSI Submissions: First fossil pangolin tracks discovered in South Africa

    Source: The Conversation – Africa (2) – By Charles Helm, Research Associate, African Centre for Coastal Palaeoscience, Nelson Mandela University

    A team of scientists who study vertebrate fossil tracks and traces on South Africa’s southern Cape coast have identified the world’s first fossil pangolin trackway, with the help of Indigenous Master Trackers from Namibia. Ichnologists Charles Helm, Clive Thompson and Jan De Vynck tell the story.

    What did you find?

    A fossil trackway east of Still Bay in South Africa’s Western Cape province was found by a colleague in 2018 and brought to our attention. It lay on the surface of a loose block of aeolianite rock (formed from hardened sand) that had come to rest near the high-tide mark in a private nature reserve.

    We studied it, but our cautious approach meant we could not confidently pin down what had made the tracks. The trackway remained enigmatic.

    How did you eventually identify it?

    In 2023, we were working with two Ju/’hoansi San colleagues from north-eastern Namibia, ǂoma Daqm and /uce Nǂamce, who have been interpreting tracks in the Kalahari all their lives. They are certified as Indigenous Master Trackers and we consider them to be among the finest trackers in the world today. We’d called on their expertise to help us understand more about the fossil tracks on the Cape south coast. One example of the insights they provided was of hyena tracks, and we have published on this together.

    Read more:
    First fossil hyena tracks found in South Africa – how expert animal trackers helped

    We showed them the intriguing trackway, which consisted of eight tracks and two scuff marks made, apparently, by the animal’s tail. They examined the track-bearing surface at length, conversed with one another for some time, and then made their pronouncement: the trackway had been registered by a pangolin.

    This was an astonishing claim, as no fossilised pangolin tracks had previously been recorded anywhere in the world.

    It also confirms that pangolins were once distributed across a larger range than they are now.

    We then created three-dimensional digital models of the trackway, using a technique called photogrammetry.

    We shared these images with other tracking and pangolin experts in southern Africa (including CyberTracker, the Tracker Academy, the African Pangolin Working Group, wildlife guides and a pangolin researcher at the Tswalu Foundation). There were no dissenting voices: not surprisingly, it was agreed that our San colleagues were very likely correct in their interpretation.

    There is something really special about a fossil trackway, compared with fossil bones – it seems alive, as if the animal could have registered the tracks yesterday, rather than so long ago.

    What are the characteristics of pangolin tracks?

    Pangolins are mostly bipedal (walking on two legs), with a distinctive, relatively ponderous gait. Track size and shape, the distance between the tracks, and the width of the trackway all provide useful clues, as do the tail scuff marks and the absence of obvious digit impressions. A pangolin hindfoot track, in the words of our Master Tracker colleagues, looks as if “a round stick had been poked into the ground”. And being slightly wider at the front end, it has a somewhat triangular shape.

    Pangolin walking (video in slow motion)

    Our Master Tracker colleagues are familiar with the tracks of Temminck’s pangolin (Smutsia temminckii) in the Kalahari, which is probably the species that registered the tracks now evident in stone on the Cape coast. Other trackmaker candidates, such as a serval with its slim straddle, were considered, but could be excluded or regarded as far less likely.

    How old is the fossil track and how do you know?

    The surface would have consisted of loose dune sand when the pangolin walked on it. Now it’s cemented into rock. We work with a colleague, Andrew Carr, at the University of Leicester in the UK. He uses a technique known as optically stimulated luminescence to obtain the age of rocks in the area.

    The results he provided for the region suggest that these tracks were made between 90,000 and 140,000 years ago, during the “Ice Ages”. For much of this time the coastline might have been as much as 100km south of its present location.

    What’s important about this find?

    Firstly, this demonstrates what you can uncover when you bring together different kinds of knowledge: our western scientific approach combined with the remarkable skill sets of the Master Trackers, which have been inculcated in them from a very young age.

    Without them, the trackway would have remained enigmatic, and would have deteriorated in quality due to erosion without the trackmaker ever being identified.

    Read more:
    Fossil treasure chest: how to preserve the geoheritage of South Africa’s Cape coast

    Secondly, we hope it brings attention to the plight of the pangolin in modern times. There are eight extant pangolin species in the world today, and all are considered to be threatened with extinction. Pangolin meat is regarded as a delicacy, pangolin scales are used in traditional medicines, and pangolins are among the most trafficked wild animals on earth. Large numbers in Africa are hunted for their meat every year.

    What does the future hold?

    Our San Indigenous Master Tracker colleagues have just completed their third visit to the southern Cape coast, thanks to funding from the Discovery Wilderness Trust.

    The results have once again been both unexpected and stupendous, and their tracking skills have again been demonstrated to be unparalleled. Many more publications will undoubtedly ensue, bringing their expertise to the attention of the wider scientific community and anyone interested in our fossil heritage or in ancient hunter-gatherer traditions.

    We hope that our partnership continues to lead to our mutual benefit as we probe the secrets of the Pleistocene epoch by following the spoor of ancient animals.

    Clive Thompson is a trustee of the Discovery Wilderness Trust, a non-profit organisation that supports environmental conservation and the fostering of tracking skills.

    Charles Helm and Jan Carlo De Vynck do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. First fossil pangolin tracks discovered in South Africa – https://theconversation.com/first-fossil-pangolin-tracks-discovered-in-south-africa-253383

    MIL OSI

  • MIL-OSI Submissions: Oasis are on the road again. But has the ticket scandal spelled the end of dynamic pricing?

    Source: The Conversation – UK – By Jonathan Fry, Lecturer in Business and Management, Aberystwyth University

    When the Oasis reunion tour was announced last summer, there was a scramble to get hold of tickets. Very quickly, there followed another scramble – to understand a phenomenon known as “dynamic pricing”. This is the practice of pricing one product or service differently for different customers. Prices are adjusted according to supply and demand and can also be determined by things like the timing of the purchase.
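    To make the mechanics concrete, here is a minimal sketch in Python of how demand-responsive pricing can work. It is illustrative only – not how Ticketmaster or any named platform actually sets prices – and the demand signal, the clamp bounds and the numbers are all invented for the example:

        # Illustrative dynamic-pricing sketch: compare the observed selling rate
        # with the rate needed to sell out on time, and scale the price accordingly.
        # All parameters are invented; real platforms use far richer demand models.

        def dynamic_price(base_price, tickets_left, hours_left, sales_last_hour):
            """Raise the price when demand outstrips remaining supply; ease it when sales lag."""
            if hours_left <= 0 or tickets_left <= 0:
                return base_price
            needed_rate = tickets_left / hours_left        # sales per hour required to just sell out
            demand_ratio = sales_last_hour / needed_rate   # >1 means selling faster than needed
            multiplier = min(2.5, max(0.8, demand_ratio))  # clamp keeps prices within set bounds
            return round(base_price * multiplier, 2)

        # Example: £150 face value, 5,000 tickets left, 24 hours to go,
        # 1,500 sold in the past hour -> the clamp caps the price at 2.5x.
        print(dynamic_price(150.0, 5_000, 24.0, 1_500))  # 375.0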

    But last summer, this seemingly opaque pricing structure left many fans angry, confused and lashing out at the band themselves.

    Although dynamic pricing is generally accepted in other industries – airline and train tickets, for example – its use in the events sector is contentious. It can also pose reputational risks to organisations – and even artists – within the industry. Advocates of dynamic pricing say that prices are fair because they are set by the market.

    Now that Oasis are on the road again, it’s a good time to reflect on how the issue has played out since tickets first went on sale in August 2024. Some tickets were priced £200 above the original price advertised. That significant mark-up did not include any added VIP or hospitality benefits.

    My research focuses on VIP and hospitality event tickets, but also considers broader ticketing issues such as dynamic pricing.

    In 2022, I interviewed a freelance VIP package manager for a highly successful and famous musician. This gave an important insight into the potential for fans to be upset and frustrated on the day of the event if they had misunderstood what they could expect from “platinum” (dynamic) tickets.

    I’ve had instances where somebody turned up at check in and you know, they don’t know what they’ve bought and thought it was a VIP ticket: “How much did you pay for your ticket?”, “Oh, I paid US$1,500…”. So, I’m like: “Okay, well that sounds like it’s … a VIP ticket, the highest-level VIP ticket. But your name isn’t on my list and my list never really lies. But, you know, I’m going to figure it out with the box office. I’ll give you the benefit of the doubt. Go on through to the champagne reception.” So, they do that, and it’s very innocent. Nobody is trying to pull something over anybody.

    I’ve had people come to me. “Oh, the person next to me said that they’ve got a meet and greet after the show, they’ve got a goodie bag. Where’s my goodie bag? They’re literally sat next to me. We’ve compared ticket prices, what’s going on?” I say: “You’ve bought a platinum ticket.” They ask: “What does that mean? What do I get? Surely, I must get something?” I say: “No, you don’t.” It’s horrible.

    And in 2023, I surveyed 312 consumers, of whom only 17% said they would purchase a dynamically priced ticket. Awareness of dynamic pricing for event tickets was probably lower then than it is in 2025.

    Avoiding a backlash

    More recently, research found that, of more than 8,000 people surveyed, 91% agreed that dynamic pricing for event tickets should be banned in the UK. Some 47% had experienced dynamic pricing when shopping for tickets to live music events – but only 11% felt the concept was communicated to them effectively before they purchased.

    Other research has sought to explain the consumer protection and competition laws that apply to dynamic pricing in the UK. It also provided the context of the Oasis reunion tour dates. The researchers propose recommendations for businesses to consider before adopting dynamic pricing strategies.

    The advice includes making sure that price ranges are clearly available to consumers, and recommends that businesses should be careful in imposing time limits on completing purchases to avoid creating panic in the buyer. Businesses should also make sure the terms of their contracts are easily understood.

    The UK regulator, the Competition and Markets Authority (CMA), expressed concern that the selling platform Ticketmaster may have breached consumer protection law. It said Ticketmaster had labelled certain seated tickets as “platinum” and sold them for nearly 2.5 times the price of equivalent standard tickets. The CMA also said that Ticketmaster did not properly explain that the tickets offered no additional benefits – and were often located in the same area of the stadium.

    Fans have not had the chance to see Oasis live since 2009.
    Amra Pasic/Shutterstock

    Ticketmaster did not inform consumers that there were two categories of standing tickets at different prices, the CMA said. All the cheaper standing tickets were sold first before the more expensive ones were released. The CMA said this resulted in many fans waiting in a queue without understanding what they would be paying. Then, when they finally had the opportunity to buy tickets, they had to decide on the spot whether to pay a much higher price than they had expected.

    Ticketmaster has since said in a statement: “We strive to provide the best ticketing platform through a simple, transparent and consumer-friendly experience. We welcome the CMA’s input in helping make the industry even better for fans.”

    So what now for dynamic pricing? As an events expert, I believe the industry will continue to use it. But in future, it is likely that consumers will be more savvy and aware of these practices. The industry will have to be led by ideas of best practice – and become much more transparent.

    Jonathan Fry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Oasis are on the road again. But has the ticket scandal spelled the end of dynamic pricing? – https://theconversation.com/oasis-are-on-the-road-again-but-has-the-ticket-scandal-spelled-the-end-of-dynamic-pricing-256533

    MIL OSI

  • MIL-OSI Submissions: Films can change the world – why universities and film schools should teach impact strategies

    Source: The Conversation – Africa – By Liani Maasdorp, Senior lecturer in Screen Production and Film and Television Studies, University of Cape Town

    When was the last time a film changed the way you saw the world? Or the way you behaved?

    Miners Shot Down (2014) countered mainstream media narratives to reveal how striking mine workers were gunned down by police at Marikana in South Africa. Blackfish (2013) made US theme park operator SeaWorld’s stock price plummet. And Virunga (2014) stopped the British oil company Soco International from drilling for oil in the Congolese national park from which the film takes its name.

    These films were all at the centre of impact campaigns designed to move people to act. In filmmaking, “impact” may involve bringing people together around important issues. It could also lead to people changing their minds or behaviour. It might change lives or policies.

    Impact is achieved not just by a film’s own power to make people aware of and care about an issue. It requires thinking strategically about how to channel that emotion into meaningful and measurable change.

    Although it is a growing field, for which there are numerous funding opportunities, impact producing is seldom taught at film schools or in university film programmes. Teaching tends to be ad hoc or superficial.

    As scholars who study and teach film, we wanted to know more about where and how people are learning about impact producing; the benefits of learning – and teaching – impact production; and the barriers that prevent emerging filmmakers and film students in Africa and the rest of the majority world from learning this discipline. (Also called the “global south” or the “developing world”, majority world is a term used to challenge the idea that the west is the centre of the world.)

    So, for a recent article in Film Education Journal, we conducted desk research, a survey shared with the members of the Global Impact Producers Alliance and interviews with a sample of stakeholders, selected based on their knowledge of teaching impact or experience of learning about it.

    We found that there are university and college courses that focus on social issue filmmaking, but hardly any that prioritise social impact distribution. Access to free in-person training is highly competitive, generally requiring a film in production. We also found that free online resources – though numerous – can be overwhelming to those new to the field. And the majority of the courses, labs and resources available have been created in the west.

    We believe it is important for film students and emerging filmmakers to know at least the basics of impact producing, for a range of reasons. Film is a powerful tool that can be used to influence audience beliefs and behaviour. Students need to know how they are being influenced by the media – and also how they can use it to advance causes that make the world more just and sustainable. The skills are transferable to other story forms, which empowers students to work in different contexts, in both the commercial and independent film sectors. It can benefit a student’s career progression and future job prospects.

    Existing opportunities

    We found that current impact learning opportunities range in depth and accessibility.

    Many webinars, masterclasses and short one-off training opportunities are freely available online. But some are not recorded: you have to be there in person. Many form part of film festivals and film market programmes, which charge registration fees.

    Impact “labs” are on offer around the world. They usually run for less than a week and are offered by different organisations, often in collaboration with Doc Society (the leading proponent of impact production worldwide). Although they are almost all free of charge, the barrier to entry is high: they are aimed at filmmakers with social impact films already in the making.

    We found that the postgraduate programmes (MA and PhD) most aligned with this field are offered by a health sciences university in the US, Saybrook University, and are very expensive.

    African content, global reach

    In our journal article we presented two impact learning opportunities from the majority world as case studies. One, the Aflamuna Fellowship, is an eight-month in-person programme based in Beirut, Lebanon. It combines theoretical learning, “job shadowing” on existing impact campaigns, and in-service learning through designing and running impact campaigns for new films. This programme has proven very helpful to filmmakers approaching topics that are particularly sensitive within the Middle East and north Africa regions, such as LGBTQ+ rights.

    The other, the UCT/Sunshine Cinema Film Screening Impact Facilitator short course, is based in South Africa but is hosted entirely online. It was developed by the University of Cape Town Centre for Film and Media Studies and the mobile cinema distribution NGO Sunshine Cinema and launched in 2021. We are both connected to it – one as course convenor (Maasdorp) and the other (Loader) as one of the 2023 alumni.

    Self-directed learning (including learning videos, prescribed films, readings and case studies) is followed by discussions with peers in small groups and live online classes with filmmakers, movement builders and impact strategists. The final course assignment is to plan, market, host and report on a film screening and facilitate an issue-centred discussion with the audience. Topics addressed by students in these impact screenings are diverse, ranging from voter rights, to addiction, to climate change, to gender-based violence.

    Both case studies offer powerful good-practice models in impact education. Projects developed as part of these programmes have gone on to become successful examples of impact production within the industry. The documentary Lobola, A Bride’s True Price? (2022, directed by Sihle Hlophe), for instance, received wide-reaching festival acclaim, winning several prizes across Africa. Both programmes combine theoretical learning; discussion of case studies relevant to the local context; engagement with experienced impact workers; and application of the learning in practice.

    It is clear from this study that there is a hunger for more structured impact learning opportunities globally, and for local, context-specific case studies from around the world.

    Liani Maasdorp is the convenor of the UCT/Sunshine Cinema Impact Facilitator short course. She has in the past received funding from Doc Society and their affiliate projects.

    Reina-Marie Loader does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Films can change the world – why universities and film schools should teach impact strategies – https://theconversation.com/films-can-change-the-world-why-universities-and-film-schools-should-teach-impact-strategies-242043

    MIL OSI

  • MIL-OSI Submissions: Your summer burn survival guide: from sunburn to BBQ mishaps

    Source: The Conversation – UK – By Dan Baumgardt, Senior Lecturer, School of Physiology, Pharmacology and Neuroscience, University of Bristol

    STEKLO/Shutterstock

    Summertime and the living is easy, fish are jumping – and the UK’s appetite for barbecues has left supermarket shelves stripped of burgers and sausages.

    Unfortunately, this BBQ frenzy has already claimed its first casualties, at least in my friendship circle. Over the weekend, a mate of mine, fuelled by Echo Falls Rosé, managed to burn his forearm on the grill rack while flipping burgers. Thankfully, several medically trained friends were on hand to douse the burn with cold water and administer first aid. He escaped relatively unscathed.

    But summer is a hotbed – literally – for burn-related injuries, ranging in severity from mild to life-threatening. Even minor burns deserve serious attention. Yet many people try to brush them off, slap on a brave face, or dismiss sound advice.

    To understand how burns affect the body, it’s helpful to start with a crash course in skin anatomy.

    Anatomy of a burn

    The skin is composed of three distinct layers, each with a specific role. The epidermis is the outer protective layer. It sits above the dermis, which contains your blood vessels, hair follicles, sweat glands and nerve endings that help you sense temperature and touch. The deepest layer is the hypodermis, which is responsible for anchoring the skin to underlying tissues.

    Understanding these layers helps clarify the severity of burns. When exposed to extreme heat, the nerve endings in the skin activate – and, in some cases, are damaged or even destroyed.

    • Superficial burns (also known as first-degree burns): affect the epidermis and sometimes the upper dermis. These burns cause redness and pain (because nerves are irritated but intact). A mild sunburn is a good example.

    • Partial thickness burns (also known as second-degree burns): go deeper into the dermis, resulting in redness, pain and blistering. Many of us have experienced these after touching something unexpectedly hot. Fortunately, quick reflexes often save us from more serious injury.

    • Full thickness burns (also known as third-degree burns): are the most severe. These extend through all three layers of skin. Instead of red, the skin may appear white, grey, or even black due to charring. Counter-intuitively, these burns can be painless because the nerve endings have been destroyed.

    So while it might seem like a good sign if a burn doesn’t hurt, it may actually indicate far more serious harm. And some burn wounds can include a mix of different depth injuries.

    BBQ safely this summer.
    New Africa/Shutterstock

    Size matters, too. Any burn larger than the size of your hand, regardless of type, or affecting sensitive areas warrants a medical opinion. So do any infected, blistering or full thickness burns, any burns associated with smoke inhalation, and burns caused by electricity or chemicals. You may need a tetanus booster if your immunisations aren’t up to date. Burns in children should always receive medical attention, too.

    Summertime burn hazards

    So what dangers lurk beneath the summer sun – some obvious, some less so?

    Sunburn is the most common, and most easily preventable, seasonal burn. It may seem harmless, but sun exposure can cause partial thickness burns, or burns over large surface areas. Worse still, it increases the risk of dehydration, heatstroke, and skin cancer. Please take it seriously. Sun protection is vital.

    While lovely on long summer evenings, campfires pose another risk. Always monitor fires closely, keep flammable liquids well away, and make sure there’s a safe distance between the fire and spectators.

    As we’ve already heard, BBQs – whether at home or on the beach – are also burn hazards. Beach BBQs are popular, but potentially problematic, since they can heat the sand or pebbles to extremely high temperatures. Always keep them well supervised during use, and clear up properly afterwards.

    I’ve seen patients with horrific foot burns from stepping on searing hot sand, including where coals were buried. Hot embers can smoulder unseen for hours. Please don’t bury BBQ remains – have courtesy to other beachgoers, and stay safe.

    What to do after a burn

    Every burn deserves proper care, no matter how small. Burns aren’t just about blisters and peeling – they can lead to long-term complications including infection, tetanus, shock and even permanent scarring – both physical and psychological. And sunburn comes with the risk of a nasty heatstroke.

    Take burns seriously.
    Pavel Vatsura/Shutterstock

    Fortunately, basic first aid can make a big difference:

    • Cool the area under gently running water for at least 20 minutes. Avoid ice or freezing water – it can make things worse.

    • Cover the burn with clingfilm. It protects against infection, doesn’t stick to the wound, and allows for easy monitoring.

    • Decide on medical care. If you’re unsure, always err on the side of caution and get it checked out.

    So while this summer shows no signs of cooling down, make sure you at least stay cool – and safe. Take care around heat sources, and treat every burn with the seriousness it deserves, even if that means a trip to accident and emergency.

    Dan Baumgardt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Your summer burn survival guide: from sunburn to BBQ mishaps – https://theconversation.com/your-summer-burn-survival-guide-from-sunburn-to-bbq-mishaps-260108

    MIL OSI

  • MIL-OSI Submissions: AI in education: what those buzzwords mean

    Source: The Conversation – Africa – By Herkulaas MvE Combrink, Senior lecturer/ Co-Director, University of the Free State

    style-photography/Getty Images

    You’ll be hearing a great deal about artificial intelligence (AI) and education in 2025.

    The UK government unveiled its “AI opportunities action plan” in mid-January. As part of the plan it has awarded funding of £1 million (about US$1.2 million) to 16 educational technology companies to “build teacher AI tools for feedback and marking, driving high and rising education standards”. Schools in some US states are testing AI tools in their classrooms. A Moroccan university has become the first in Africa to introduce an AI-powered learning system across the institution.

    And the theme for this year’s United Nations International Day of Education, observed annually on 24 January, is “AI and education: Preserving human agency in a world of automation”.

    But what does AI mean in this context? It’s often used as a catch-all term in education, frequently conflated with digital skills, online learning platforms, software development, or even basic digital automation.

    This mischaracterisation can warp perceptions and obscure the true potential and meaning of AI-driven technologies. These technologies were developed by scientists and experts in the field, and brought to scale through big tech companies. For many people, the term AI reminds them of systems like OpenAI’s ChatGPT, which is capable of writing essays or answering complex queries. However, AI’s capabilities extend far beyond these applications – and each has unique implications for education.

    Read more:
    ChatGPT is the push higher education needs to rethink assessment

    I am an expert in AI, machine learning, infodemiology – where I study large amounts of information using AI to combat misinformation – knowledge mapping (discovering and visualising the contents of different areas of knowledge), and human language technology (building models that use AI to advance human language, such as live translation tools). I do all of this as the head of the Knowledge Mapping Lab, a research group within the Faculty of Economics and Management Sciences, and co-director of the Interdisciplinary Centre for Digital Futures at the University of the Free State.

    In this article I explain the technologies and science behind the buzzwords to shed light on what terms like machine learning and deep learning mean in education, how such technologies can be – or are already being – used in education, and their benefits and pitfalls.

    Machine learning: personalisation in action

    Machine learning is a subset of AI involving algorithms that learn from data to make predictions or decisions. In education, this can be used to adapt content to individual learners – what’s known as adaptive learning platforms. These can, for example, assess students’ strengths and weaknesses, tailoring lessons to their pace and style.

    Imagine a mathematics app that asks questions based on the curriculum, then uses a learner’s answers to identify where they struggle and adjusts its curriculum to focus on foundational skills before advancing. Although the science is still being explored, that level of personalisation could improve educational outcomes.
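    As a minimal sketch of that idea in Python (the topic names and accuracy threshold are invented; a real platform would rely on a trained model rather than this raw error-rate heuristic):

        # Minimal sketch of an adaptive-learning loop: track each topic's error
        # rate and serve the next question from the learner's weakest topic.
        from collections import defaultdict

        class AdaptiveQuiz:
            def __init__(self, topics):
                self.topics = topics
                self.attempts = defaultdict(int)
                self.correct = defaultdict(int)

            def record(self, topic, was_correct):
                self.attempts[topic] += 1
                self.correct[topic] += was_correct   # bool counts as 0 or 1

            def accuracy(self, topic):
                if self.attempts[topic] == 0:
                    return 0.0                       # unseen topics are treated as weakest
                return self.correct[topic] / self.attempts[topic]

            def next_topic(self):
                # Focus on the foundational skill the learner struggles with most.
                return min(self.topics, key=self.accuracy)

        quiz = AdaptiveQuiz(["fractions", "decimals", "percentages"])
        quiz.record("fractions", False)
        quiz.record("decimals", True)
        print(quiz.next_topic())  # "fractions" (tied at 0 with untried "percentages"; min keeps the first)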

    Deep learning: assessment and accessibility

    Deep learning is a branch of machine learning. It mimics the human brain through neural networks, enabling more complex tasks such as image and speech recognition. In education, this technology has opened new avenues for assessment and accessibility.

    When it comes to assessment, AI-driven tools can assist in marking, analyse handwritten assignments, evaluate speech patterns in language learning, or translate content into multiple languages in real time. Such technologies can both help teachers to lessen their administrative loads and contribute to the learning journey.

    Then there’s inclusivity. Speech-to-text and text-to-speech applications allow students with disabilities to engage with material in ways that were previously impossible.
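    To make “neural network” a little more concrete, here is a toy forward pass in Python/NumPy. The weights are random and untrained, so the output is meaningless; the point is only to show the layered structure that, at vastly greater scale and after training on data, powers image and speech recognition:

        # Toy neural-network forward pass: layers of weighted sums plus
        # non-linearities, loosely inspired by neurons. Weights are random
        # and untrained; this only illustrates the structure.
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(scale=0.05, size=(784, 32)), np.zeros(32)  # input: a 28x28 image, flattened
        W2, b2 = rng.normal(scale=0.05, size=(32, 10)), np.zeros(10)   # output: 10 classes (e.g. digits)

        def forward(image):
            hidden = np.maximum(0, image @ W1 + b1)   # "neurons": weighted sums + ReLU
            logits = hidden @ W2 + b2
            logits -= logits.max()                    # stabilise the softmax numerically
            probs = np.exp(logits)
            return probs / probs.sum()                # probabilities over the 10 classes

        fake_image = rng.random(784)                  # stand-in for a handwritten character
        print(forward(fake_image).round(3))           # ten probabilities summing to 1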

    Natural language processing: beyond ChatGPT

    Natural language processing is a branch of AI that allows computers to aid in the understanding, interpretation and generation of human language. ChatGPT is the most familiar example but it is just one of many such applications.

    The field’s potential for education is huge.

    Natural language processing can be used to:

    • analyse student writing for sentiment and style, providing real-time feedback on the thinking, tone and quality of the writing. This extends beyond syntax and semantics

    • identify plagiarism (a minimal sketch of one common approach follows this list)

    • provide pre-class feedback to learners, which will deepen classroom discussions

    • summarise papers

    • translate complex texts into more digestible formats.
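    As a minimal sketch of the plagiarism item above, using the common n-gram-overlap idea (the texts and the flagging threshold are invented; production detectors are far more sophisticated):

        # Minimal n-gram-overlap sketch for plagiarism screening: measure how
        # many word trigrams two texts share. Texts and threshold are invented;
        # real detectors also handle paraphrase, citation and massive corpora.

        def trigrams(text):
            words = text.lower().split()
            return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

        def similarity(a, b):
            """Jaccard similarity of word trigrams: 0 = nothing shared, 1 = identical."""
            ta, tb = trigrams(a), trigrams(b)
            return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

        submitted = "the mitochondria is the powerhouse of the cell and produces energy"
        source = "the mitochondria is the powerhouse of the cell"
        score = similarity(submitted, source)
        print(f"similarity: {score:.2f}")  # 0.67 here
        if score > 0.3:                    # invented threshold
            print("flag for human review")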

    Reinforcement learning: simulating and gamifying education

    Gamifying education is a way to keep kids engaged while they learn in a virtual space.
    sritanan/Getty Images

    In reinforcement learning, computer systems learn through trial and error.

    This is particularly promising in gamified educational environments. These are platforms where the principles of gamification and education are applied in a virtual world that students “play” through. They learn through playing. Over time, the system learns how to adapt itself to make the content more challenging based on what the student has already learned.
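    As a minimal sketch of that trial-and-error loop, here is an epsilon-greedy bandit in Python choosing between invented difficulty levels – a simple stand-in for the richer reinforcement learning a real platform would use:

        # Minimal trial-and-error sketch: an epsilon-greedy bandit that learns
        # which difficulty level earns the best reward. The levels and the
        # reward signal are invented for the example.
        import random

        levels = ["easy", "medium", "hard"]
        value = {lvl: 0.0 for lvl in levels}   # running estimate of each level's reward
        count = {lvl: 0 for lvl in levels}

        def choose(epsilon=0.1):
            # Mostly exploit the best-known level, occasionally explore others.
            if random.random() < epsilon:
                return random.choice(levels)
            return max(levels, key=value.get)

        def update(lvl, reward):
            # Incremental mean: nudge the estimate toward the observed reward.
            count[lvl] += 1
            value[lvl] += (reward - value[lvl]) / count[lvl]

        # One simulated interaction: reward "medium" as appropriately challenging.
        lvl = choose()
        update(lvl, reward=1.0 if lvl == "medium" else 0.2)  # invented reward model
        print(value)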

    Challenges

    Of course, these technologies aren’t without their flaws and ethical issues. They raise questions about equity, for instance: what happens when students without access to such tools fall further behind? How can algorithms be prevented from reinforcing biases already present in educational data? In the earlier mathematical example this might not be as much of an issue – but imagine the unintended consequences of reinforcing bias in subjects like history.

    Accuracy and fairness are key concerns, too. A poorly designed model could misinterpret accents or dialects, disadvantaging specific groups of learners.

    An over-reliance on such tools could also lead to an erosion of critical thinking skills among both students and educators. How do we strike the right balance between assistance and autonomy?

    And, from an ethical point of view, what if AI is allowed to track and adapt to a student’s emotional state? How do we ensure that the data collected in such systems is used responsibly and securely?

    Experimentation

    AI’s potential needs to be explored through experimentation. But this works best if managed under controlled environments. One way to do this is through regulatory AI “sandboxes” – spaces in which educators and designers can experiment with new tools and explore applications.

    This approach has been used at the University of the Free State since 2023. As part of the Interdisciplinary Centre for Digital Futures, the sandboxes serve as open educational resources, offering videos, guides and tools to help educators and institutional leaders understand and responsibly implement AI technologies. The resource is open to both students and educators at the university, but our primary focus is on improving educators’ skills.

    AI in education is here to stay. If its components are properly understood, and its implementation is driven by good research and experimentation, it has the potential to augment learning while education remains human-centred, inclusive and empowering.

    This work is part of the Interdisciplinary Centre for Digital Futures (ICDF) research in the AI4ED focus area, as well as a research focus of Herkulaas Combrink, who is employed at the University of the Free State as a Senior Lecturer and Co-Director. Inputs on this work come from a variety of different collaborators, including the Digital Scholarship Centre within the UFS Sasol Library. Additionally, some of the AI4ED principles were part of a PhD and ongoing investigations into the application of AI in education, infodemics and other societal domains. The ICDF does receive funding for different research projects.

    ref. AI in education: what those buzzwords mean – https://theconversation.com/ai-in-education-what-those-buzzwords-mean-247587

    MIL OSI

  • MIL-OSI Submissions: Babies have more of this Alzheimer’s-linked protein than dementia patients – study raises hope for future treatments

    Source: The Conversation – UK – By Rahul Sidhu, PhD Candidate, Neuroscience, University of Sheffield

    FamVeld/Shutterstock.com

    A protein long blamed for the brain damage seen in Alzheimer’s disease has now been found in astonishingly high levels in healthy newborn babies, challenging decades of medical dogma.

    The discovery could transform our understanding of both brain development and Alzheimer’s disease itself. The protein, called p-tau217, has been viewed as a hallmark of neurodegeneration – yet a new study reveals it’s even more abundant in the brains of healthy infants.

    Rather than being toxic, p-tau217 may be essential for building the brain during early development.

    To understand why this matters, it helps to know what tau normally does. In healthy brains, tau is a protein that helps keep brain cells stable and allows them to communicate – essential functions for memory and overall brain function. Think of it like the beams inside a building, supporting brain cells so they can function properly.

    But in Alzheimer’s disease, tau gets chemically changed into a different form called p-tau217. Instead of doing its normal job, this altered protein builds up and clumps together inside brain cells, forming tangles that impair cell function and lead to memory loss typical of the disease.

    For years, scientists have assumed high levels of p-tau217 always spell trouble. The new research suggests they’ve been wrong.

    Dementia explained.

    An international team led by the University of Gothenburg analysed blood samples from over 400 people, including healthy newborns, young adults, elderly adults and those with Alzheimer’s disease. What they found was striking.

    Premature babies had the highest concentrations of p-tau217 of anyone tested. Full-term babies came second. The earlier the birth, the higher the protein levels – yet these infants were perfectly healthy.

    These levels dropped sharply during the first months of life, remained very low in healthy adults, then rose again in people with Alzheimer’s – though never reaching the sky-high levels seen in newborns.

    The pattern suggests p-tau217 plays a crucial role in early brain development, particularly in areas controlling movement and sensation that mature early in life. Rather than causing harm, the protein appears to support the building of new neural networks.

    Rethinking Alzheimer’s disease

    The implications are profound. First, the findings clarify how to interpret blood tests for p-tau217, recently approved by US regulators to aid dementia diagnosis. High levels don’t always signal disease – in babies, they’re part of normal, healthy brain development.

    More intriguingly, the research raises a fundamental question: why can newborn brains safely handle massive amounts of p-tau217 when the same protein wreaks havoc in older adults?

    If scientists can unlock this protective mechanism, it could revolutionise Alzheimer’s treatment. Understanding how infant brains manage high tau levels without forming deadly tangles might reveal entirely new therapeutic approaches.

    The findings also challenge a cornerstone of Alzheimer’s research. For decades, scientists have believed p-tau217 only increases after another protein, amyloid, starts accumulating in the brain, with amyloid triggering a cascade that leads to tau tangles and dementia.

    But newborns have no amyloid buildup, yet their p-tau217 levels dwarf those seen in Alzheimer’s patients. This suggests the proteins operate independently and that other biological processes – not just amyloid – regulate tau throughout life.

    The research aligns with earlier animal studies. Research in mice showed tau levels peak in early development then fall sharply, mirroring the human pattern. Similarly, studies of foetal neurons found naturally high p-tau levels that decline with age.

    If p-tau217 is vital for normal brain development, something must switch later in life to make it harmful. Understanding what flips this biological switch – from protective to destructive – could point to entirely new ways of preventing or treating Alzheimer’s.

    For decades, Alzheimer’s research has focused almost exclusively on the damage caused by abnormal proteins. This study flips that perspective, showing one of these so-called “toxic” proteins may actually play a vital, healthy role at the start of life.

    Babies’ brains might hold the blueprint for keeping tau in check. Learning its secrets could help scientists develop better ways to preserve cognitive function as we age, transforming our approach to one of medicine’s greatest challenges.

    Rahul Sidhu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Babies have more of this Alzheimer’s-linked protein than dementia patients – study raises hope for future treatments – https://theconversation.com/babies-have-more-of-this-alzheimers-linked-protein-than-dementia-patients-study-raises-hope-for-future-treatments-259838

    MIL OSI

  • MIL-OSI Submissions: 9 million Ethiopian children have been forced out of school: what the government must do

    Source: The Conversation – Africa – By Tebeje Molla, Senior Lecturer, School of Education, Deakin University

    More than nine million Ethiopian children are currently out of school. They are caught in the crossfire of armed conflicts, natural disasters, tribal tensions and economic hardships.

    In 2023, Ethiopia had a total school-aged population of 35,444,482 children, about 52% of them primary school-aged. In the same year, only 22,949,597 children were enrolled in schools, leaving over 35% of school-aged children out of school. In the past year, the ongoing humanitarian crisis has worsened the situation, forcing even more children out of school.
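    The “over 35%” figure follows directly from those two totals, as this quick check shows:

        # Quick check of the out-of-school share from the totals quoted above.
        school_aged = 35_444_482
        enrolled = 22_949_597
        out_of_school = school_aged - enrolled
        print(f"{out_of_school:,} children ({out_of_school / school_aged:.1%})")
        # 12,494,885 children (35.3%)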

    Armed conflict erupted in 2020 between the federal government and Tigray regional government. The crisis was compounded by armed resistance to the government in the two largest regional states, Amhara and Oromia. There are also ongoing conflicts between the pastoralist communities of the Afar and Somali regions.

    The Tigray war drained the nation’s economic resources. The destruction of infrastructure, particularly schools, in this conflict forced over a million children out of school. Since then conflict in the nine regions has also undermined government control, causing widespread disruptions to essential services, including education and healthcare.

    Most recently, natural disasters, including earthquakes in the eastern parts of the country, have displaced tens of thousands of civilians, including children.

    Scale of the crisis

    The numbers tell the story. As of November 2024, around 10,000 schools were damaged and over 6,000 schools were closed due to conflict, violence and natural disasters. The worst hit regions are Amhara, Oromia, Tigray, Somali and Afar.

    In three of these – Amhara, Oromia and Tigray – a total of 8,910,000 children are out of school. Amhara is particularly hard hit, with only 2.3 million of 7 million students enrolled for the current academic year.

    I am a scholar of education policy with close to 15 years of research on Ethiopia’s education sector. It’s my view that children have borne the heaviest burden from the challenges that have overwhelmed the country’s capacity to provide essential services.

    Leaving millions of children out of school has devastating consequences. There is a well-documented increased risk of child labour, early marriage and other forms of exploitation. Children who miss out on early education also face lifelong disadvantages, including limited employment opportunities and greater vulnerability to poverty and social exclusion.

    When children are not in school and miss out on learning, the consequences are far-reaching. At a personal level, disrupted education hinders their cognitive, social and emotional development. It limits their ability to acquire skills needed for personal growth and future employment. At the societal level, a lack of education drives cycles of poverty, reduces economic productivity and weakens social cohesion. Under-educated citizens are less equipped to take an active part in civic life. It also stifles innovation, worsens inequalities and holds back national progress and stability.

    Despair and hopelessness have driven countless young people from Ethiopia to risk their lives on dangerous migration routes to the Middle East. The loss of educational opportunities for millions of children also undermines the nation’s capacity to develop the human capital needed for its growth. An uneducated population is more susceptible to being drawn into ongoing conflict.

    What can be done?

    Prime Minister Abiy Ahmed came to power in April 2018 with a pledge of change for Ethiopia. But Abiy’s government often sidesteps critical challenges, choosing to amplify positive narratives over confronting pressing issues.

    Instead of tackling the crisis directly, Abiy has left regional state governments to find resources. For example, in November 2024, it was left to an advocacy group formed by Amhara’s ten public universities to appeal to donors for aid for education.

    In early January 2025, the Amhara regional state government also asked stakeholders to help reopen closed schools. In Ethiopia’s federal structure, the education ministry sets national policies and standards, and manages higher education. Regional governments carry out these policies, oversee primary and secondary education, and adapt curricula to local contexts. Budgets are shared based mainly on the population size of each regional state.

    Denying the reality of the crisis only deepens the wounds of the nation and delays the necessary actions for peace and recovery. It’s now time for Abiy’s government to take action. It must:

    • confront the crisis

    • engage in dialogue to resolve conflicts

    • appeal for international support.

    The scale of the disruption demands a coordinated and comprehensive humanitarian response. Global development aid partners need to recognise that the education crisis in Ethiopia deserves immediate and sustained attention. Another round of global funds dedicated to education in emergencies is urgently needed.

    The collective duty should extend beyond providing immediate relief. It should also encourage the Ethiopian government to resolve its various internal conflicts through peaceful dialogue. Diplomacy, negotiation and reconciliation should take precedence over war and violence.

    Tebeje Molla does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. 9 million Ethiopian children have been forced out of school: what the government must do – https://theconversation.com/9-million-ethiopian-children-have-been-forced-out-of-school-what-the-government-must-do-247697

    MIL OSI

  • MIL-OSI Submissions: ‘Completely unexpected’: Antarctic sea ice may be in terminal decline due to rising Southern Ocean salinity

    Source: The Conversation – UK – By Alessandro Silvano, NERC Independent Research Fellow in Oceanography, University of Southampton

    Adélie penguins rely on Antarctic sea ice for habitat. Nick Dale Photo/Shutterstock

    The ocean around Antarctica is rapidly getting saltier at the same time as sea ice is retreating at a record pace. Since 2015, the frozen continent has lost an area of sea ice comparable in size to Greenland. That ice hasn’t returned, marking the largest global environmental change of the past decade.

    This finding caught us off guard – melting ice typically makes the ocean fresher. But new satellite data shows the opposite is happening, and that’s a big problem. Saltier water at the ocean surface behaves differently from fresher seawater: it draws heat up from the deep ocean, making it harder for sea ice to regrow.

    The loss of Antarctic sea ice has global consequences. Less sea ice means less habitat for penguins and other ice-dwelling species. More of the heat stored in the ocean is released into the atmosphere when ice melts, increasing the number and intensity of storms and accelerating global warming. This brings heatwaves on land and melts even more of the Antarctic ice sheet, which raises sea levels globally.

    Our new study has revealed that the Southern Ocean is changing, but in a different way to what we expected. We may have passed a tipping point and entered a new state defined by persistent sea ice decline, sustained by a newly discovered feedback loop.

    The Southern Ocean surrounds Antarctica, which is fringed by sea ice.
    Nasa

    A surprising discovery

    Monitoring the Southern Ocean is no small task. It’s one of the most remote and stormy places on Earth, and is covered in darkness for several months a year. Thanks to new European Space Agency satellites and underwater robots which stay below the ocean surface measuring temperature and salinity, we can now observe what is happening in real time.

    Our team at the University of Southampton worked with colleagues at the Barcelona Expert Centre and the European Space Agency to develop new algorithms to track ocean surface conditions in polar regions from satellites. By combining satellite observations with data from underwater robots, we built a 15-year picture of changes in ocean salinity, temperature and sea ice.

    What we found was astonishing. Around 2015, surface salinity in the Southern Ocean began rising sharply – just as sea ice extent started to crash. This reversal was completely unexpected. For decades, the surface had been getting fresher and colder, helping sea ice expand.

    The annual summer minimum extent of Antarctic sea ice dropped precipitously in 2015.
    NOAA Climate.gov/National Snow and Ice Data Center

    To understand why this matters, it helps to think of the Southern Ocean as a series of layers. Normally, the cold, fresh surface water sits on top of warmer, saltier water deep below. This layering (or stratification, as scientists call it) traps heat in the ocean depths, keeping surface waters cool and helping sea ice to form.

    Saltier water is denser and therefore heavier. So, when surface waters become saltier, they sink more readily, stirring the ocean’s layers and allowing heat from the deep to rise. This upward heat flux can melt sea ice from below, even during winter, making it harder for ice to reform. This vertical circulation also draws up more salt from deeper layers, reinforcing the cycle.
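
    To see why added salt can dominate here, a minimal sketch using a simplified linear equation of state for seawater density; the coefficients are rough illustrative values, not those used in the study:

    ```python
    # Simplified linear equation of state for seawater (illustrative only):
    # rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
    RHO0 = 1027.0    # reference density, kg/m^3
    ALPHA = 5e-5     # thermal expansion per K (small near freezing)
    BETA = 7.8e-4    # haline contraction per g/kg

    def density(T, S, T0=0.0, S0=34.5):
        """Approximate density (kg/m^3) at temperature T (degC), salinity S (g/kg)."""
        return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

    surface_fresh = density(T=-1.8, S=34.0)   # cold, fresh surface water
    surface_salty = density(T=-1.8, S=34.6)   # same temperature, saltier
    deep_warm     = density(T=1.0,  S=34.7)   # warmer, salty deep water

    print(surface_fresh, surface_salty, deep_warm)
    # Adding ~0.6 g/kg of salt raises surface density by ~0.5 kg/m^3 --
    # enough to erode the layering that keeps deep-ocean heat trapped.
    ```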

    A powerful feedback loop is created: more salinity brings more heat to the surface, which melts more ice, which then allows more heat to be absorbed from the Sun. My colleagues and I saw these processes first hand in 2016-2017 with the return of the Maud Rise polynya, which is a gaping hole in the sea ice that is nearly four times the size of Wales and last appeared in the 1970s.

    What happens in Antarctica doesn’t stay there

    Losing Antarctic sea ice is a planetary problem. Sea ice acts like a giant mirror reflecting sunlight back into space. Without it, more energy stays in the Earth system, speeding up global warming, intensifying storms and driving sea level rise in coastal cities worldwide.

    Wildlife also suffers. Emperor penguins rely on sea ice to breed and raise their chicks. Tiny krill – shrimp-like crustaceans which form the foundation of the Antarctic food chain as food for whales and seals – feed on algae that grow beneath the ice. Without that ice, entire ecosystems start to unravel.

    What’s happening at the bottom of the world is rippling outward, reshaping weather systems, ocean currents and life on land and sea.

    Feedback loops are accelerating the loss of Antarctic sea ice.
    University of Southampton

    Antarctica is no longer the stable, frozen continent we once believed it to be. It is changing rapidly, and in ways that current climate models didn’t foresee. Until recently, those models assumed a warming world would increase precipitation and ice-melting, freshening surface waters and helping keep Antarctic sea ice relatively stable. That assumption no longer holds.

    Our findings show that the salinity of surface water is rising, the ocean’s layered structure is breaking down and sea ice is declining faster than expected. If we don’t update our scientific models, we risk being caught off guard by changes we could have prepared for. Indeed, the ultimate driver of the 2015 salinity increase remains uncertain, underscoring the need for scientists to revise their perspective on the Antarctic system and highlighting the urgency of further research.

    We need to keep watching, yet ongoing satellite and ocean monitoring is threatened by funding cuts. This research offers us an early warning signal, a planetary thermometer and a strategic tool for tracking a rapidly shifting climate. Without accurate, continuous data, it will be impossible to adapt to the changes in store.


    Alessandro Silvano is a Natural Environment Research Council (United Kingdom Research and Innovation) Independent Research Fellow.

    ref. ‘Completely unexpected’: Antarctic sea ice may be in terminal decline due to rising Southern Ocean salinity – https://theconversation.com/completely-unexpected-antarctic-sea-ice-may-be-in-terminal-decline-due-to-rising-southern-ocean-salinity-259743

    MIL OSI

  • MIL-OSI Submissions: Urban food gardens produce more than vegetables, they create bonds for young Capetonians – study

    Source: The Conversation – Africa – By Tinashe P. Kanosvamhira, Post-doctoral researcher, African Centre for Cities, University of Cape Town

    Urban farms like this one in Nouakchott, Mauritania, have many benefits. John Wessels/AFP via Getty Images

    Urban agriculture takes many forms, among them community, school or rooftop gardens, commercial urban farms, and hydroponic or aquaponic systems. These activities have been shown to promote sustainable cities in a number of ways. They enhance local food security and foster economic opportunities through small-scale farming initiatives. They also strengthen social cohesion by creating shared spaces for collaboration and learning.

    However, evidence from some African countries (and other parts of the world) shows that very few young people are getting involved in agriculture, whether in urban, peri-urban or rural areas. Studies from Kenya, Tanzania, Ethiopia and Nigeria show that people aged between 15 and 34 have very little interest in agriculture, whether as an educational pathway or career. They perceive farming as physically demanding, low-paying and lacking in prestige. Systemic barriers like limited access to land, capital and skills also hold young people back.

    South Africa has a higher rate of young people engaging in farming (24%) than elsewhere in sub-Saharan Africa. However, this number could be higher if young people better understood the benefits of a career in farming and if they had more support.

    In a recent study I explored youth-driven urban agriculture in Khayelitsha, a large urban area outside Cape Town whose residents are mostly Black, low-income earners.

    The young urban farmers I interviewed are using community gardens to grow more than vegetables. They’re also nurturing social connections, creating economic and business opportunities, and promoting environmental conservation. My findings highlight the transformative potential of youth-driven urban agriculture and how it can be a multifaceted response to urban challenges. It’s crucial that policy makers recognise the value of youth-led urban agriculture and support those doing the work.

    The research

    Khayelitsha is vibrant and bustling. But its approximately 400,000 residents have limited resources and often struggle to make a living.

    I interviewed members of two youth-led gardens. One has just two members; the other has six. All my interviewees were aged between 22 and 27. The relatively low number of interviewees is typical of qualitative research, where the emphasis is placed on depth rather than breadth. This approach allows researchers to obtain detailed, context-rich data from a small, focused group of participants.

    The first garden was founded in January 2020, just a few months before the pandemic struck. The founders wanted to tackle unemployment and food insecurity in their community. They hoped to create jobs for themselves and others, and to provide nutritional support, particularly for vulnerable groups like children with special needs.

    The second garden was established in 2014 by three childhood friends. They were inspired by one founder’s grandmother, who loved gardening. They also wanted to promote organic farming, teach people healthy eating habits, and create a self-reliant community.

    All of my interviewees were activists for food justice. This refers to efforts aimed at addressing systemic inequities in food production, distribution, and access, particularly for marginalised communities. It advocates for equitable access to nutritious, culturally appropriate food.

    One of the gardens, for instance, operates about 30 beds. It cultivates a variety of produce: beetroot, carrots, spinach, pumpkins, potatoes, radishes, peas, lettuce and herbs. Each month, 30% of its produce is donated to local community centres (the members were unable to say how many people benefit from this arrangement). The rest is sold to support the garden financially. Its paying clients include local restaurants and chefs, and members of the community. The garden also partners with schools, hospitals and other organisations to promote healthy eating and sustainable practices.

    The second garden, which is on land belonging to a local early childhood development centre, also focuses on feeding the community, as well as engaging in food justice activism.

    Skills, resilience and connections

    The gardens also help members to develop skills. Members gain practical knowledge about sustainable agriculture, marketing and entrepreneurship, all while managing operations and planning for growth.




    Read more:
    Healthy food is hard to come by in Cape Town’s poorer areas: how community gardens can fix that


    This hands-on experience instils a sense of responsibility and gives participants valuable skills they can apply in future careers or ventures. The founder of the first garden told me his skills empowered him to seek help from his own community rather than waiting for government intervention. He approached the management of an early childhood development centre in the community to request space on their land, and this was granted.

    Social connections have been essential to the gardens’ success. Bonding capital (close ties within their networks) and bridging capital (connections beyond their immediate community) have allowed them to strengthen relationships between themselves and civil society organisations. They’ve also been able to mobilise resources, as in the case of the first garden accessing community land.

    Additionally, the gardens foster community resilience. Members host workshops and events to educate residents about healthy eating, sustainable farming and environmental stewardship.

    By donating produce to local early childhood centres, they provide direct benefits to those most in need. These efforts have transformed the gardens into safe spaces for the community.

    Broader collaboration has also been key to the gardens’ success. For instance, the second garden has worked with global organisations and networks, like the Slow Food Youth Network, to share and gain knowledge about sustainable farming practices.

    Room for growth

    My findings highlight the need for targeted support for youth-driven urban agriculture initiatives. Policy and financial backing can enable these young gardeners to expand their efforts. This in turn will allow them to provide more food to their communities, create additional jobs, and empower more young people.

    At a policy level, the government could prioritise land access for urban agriculture projects, especially in under-served communities. Cities can foster an environment for youth initiatives to thrive by allocating spaces within their planning for urban farming.




    Read more:
    Africa’s megacities threatened by heat, floods and disease – urgent action is needed to start greening and adapt to climate change


    There’s also a need for educational programmes that emphasise the value of sustainable urban agriculture, and workshops and training on entrepreneurship and sustainable farming techniques. Community organising could further empower young farmers. Finally, continued collaboration with national and international food networks would help strengthen such initiatives.

    Tinashe P. Kanosvamhira does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Urban food gardens produce more than vegetables, they create bonds for young Capetonians – study – https://theconversation.com/urban-food-gardens-produce-more-than-vegetables-they-create-bonds-for-young-capetonians-study-243500

    MIL OSI

  • MIL-OSI Submissions: Education in Zimbabwe has lost its value: study asks young people how they feel about that

    Source: The Conversation – Africa – By Kristina Pikovskaia, Leverhulme Early Career Research Fellow, University of Edinburgh

    Zimbabwean students and graduates are actively seeking change to the education system. AFP via Getty Images

    Education, especially higher education, is a step towards adulthood and a foundation for the future.

    But what happens when education loses its value as a way to climb the social ladder? What if a degree is no guarantee of getting stable work, being able to provide for one’s family, or owning a house or car?

    This devaluing of higher education as a path to social mobility is a grim reality for young Zimbabweans. Over the past two decades the southern African country has been beset by economic, financial, political and social challenges.

    These crises have severely undermined the premises and promises of education, especially at a tertiary level. A recent survey by independent research organisation Afrobarometer found that 90% of young Zimbabweans had secondary or post-secondary education, compared with 83% of those aged between 36 and 55. But 41% of the youth were unemployed and looking for a job, as opposed to 26% of the older generation.

    The situation is so dire that it’s become a recurring theme in Zimdancehall, a popular music genre produced and consumed by young Zimbabweans. “Hustling” (attempts to create income-generating opportunities), informal livelihoods and young people’s collapsed dreams are recurrent topics in songs like Winky D’s Twenty Five, Junior Tatenda’s Kusvikira Rinhi and She Calaz’s Kurarama.

    I study the way people experience the informal economy in Zimbabwe and Zambia. In a recent study I explored the loss of education’s value as a social mobility tool in the Zimbabwean context.

    My research revealed how recent school and university graduates think about the role of education in their lives. My respondents felt let down by the fact that education no longer provided social mobility. They were disappointed that there was no longer a direct association between education and employment.

    However, the graduates I interviewed were not giving up. Some were working towards new qualifications, hoping and preparing for economic improvements. They also thought deeply about how the educational system could be improved. Many young people got involved in protests. These included actions by the Coalition of Unemployed Graduates and the #ThisGown protests, which addressed graduate unemployment issues. Some also took part in #ThisFlag and #Tajamuka protests, which had wider socio-economic and political agendas.

    Understanding history

    To understand the current status and state of education in Zimbabwe it’s important to look to the country’s history.

    Zimbabwe was colonised by the British from the late 19th century. The colonial education system was racialised. Education for white students was academic. For Black students, it was mostly practice-oriented, to create a pool of semi-skilled workers.

    In the 1930s education was instrumental in the formation of Zimbabwe’s Black middle class. A small number of Black graduates entered white collar jobs, using education as a social mobility tool. The educational system also opened up somewhat for women.

    Despite some university reforms during the 1950s, the system remained deeply racialised until the 1980s. That’s when the post-colonial government democratised the education system. Primary school enrolment went up by 242%, and 915% more students entered secondary school. In the 1990s nine more state universities were opened.

    However, worsening economic conditions throughout the 1990s put pressure on the system. A presidential commission in 1999 noted that secondary schools were producing graduates with non-marketable skills – they were too academic and focused on examinations. Students’ experiences, including at the university level, have worsened since then.

    The decline has been driven by systemic and institutional problems in primary and secondary education, like reduced government spending, teachers’ poor working conditions, political interference and brain drain. This, coupled with the collapse of the formal economic sector and a sharp drop in formal employment opportunities, severely undermined education’s social mobility function.

    ‘A key, but no door to open’

    My recent article was based on my wider doctoral research. For this, I studied economic informalisation in Zimbabwe’s capital city, Harare. It involved more than 120 interviews during eight months of in-country research.

    This particular paper builds on seven core interviews with recent school and university graduates in the informal sector, as well as former student leaders.

    Winky D’s “Twenty Five” is about young Zimbabweans’ grievances.

    Some noted that education had lost part of its value as it related to one’s progression in society. As one of my respondents, Ashlegh Pfunye (former secretary-general of the Zimbabwe National Students Union), described it, young people were told that education was a key to success – but there was no door to open.

    Some of my respondents were working in the informal sector, as vendors and small-scale producers. Some could not use their degrees to secure jobs, while others gave up their dreams of obtaining a university degree. Lisa, for example, was very upset about giving up on her dream to pursue post-secondary education and tried to re-adjust to her current circumstances:

    I used to dream that I will have my own office, now I dream that one day I’ll have my own shop.

    Those who had university qualifications stressed that, despite being unable to apply their degrees in the current circumstances, they kept going to school and getting more certification. This prepared them for future opportunities in the event of what everyone hoped for: economic improvement.

    Historical tensions

    Some of my interviewees, especially recent university graduates and activists, were looking for possible solutions – like changing the curriculum and approach to education that trains workers rather than producers and entrepreneurs. As Makomborero Haruzivishe, former secretary-general of the Zimbabwe National Students’ Union, said: “Our educational system was created to train human robots who would follow the instructions.”

    Entrepreneurship education is a popular approach in many countries to changing the structure of classic education. In the absence of employment opportunities for skilled graduates, it is supposed to provide them with the tools to create such opportunities for themselves and others.




    Read more:
    Nigeria’s universities need to revamp their entrepreneurship courses — they’re not meeting student needs


    In 2018, the government introduced what it calls the Education 5.0 framework. It has a strong entrepreneurship component. It’s too soon to say whether it will bear fruit. And it may be held back by history.

    For example, the introduction of the Education-with-Production model in the 1980s, which included practical subjects and vocational training, was met with resistance because it was seen as a return to the dual system.

    Because of Zimbabwe’s historically racialised education system, many students and parents favour the UK-designed Cambridge curriculum and traditional academic educational programmes. Zimbabwe has the highest number of entrants into the Cambridge International exam in Africa.

    Feeling let down

    The link between education and employment in Zimbabwe has many tensions: modernity and survival, academic pursuits and practicality, promises and reality. It’s clear from my study that graduates feel let down because the modernist promises of education have failed them.

    Parts of this research have been funded by the University of Oxford and the Leverhulme Trust (ECF-2022-055).

    ref. Education in Zimbabwe has lost its value: study asks young people how they feel about that – https://theconversation.com/education-in-zimbabwe-has-lost-its-value-study-asks-young-people-how-they-feel-about-that-244661

    MIL OSI

  • MIL-OSI Submissions: Food security in Africa: managing water will be vital in a rapidly growing region

    Source: The Conversation – Africa – By Christian Siderius, Senior researcher in water and food security, London School of Economics and Political Science

    Sub-Saharan Africa’s population is growing at 2.7% per year and is expected to reach two billion by the year 2050. The region’s urban population is growing even faster: it was at 533 million in 2023, a 3.85% increase from 2022.
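
    For readers who want the mechanics behind such growth rates, a small illustrative snippet (it assumes the quoted urban growth rate stays constant, which real demographic projections do not):

    ```python
    # Compound-growth arithmetic behind the figures above (illustrative only:
    # real projections allow growth rates to decline over time).
    urban_2023 = 533e6   # urban population in 2023
    rate = 0.0385        # annual growth rate quoted for 2022-23

    for year in (2030, 2040, 2050):
        projected = urban_2023 * (1 + rate) ** (year - 2023)
        print(f"{year}: {projected / 1e6:.0f} million (if 3.85%/yr were sustained)")
    ```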

    The need to feed this population will put pressure on land and water resources.

    I’m part of a group of researchers who have looked at whether regional food production would be sufficient to supply growing urban populations. By and large, we have found high levels of food self-sufficiency. But climate change could put a spanner in the works.

    We have also looked at the potential of local water conservation measures to help achieve food self-sufficiency in sub-Saharan Africa.

    Our study shows that measures such as better irrigation or water harvesting could boost food production while buffering the vagaries of weather.

    We found that ambitious – yet realistic – adoption of such measures increases food supply to cities and makes the region as a whole self-sufficient.

    A new model

    In large parts of eastern Africa, rainfall is relatively abundant and well distributed over the growing season, resulting in good yields. In future, however, the gap between water availability and crop water demand is expected to increase.

    We wanted to know whether sub-Saharan Africa would be able to increase its food production to meet future demand, in a changing climate. To do so, we built a novel foodshed model which simulates crop production using climate data and links urban demand to nearby food supply. Foodsheds have been defined as areas where supply matches demand. We assessed various water management measures that could buffer weather variability or increase production (or both). Understanding the potential of such measures can help mobilise and target much needed investments in Africa’s food system.
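
    The foodshed logic can be illustrated with a deliberately simplified greedy sketch, in which each city draws on the nearest surplus regions until its demand is met. All names and numbers below are invented; the actual model additionally simulates crop production from climate data:

    ```python
    # Toy illustration of the foodshed idea (not the authors' model):
    # each city sources food from the nearest producing regions in turn.
    import math

    regions = [  # (name, x, y, surplus in kilotonnes) -- invented numbers
        ("A", 0, 0, 120.0),
        ("B", 3, 1, 40.0),
        ("C", 6, 5, 200.0),
    ]
    cities = [("City1", 1, 1, 90.0), ("City2", 5, 4, 180.0)]  # (name, x, y, demand)

    supply = {name: s for name, _, _, s in regions}
    for cname, cx, cy, demand in cities:
        # Visit producing regions in order of distance from the city.
        for rname, rx, ry, _ in sorted(regions,
                                       key=lambda r: math.hypot(r[1] - cx, r[2] - cy)):
            take = min(demand, supply[rname])
            supply[rname] -= take
            demand -= take
            if take:
                print(f"{cname} sources {take:.0f} kt from region {rname}")
            if demand == 0:
                break
        if demand > 0:
            print(f"{cname} has unmet demand of {demand:.0f} kt")
    ```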

    Conserving water and growing more food

    First, we looked at whether regional food production was sufficient to supply growing urban populations.

    Combining large databases and crop simulations, we outlined the regions that food might come from for urban areas. Sub-Saharan Africa produces 85% of its overall crop food demand at present, according to our calculations, much of it in eastern Africa. Tanzania, Kenya, and even Uganda – if it were to use its food exports for domestic consumption – come close to being self-sufficient.

    Local exceptions are two large cities and their immediate surroundings: Mombasa, the largest port city in Kenya, and Arusha, an important tourism, diplomacy and conference hub in Tanzania.

    In future, a larger population will demand more food. At the same time, the gap between how much water is available and how much crops need is expected to increase. Higher water losses due to higher temperatures will not be fully compensated for by changes in rainfall, according to climate model projections. And even where rainfall is projected to increase, more extreme events are likely to affect crop production. It might rain either too much or too little, which will lead to higher year-to-year variability.

    Our study shows that local water conservation measures could buffer some of the projected negative impacts of climate change in eastern Africa. They could also boost food production.

    Water harvesting, soil conservation and measures that help water infiltrate the ground would slow runoff and store more water in the soil.

    Irrigation systems should be gradually upgraded to drip irrigation or sprinklers, improving efficiency and reducing water consumption. On rainfed areas, rainwater harvesting reservoirs should be installed; the water stored could be used for supplemental irrigation during dry periods. Soil moisture conservation measures should also be applied to prevent water evaporating from bare soil. Irrigation could offset occasional drought risk and so provide better financial stability, or create possibilities for planting a different crop, or a second or third one, further increasing production and income.

    Even the foodsheds of rapidly growing cities such as Dar es Salaam in Tanzania will be able to supply enough to meet demand from relatively short distances.

    Large scale expansion of irrigation onto new lands should, however, be considered carefully. Potential trade-offs with energy and tourism incomes must equally be considered.

    In an earlier study, assessing Tanzania’s ambitious formal irrigation expansion plans, we found that expansion without water conservation measures would pose considerable risk to hydropower production in the new Julius Nyerere Hydropower Project. It would also be a risk to river-dependent ecosystems and national parks and the substantial tourism income that they generate.




    Read more:
    Kenya needs to grow more food: a focus on how to irrigate its vast dry areas is key


    Why our findings matter

    Producing more food in Africa is essential to keep pace with population growth and changing diets. The alternative is an increasing dependence on imports from outside the continent. In 2021, the total value of Africa’s food imports was roughly US$100 billion. Imports can be a useful supplement to local production, but major food exporters in Europe and America are already producing at peak productivity. They have limited scope to increase area and production.

    Security concerns around global supply chains in the wake of the COVID-19 pandemic, the war in Ukraine, and broader geo-political realignment have also made countries wary of relying too much on others.

    Our study confirms the potential of Africa to supply much of the increased demand for food within the continent. We looked at all food crops, including regionally important ones such as cassava, beans and millet. Countries in eastern Africa play a pivotal role.

    Improved productivity due to measures proposed would reduce the need for more land elsewhere to grow crops, and limit conflicts related to land use. This is equally important for biodiversity and tourism.




    Read more:
    Diet and nutrition: how well Tanzanians eat depends largely on where they live


    Looking forward

    What we propose requires large investments. Exploring these costs against benefits in a case study in the Rufiji basin in Tanzania, we found that most water management measures would be cost-effective, but only when considering the overall impact of water conservation on agriculture, hydropower production and the riverine ecosystem.

    Not all farmers will be able to finance these measures themselves. The government and private sector have to provide incentives, reduce risks and increase access to affordable loans.

    Nor should these measures be taken in isolation. Other buffer mechanisms to support a stable food supply are increased storage facilities for food, diversified production, and stable and diversified trade relationships.

    With farmers innovating, the region’s infrastructure rapidly developing, and expanding urban areas becoming catalysts for growth, there is both the need and the scope to further invest in and improve the region’s food system.

    Christian Siderius received funding to conduct this research from the Netherlands Environmental Assessment Agency (PBL) for the Future Water Challenges project (E555182DA/5200000978/9) and in preparation of the 2021 United Nations Food Systems Summit. Other cited work was carried out under the Future Climate for Africa UMFULA project with financial support from the UK Natural Environment Research Council (grants NE/M020398/1 and NE/M020258) and the UK government’s former Department for International Development.

    Christian is a director and founder of Uncharted Waters Ltd, a not-for-profit climate-food system analytics company, a Visiting Senior Fellow at the Grantham Research Institute of the London School of Economics and Political Science in the United Kingdom, and a Visiting Senior Researcher in the Water Resources Management group at Wageningen University in the Netherlands.

    ref. Food security in Africa: managing water will be vital in a rapidly growing region – https://theconversation.com/food-security-in-africa-managing-water-will-be-vital-in-a-rapidly-growing-region-241281

    MIL OSI

  • MIL-OSI Submissions: Counting Uganda’s lions: we found that wildlife rangers do a better job than machines

    Source: The Conversation – Africa – By Alexander Richard Braczkowski, Research Fellow at the Centre for Planetary Health and Resilient Conservation Group, Griffith University

    Lions are a symbol of Africa’s last wild places. It’s a species central to many of the continent’s cultures and religions. But lion populations have reportedly declined over the past 50 years, especially in parts of west and east Africa.

    Concern over this decline has prompted large financial commitments to shore up numbers. These investments must go hand in hand with the critical work of closely monitoring lion populations. It’s important to understand how their numbers and their distribution respond to conservation actions such as anti-poaching, managing conflicts with cattle farmers, and securing protected areas.

    Many traditional methods used to count lions can produce unreliable results. And many existing estimates are based on assumptions about vast expanses which have not been surveyed.

    We are researchers with over 50 years of combined experience in conservation, big cat ecology, and the complexities of people and wildlife living together. We have long suspected wildlife tourism rangers operating within our study locations in Uganda could help us find lions in hard-to-reach places and map their distribution. After all, tourism rangers are government employees whose primary role is to guide tourists in observing and photographing wildlife daily. They have a deeper understanding of animal behaviour than most others.

    We therefore set out to study the efficacy of wildlife tourism rangers in collecting data necessary for estimating lion population numbers. We compared their performance to another commonly used field method to count big cats: remote infrared camera traps. We found that an approach led by wildlife rangers could be very useful in counting lions in many parts of their African range.

    Counting the lions of the Nile River

    As the morning sun rises on the banks of the River Nile in north-western Uganda, two wildlife rangers turn on their iPhones, preloaded with tracking software which will help them monitor where they have searched for lions. Lilian Namukose and Silva Musobozi head into the heart of Murchison Falls National Park. Here, their daily work is to locate and photograph the region’s largest predator: the African lion.

    The study area is the Nile Delta region (255km²) of the park, Uganda’s largest protected area. The region flanks the upper reaches of the Nile River, Africa’s longest waterway. It is a biodiversity hotspot but faces immense human pressures, from commercial oil extraction to wire snare poaching.




    Read more:
    The fast, furious, and brutally short life of an African male lion


    For these reasons it is critical to establish robust measures of how many lions still exist there, and develop monitoring schemes which will be long lasting.

    Over 76 sampling days we collaborated with Namukose and Musobozi, who drove 2,939km searching for lions. At the same time, we deployed infrared camera traps across 32 locations in the same study area. This allowed us to compare how the two methods performed head-to-head over exactly the same area and time period. We identified individual lions by their unique whisker spot patterns, producing data suitable for an advanced analysis called spatial capture-recapture modelling.

    At the end of our survey period the rangers detected 30 lions 102 times, generating an estimate of 13.91 individuals per 100km² with acceptable precision. By contrast, the infrared camera traps could not reliably identify lions. There were only two usable detections because of poor image quality.
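
    For intuition, the snippet below shows the simple non-spatial capture-recapture estimator (Lincoln-Petersen, with Chapman’s bias correction) that spatial capture-recapture builds on, using invented counts, followed by the abundance implied by the reported density over the 255km² study area:

    ```python
    # The study used spatial capture-recapture; this sketch shows the simpler,
    # non-spatial Lincoln-Petersen estimator it builds on (invented counts).
    n1 = 18   # lions identified on the first survey occasion
    n2 = 21   # lions identified on the second occasion
    m2 = 12   # lions identified on both occasions

    chapman = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1   # bias-corrected estimate
    print(f"Estimated abundance: {chapman:.1f} lions")   # -> ~31.2

    # Abundance implied by the reported density over the Nile Delta study area:
    density_per_100km2 = 13.91
    area_km2 = 255
    print(f"Implied population: {density_per_100km2 * area_km2 / 100:.1f} lions")  # ~35.5
    ```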

    One of the most important results of our surveys was that the ranger-led survey was 50% cheaper than running camera traps, and each detection by a camera trap was 100 times more expensive than a detection by a ranger.

    What rangers could mean for lion conservation across Africa

    Our survey of Murchison’s Nile Delta region showed us two key things. First, rangers’ intimate knowledge of lion behaviour (especially specific thickets, and regions of high lion activity) helped us achieve high lion detection rates. Second, using tourism rangers as lion monitors gives rangers an entry point into the conservation science field.

    This approach not only empowers rangers as active conservation stakeholders, but builds the local capacity that’s needed in many of the places where lions still roam. This science capacity is key if lion populations are to be monitored accurately and regularly (ideally yearly).

    This is all the more critical in key source sites of lions in Uganda which have experienced significant declines in recent years, especially Kidepo Valley and Queen Elizabeth National Park. The current lion population in Uganda is estimated at 291 individuals, far lower than many other places in east Africa (the Maasai Mara alone holds about 400 lions).

    Silva Musobozi, one of the rangers who did the fieldwork of the scientific study, adds:

    Rangers are arguably the closest group to wildlife on the ground and have good knowledge of animal behaviour. Through capacity building and training, rangers can be better incorporated into the scientific and management process.

    Nicholas Elliot of Wildlife Counts in Nairobi, Kenya, contributed to the research on which this article is based.

    Alexander Richard Braczkowski receives funding from Northern Arizona University and Griffith University.

    Duan Biggs is a member of the IUCN (World Conservation Union).

    Arjun M. Gopalaswamy and Peter Lindsey do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Counting Uganda’s lions: we found that wildlife rangers do a better job than machines – https://theconversation.com/counting-ugandas-lions-we-found-that-wildlife-rangers-do-a-better-job-than-machines-244206

    MIL OSI

  • MIL-OSI Submissions: Industrial scale farming is flawed: what ecologically-friendly farming practices could look like in Africa

    Source: The Conversation – Africa – By Rachel Wynberg, Professor and DST/NRF Bio-economy Research Chair, University of Cape Town

    African Perspectives on Agroecology is a new book with 33 contributions from academics, non-governmental organisations, farmer organisations and policy makers. It is free to download, and reviewers have described it as a “must read for all who care about the future of Africa and its people”. The book outlines how agroecology, which brings ecological principles into farming practices and food systems, can solve food shortages and environmental damage caused by mass, commercial farming. We asked the book’s editor and the South African Research Chair on Environmental and Social Dimensions of the Bio-economy, Rachel Wynberg, to set out why this book is so important.

    What’s wrong with the current system of food production?

    The dominant model of modern agriculture in the world is based on monoculture, where one crop is grown across large areas using chemical fertilisers and pesticides. It relies on seeds that are owned by big corporations and are often subsidised by governments at a high cost.

    The book outlines how this approach to growing food is flawed. Firstly, it carries major costs. According to the Food and Agriculture Organisation’s State of Food and Agriculture 2024 report, diet-related disease, hunger, malnutrition and other hidden costs of the food system amount to about US$8 trillion a year. Countries in the global south carry much of the burden.

    Secondly, the current approach is a major contributor to greenhouse gas emissions. This happens through deforestation and land degradation, livestock and fertiliser emissions, energy use, and the globalised nature of agriculture. Food is often produced far from where it is consumed.

    Huge farmlands also wipe out biodiversity and degrade one third of all soils globally. Industrial agriculture has many negative impacts on ecosystem health, livestock and human wellbeing.

    What’s the alternative?

    Agroecology is a good alternative. It uses natural processes such as fixing nitrogen in the soil by planting legumes, and conserving natural habitat to encourage beneficial predators that keep pests in check. It includes planting a diversity of crops, rather than just one, to prevent pest outbreaks, and avoiding synthetic pesticides and herbicides.

    Agroecology places importance on building natural, local, economically viable and socially just food systems. It aims to support farmers and rural communities.




    Read more:
    Africa’s worsening food crisis – it’s time for an agricultural revolution


    As a result, it fosters more equal social relations and improves food and nutritional security.

    Agroecology also recognises local ways of knowing and doing things, and respects the rights of Indigenous people to seeds and plants that they have planted for many generations. Transforming research and education is an important part of agroecology.

    What are the advantages?

    Agroecology increases the capacity of farming systems to adapt to climate change. Studies show how agroecology increases crop yields, regulates water and nutrients, increases agricultural diversity and reduces pests.

    It gives farmers more choice about what to grow and eat. This enables them to produce a wider variety of healthy food.

    Can agroecology grow enough food for everyone?

    Agroecology can be scaled up through:

    • farmer-to-farmer knowledge exchanges

    • creating professional networks of agroecology practitioners

    • local seed-saving networks or groups that share different seeds that are adapted to local conditions




    Read more:
    Indigenous plants and food security: a South African case study


    • solidarity networks: community-based groups or movements that aim to support each other, cooperate and take collective action

    • the revival and use of indigenous and under-utilised crops and livestock breeds such as pearl and finger millet, sorghum and Nguni cattle

    • linking producers with consumers and markets.

    What needs to be done?

    Urgent actions are needed, especially in the climate “hotspot” of sub-Saharan Africa. Agroecology needs supportive policies and funding. South Africa has had a draft agroecology strategy for more than 10 years but this has not yet been adopted.

    Development aid for farmers often undermines agroecology. It typically promotes a “new” African Green Revolution that uses hybrid seeds, agrochemicals, new technologies, and links to markets. However, hybrid seed, especially genetically modified seed, can contaminate local seed systems that are better adapted to local conditions.

    The book illustrates what can go wrong. Maize is said to have “modernised” development and promoted foreign investment in Africa. But it has displaced indigenous crops such as sorghum and millet which are more nutritious and drought-resistant.




    Read more:
    Amazing ting: South Africa must reinvigorate sorghum as a key food before it’s lost


    Subsidy programmes and state support for hybrid maize also back multinational agrochemical and seed companies.

    Governments, industry and those funding research, innovation and consumer marketing must actively move away from a maize culture and invest in a bigger range of crops.

    For millions of smallholder African farmers, there is a deep understanding of how animals, plants, soil, people and weather patterns are connected to and affect one another. Agricultural development programmes, chemical fertilisers, pesticides, and herbicides, and genetically modified seeds disrupt these relationships. They can devalue local knowledge and skills in favour of “expert”-led innovations. This means that farmers lose their capacity to understand their environment and their ability to react appropriately.




    Read more:
    Agriculture training in South Africa badly needs an overhaul. Here are some ideas


    Lastly, agriculture research and training needs to be rethought. Research and development is now mostly shaped by market-led approaches that favour crops grown by large-scale commercial farmers. A public sector research and development agenda for agroecology needs to be developed. It should be based both on scientific knowledge as well as traditional and local knowledge.

    What would help?

    Agricultural research should be co-created by everyone involved. Farmer-led research and innovation can support food system transformations.

    New ways of seeing and doing research are evolving. Western scientific and traditional knowledges are mixing in ways that can transform farming. Our book points out that social movements are emerging as a powerful force for change.

    We hope to support these efforts through a new four-year, European Union-supported initiative to establish a research and training network: the Research for Agroecology Network in Southern Africa. New agroecology knowledge networks in South Africa and Zimbabwe have also been started to coordinate research and develop curricula.

    Rachel Wynberg’s research is supported by a grant from the Seed and Knowledge Initiative and South Africa’s National Research Foundation. She is a Board member of the NGO Biowatch South Africa.

    ref. Industrial scale farming is flawed: what ecologically-friendly farming practices could look like in Africa – https://theconversation.com/industrial-scale-farming-is-flawed-what-ecologically-friendly-farming-practices-could-look-like-in-africa-245579

    MIL OSI

  • MIL-OSI Submissions: Nigeria’s plastic bottle collectors turn waste into wealth: survey sheds light on their motivation

    Source: The Conversation – Africa – By Solaja Mayowa Oludele, Lecturer, Olabisi Onabanjo University

    Plastic waste in Nigeria presents a dual challenge: cleaning up environmental pollution, and tapping into its economic potential.

    Many countries worldwide face similar challenges. India, for one, has chosen policies that give producers of plastic the responsibility to manage their waste. Rwanda has banned single-use plastic and promoted recycling initiatives led by communities.

    These approaches show it’s possible to address plastic waste issues while fostering economic opportunities.




    Read more:
    Nigeria’s plastic ban: why it’s good and how it can work


    In Nigeria, informal collectors of plastic bottle waste are central to achieving both of these goals. They turn waste into monetary value.

    Previous research has highlighted the environmental and economic benefits of collecting plastic bottle waste. There’s been less attention on what shapes perceptions of waste collection as a business, particularly in Nigeria.

    This article explores that gap, looking at the socio-cultural, economic and environmental influences on those perceptions.

    I am a researcher in the areas of plastic waste management, environmental governance and sustainable development. My work includes studying homes made from recycled plastic bottles in sustainable community-based housing projects.

    Here I’ll be drawing from an exploratory survey conducted in the Ijebu area of Ogun State, Nigeria. Using a questionnaire, we surveyed 86 participants who had at least five years of experience in the plastic waste industry.

    The study identified factors like education, family size, religion, gender, age, and economic dynamics as relevant to participation in the business of plastic bottle waste collection.

    Understanding these influences might help the government to target policies.




    Read more:
    Nigeria is the world’s 2nd biggest plastic polluter: expert insights into the crisis


    Education level and information

    Our study found that participants with higher education levels better understood the economic benefits of plastic waste collection as a systematic form of business. The less educated participants viewed waste collection more as a hand-to-mouth way of earning a living.

    Education programmes built into waste management campaigns could improve recognition of waste collection as a structured and profitable business opportunity and develop a business-like culture among the collectors.

    Parenthood, family size and financial obligations

    Family size was a factor affecting perceptions of plastic bottle waste collection as a business. People with large families saw waste collection as a feasible way to provide food, housing, education and other essentials.

    However, the association of waste collection with income instability highlights the need to formalise and stabilise the sector. Waste collection must be made into a sustainable and reliable business model.

    Religion and cultural norms

    Religion and cultural beliefs emerged as influences from our survey. This was evident in the responses of people who followed African traditional religions and Islam.

    These respondents viewed waste collection as financially feasible, aligning with religious teachings that emphasise resource management and stewardship. For example, Islamic teachings on israf (avoiding wastefulness) and zakat (charity) promote efficient resource use and economic activities that benefit communities.

    Similarly, African traditional religion often emphasises communal responsibility and the sustainable use of resources. These religious principles underscore the cultural acceptance of waste collection as both a practical and a morally guided economic activity.

    Other cultural norms, such as the value placed on communal responsibility and cooperation, also influenced attitudes towards waste collection. In communities with a strong tradition of collective action, where unity and mutual support are highly valued, waste collection is often viewed as a collaborative effort.

    These cultural norms reinforce the idea that waste collection is not just an individual task, but a collective duty that benefits the entire community.




    Read more:
    Informal waste management in Lagos is big business: policies need to support the trade


    Gender dynamics

    Gender plays a role in perception and practice in waste collection. Our survey found that male participants were more likely than female participants to perceive this activity as a business.

    Although constrained by a lack of access to resources, women are involved in sorting and marketing reusable items. Measures like microfinance could increase women’s engagement and business opportunities.

    This would empower women and make waste collection a more inclusive and sustainable business.

    Age and desire to be an entrepreneur

    Perceptions were influenced by age in our study. Younger individuals, up to 14 years old, viewed plastic bottle waste collection as a gateway to employment. Adults aged 33-38 used their experience to get better returns on the business.

    This age-based distinction suggests that different stages of life bring unique motivations and approaches to waste collection.

    Policy actions that support entrepreneurship at various life stages can promote long-term engagement in the industry. This will help formalise waste collection as a sustainable and profitable business.

    Economic and social factors

    Income opportunities affected participants’ experiences more than social factors. Oftentimes, this determined how long they stayed in the business. Those earning more were likelier to reinvest and grow, while lower earnings often led to disengagement or exit. This highlights the importance of financial incentives in shaping waste collection practices.

    Social connections also play a role in fostering collaboration. They facilitate teamwork and the exchange of ideas, and create a sense of shared purpose and collective achievement among participants.

    Strengthening these economic and social bonds can formalise plastic bottle waste collection, making it a more efficient and profitable business.




    Read more:
    Waste disposal in Nigeria is a mess: how Lagos can take the lead in sorting and recycling


    Looking ahead

    The study has significant application to Nigeria’s waste management industry. Building education components into waste management programmes will improve collectors’ business skills.

    Well-coordinated intervention strategies can remove cultural and gender-specific barriers. For instance, cooperatives and microfinance may make waste collection more financially appealing.

    Strategies can also draw on cultural norms to increase community acceptance of waste collection and make it more inclusive.

    Samuel Oludare Awobona, a doctoral student at Osun State University, Osogbo, Nigeria, contributed to this research.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Nigeria’s plastic bottle collectors turn waste into wealth: survey sheds light on their motivation – https://theconversation.com/nigerias-plastic-bottle-collectors-turn-waste-into-wealth-survey-sheds-light-on-their-motivation-247819

    MIL OSI

  • MIL-OSI Submissions: Cyberattacks: how companies can communicate effectively after being hit

    Source: The Conversation – France – By Paolo Antonetti, Professeur, EDHEC Business School

    In its latest annual publication, insurance group Hiscox surveyed more than 2,000 cybersecurity managers in eight countries, including France. Two-thirds of the companies in the survey reported having been the victim of a cyberattack between mid-August 2023 and September 2024, a 15% increase over the previous period. In terms of potential financial losses, Statista estimated that cyberattacks cost France up to €122 billion in 2024, compared with €89 billion in 2023 – a 37% rise.
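
    As a quick consistency check of that year-on-year rise (illustrative Python using the figures above):

    ```python
    # Quick check of the year-on-year rise quoted above.
    cost_2023 = 89e9    # euros
    cost_2024 = 122e9   # euros

    rise = cost_2024 / cost_2023 - 1
    print(f"Rise: {rise:.0%}")   # -> 37%
    ```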


    The main forms of cyberattacks on French businesses, the recommendations for how companies can protect themselves, and the technical and legal responses they can adopt are well documented.

    However, much less is known about appropriate communications and public relations responses to cyberattacks. The issues at stake are critical. When a company is the target of a cyberattack, should it systematically accept responsibility, or can it instead claim to be a victim to protect its reputation? A wrong answer can aggravate the situation and undermine the confidence of customers and investors.

    Positioning as a victim

    Our recent research questions the assumption that accepting causal responsibility should be the norm after a cyberattack: we show that positioning oneself as a victim can be more effective in limiting damage to one’s image – provided claims of victimhood are deployed intelligently.

    There is evidence that firms need a strategy to present themselves effectively as victims of cybercriminals. Some firms, such as T-Mobile and Equifax, have in the past paid compensation to consumers while refusing to accept any responsibility, essentially presenting themselves as victims.

    Similarly, the large French telecommunications operator Free presented itself as a victim when communicating about the large-scale cyberattack that affected its operations last October, which may have had an impact on its image. The UK’s TalkTalk initially framed itself as a victim of a cybercrime but was later criticized for its inadequate security measures.

    Victimhood and sympathy

    Clumsily declaring itself either the sole entity to blame or the sole victim of a cyberattack – the latter being what interests us here – can be risky and backfire on a company, damaging its credibility rather than protecting its reputation.

    When companies present themselves as victims of cybercrime, they can elicit sympathy from stakeholders. People tend to be more compassionate toward businesses that depict themselves as wronged rather than those that deny responsibility or shift blame. In essence, this strategy frames the organization as a target of external forces beyond its control, rather than as negligent or incompetent. It leverages a fundamental social norm – people’s instinctive tendency to support those they see as victims.

    But claims of victimhood must align with public expectations and the specific context of the breach. They should not be about shirking responsibility, but about acknowledging harm in a way that fosters understanding and trust. The following approaches and choices can help.

    • align with public perception

    The reactions of stakeholders often depend on their understanding of the situation. If the attack is perceived as an external and malicious act, it is crucial for a company to adopt a consistent stance by emphasizing that it itself has been a victim. But if internal negligence is proven, claiming victim status could be counterproductive. The swiftness of a company’s response, the level of transparency and the relative stance taken are all part of a good strategy.

    • express support for stakeholders

    Adopting a position of victimhood does not mean denying all responsibility or minimizing the consequences of an attack. The company must show that it takes the situation seriously by expressing empathy and commitment to affected stakeholders. It must pay particular attention to those affected inside the organization: a claim of victimhood should be part of an apology or a message expressing concern. An effective message must be sincere and oriented toward concrete solutions.

    • consider reputation

    We find that it is easier for companies to claim victimhood persuasively if they are perceived as virtuous. This reputation can be due to a positive track record in terms of corporate social responsibility or because they are a not-for-profit institution (e.g. a library, a university or a hospital). Virtuous victims generate sympathy and empathy, and this is also reflected after a cyberattack.

    • highlight the harmfulness and sophistication of the attack

    The results of our study also show that public acceptance of victim status is more effective when the cyberattack is perceived to be the work of highly competent malicious actors. It is also important for a company to persuade the public that the attack harmed the company, while keeping the main focus of the response on the public.

    • don’t complain

    It is essential to distinguish between legitimate claims of victim status and communication that could be perceived as an attempt to exonerate oneself. An overly plaintive tone could undermine a company’s credibility. The approach should be factual and constructive, focusing on the measures taken to overcome the crisis.

    • test reactions before communicating widely

    Companies’ responses to a cyberattack can vary depending on the context and the public. It is best to assess different approaches before embarking on large-scale communication. This can be done through internal tests, focus groups or targeted surveys. Subtle differences in the situation can cause important shifts in how the public perceives the breach and what the best response might be.

    Our study sheds light on a shift in public expectations about crisis management: in the age of ubiquitous cybercrime, responsibilities are often shared. Poorly managed communication after a cyberattack can lead to a lasting loss of trust and expose a company to increased legal risks. Claiming victim status effectively, with an empathetic and transparent approach, can help mitigate the impact of the crisis and preserve the organization’s reputation.


    This article was written with Ilaria Baghi (University of Modena and Reggio Emilia).

    Paolo Antonetti does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.

    ref. Cyberattacks: how companies can communicate effectively after being hit – https://theconversation.com/cyberattacks-how-companies-can-communicate-effectively-after-being-hit-255061

    MIL OSI

  • MIL-OSI Submissions: From the marriage contract to breaking the glass under the chuppah, many Jewish couples adapt their weddings to celebrate gender equality

    Source: The Conversation – USA (3) – By Samira Mehta, Associate Professor of Women and Gender Studies & Jewish Studies, University of Colorado Boulder

    The ketubah is a binding document in Jewish law that traditionally spells out a groom’s responsibilities toward his wife − but that many couples adapt to be more egalitarian. PowerSiege/iStock via Getty Images Plus

    Traditional Jewish weddings share one key aspect with traditional Christian weddings. Historically, the ceremony was essentially a transfer of property: A woman went from being the responsibility of her father to being the responsibility of her husband.

    That may not be the first thing Americans associate with weddings today, but it lives on in rituals and vows. Think, in a traditional Christian wedding, of a bride promising “to obey” her husband, or being “given away” by her father after he walks her down the aisle.

    Feminism has changed some aspects of the Christian wedding. More egalitarian or feminist couples, for example, might have the bride be “given away” by both her parents, or have both the bride and groom escorted in by parents. Others skip the “giving” altogether. Queer couples, too, have reimagined the wedding ceremony.

    Mara Mooiweer, left, and Elisheva Dan dance during their socially distanced wedding in Brookline, Mass., during the COVID-19 pandemic.
    Jessica Rinaldi/The Boston Globe via Getty Images

    During research for my book “Beyond Chrismukkah,” about Christian-Jewish interfaith families, many interviewees wound up talking about their weddings and the rituals that they selected or innovated for the day to reflect their cultural background. Some of them had also designed their ceremonies to reflect feminism and marriage equality – something that the interfaith weddings had in common with many weddings where both members of the couple were Jewish.

    These values have transformed many Jewish couples’ weddings, just as they have transformed the Christian wedding. Some Jewish couples make many changes, while some make none. And like every faith, Judaism has lots of internal diversity – not all traditional Jewish weddings look the same.

    Contracts and covenants

    Perhaps one of the most important places where feminism and marriage equality have reshaped traditions is in the “ketubah,” or Jewish marriage contract.

    A traditional ketubah is a simple legal document in Hebrew or Aramaic, a related ancient language. Two witnesses sign the agreement, which states that the groom has acquired the bride. However, the ketubah is also sometimes framed as a tool to protect women. The document stipulates the husband’s responsibility to provide for his wife and confirms what he should pay her in case of divorce. Traditional ketubot – the plural of ketubah – did not discuss love, God or intentions for the marriage.

    A groom signs the ketubah as witnesses sit beside him in Jerusalem, Israel, in 2014.
    Dan Porges/Getty Images

    Contemporary ketubot in more liberal branches of Judaism, whether between opposite- or same-sex couples, are usually much more egalitarian documents that reflect the home and the marriage that the couple want to create. Some couples adapt the Aramaic text; others keep the Aramaic and pair it with a text in the language they speak every day, describing their intentions for their marriage.

    Rather than being simple, printed documents, contemporary ketubot are often beautiful pieces of art, made to hang in a place of prominence in the newlyweds’ home. Sometimes the art makes references to traditional Jewish symbols, such as a pomegranate for fertility and love. Other times, the artist works with the couple to personalize their decorations with images and symbols that are meaningful to them.

    Contemporary couples will often also use their ketubah to address an inherent tension in Jewish marriage. Jewish law gives men much more freedom to divorce than it gives women. Because women cannot generally initiate divorce, they can end up as “agunot,” which literally means “chained”: women whose husbands have refused to grant them a religious divorce. Even if the couple have been divorced in secular court, an “agunah” cannot, according to Jewish law, remarry in a religious ceremony.

    Contemporary ketubot will sometimes make a note that, while the couple hope to remain married until death, if the marriage deteriorates, the husband agrees to grant a divorce if certain conditions are met. This prevents women from being held hostage in unhappy marriages.

    Other couples eschew the ketubah altogether in favor of a new type of document called a “brit ahuvim,” or covenant of lovers. These documents are egalitarian agreements between couples. The brit ahuvim was developed by Rachel Adler, a feminist rabbi with a deep knowledge of Jewish law, and is grounded in ancient Jewish laws for business partnerships between equals. That said, many Jews, including some feminists, do not see the brit ahuvim as equal in status to a ketubah.

    Two female ducks are depicted on the ketubah hanging in the sunroom in Lennie Gerber and Pearl Berlin’s home in High Point, N.C.
    AP Photo/Allen G. Breed

    Building together

    Beyond the ketubah, there are any number of other changes that couples make to symbolize their hopes for an egalitarian marriage.

    Jewish ceremonies often take place under a canopy called the chuppah, which symbolizes the home that the couple create together. In a traditional Jewish wedding, the bride circles the groom three or seven times before entering the chuppah. This represents both her protection of their home and that the groom is now her priority.

    Many couples today omit this custom, because they feel it makes the bride subservient to the groom. Others keep the circling but reinterpret it: In circling the groom, the bride actively creates their home, an act of empowerment. Other egalitarian couples, regardless of their genders, share the act of circling: Each spouse circles three times, and then the pair circle once together.

    In traditional Jewish weddings, like in traditional Christian weddings, the groom gives his bride a ring to symbolize his commitment to her – and perhaps to mark her as a married woman. Many contemporary Jewish couples exchange two rings: both partners offering a gift to mark their marriage and presenting a symbol of their union to the world. While some see this shift as an adaptation to American culture, realistically, the dual-ring ceremony is a relatively new development in both American Christian and American Jewish marriage ceremonies.

    Finally, Jewish weddings traditionally end when the groom stomps on and breaks a glass, and the entire crowd yells “Mazel tov” to congratulate them. People debate the symbolism of the broken glass. Some say that it reminds us that life contains both joy and sorrow, or that it is a reminder of a foundational crisis in Jewish history: the destruction of the Second Temple in Jerusalem in 70 C.E. Others say that it is a reminder that life is fragile, or that marriage, unlike the glass, is an unbreakable covenant.

    Yulia Tagil and Stas Granin celebrate their union on July 25, 2010, at a square in Tel Aviv. The couple held a public wedding to protest Israeli marriage guidelines set by the chief rabbinate.
    Uriel Sinai/Getty Images

    Regardless of what it means, some contemporary couples both step on glasses, or have one partner place their foot on top of the other’s so that the newlyweds can break the glass together. The couple symbolize their commitment to equality – and both get to do a fun wedding custom.

    There are many other innovations in contemporary Jewish weddings that have much less to do with feminism and egalitarianism, such as personalized wedding canopies or wedding programs. But these key changes represent how the wedding ceremony itself has become more egalitarian in response to both feminism and marriage equality.

    Samira Mehta receives funding from the Henry Luce Foundation for work on Jews of Color.

    ref. From the marriage contract to breaking the glass under the chuppah, many Jewish couples adapt their weddings to celebrate gender equality – https://theconversation.com/from-the-marriage-contract-to-breaking-the-glass-under-the-chuppah-many-jewish-couples-adapt-their-weddings-to-celebrate-gender-equality-229084

    MIL OSI

  • MIL-OSI Submissions: Who’s the most American? Psychological studies show that many people are biased and think it’s a white English speaker

    Source: The Conversation – USA (2) – By Katherine Kinzler, Professor of Psychology, University of Chicago

    Some people have a narrow view of who is American. The Good Brigade/DigitalVision via Getty Images

    In the U.S. and elsewhere, nationality tends to be defined by a set of legal parameters. This may involve birthplace, parental citizenship or procedures for naturalization.

    Yet in many Americans’ minds these objective notions of citizenship are a little fuzzy, as social and developmental psychologists like me have documented. Psychologically, some people may just seem a little more American than others, based on factors such as race, ethnicity or language.

    Reinforced by identity politics, this results in different ideas about who is welcome, who is tolerated and who is made to feel unwelcome altogether.

    How race affects who belongs

    Many people who explicitly endorse egalitarian ideals, such as the notion that all Americans are deserving of the rights of citizenship regardless of race, still implicitly harbor prejudices over who’s “really” American.

    In a classic 2005 study, American adults across racial groups were fastest to associate the concept of “American” with white people. White, Black and Asian American adults were asked whether they endorse equality for all citizens. They were then presented with an implicit association test in which participants matched different faces with the categories “American” or “foreign.” They were told that every face was a U.S. citizen.

    White and Asian participants responded most quickly in matching the white faces with “American,” even when they initially expressed egalitarian values. Black Americans implicitly saw Black and white faces as equally American – though they too implicitly viewed Asian faces as being less American.
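
    To see how such implicit associations are typically quantified, here is a toy sketch of an IAT-style score: response times on trials pairing “American” with white faces are compared with trials pairing “American” with Asian faces, and the gap is scaled by overall variability. The numbers and the heavily simplified scoring are invented for illustration; they are not data or code from the studies described.

    import statistics

    # Hypothetical reaction times (milliseconds) for one participant.
    congruent = [612, 580, 645, 601, 633]    # "American" paired with white faces
    incongruent = [702, 688, 731, 695, 719]  # "American" paired with Asian faces

    # Simplified IAT-style D score: mean latency difference divided by
    # the standard deviation of all trials. A positive value means the
    # participant matched "American" faster with white faces.
    all_trials = congruent + incongruent
    d_score = (statistics.mean(incongruent) - statistics.mean(congruent)) \
        / statistics.pstdev(all_trials)

    print(f"D = {d_score:.2f}")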

    Similarly, in a 2010 study, several groups of American adults implicitly considered British actress Kate Winslet to be more American than U.S.-born Lucy Liu – even though they were aware of their actual nationalities.

    Importantly, the development of prejudice can even include feelings that disadvantage one’s own group. This can be seen when Asian Americans who took part in the studies found white faces to be more American than Asian faces. A related 2010 study found that Hispanic participants were also more likely to associate whiteness with “Americanness.”

    Who’s the American?
    AP Photo

    Language and nationality

    These biased views of nationality begin at a young age – and spoken language can often be a primary identifier of who is in which group, as I show in my book “How You Say It.”

    Although the U.S. traditionally has not had a national language, many Americans feel that English is critical to being a “true American.” And the president recently released an executive order claiming to designate English as the official language.

    In a 2017 study conducted by my research team and led by psychologist Jasmine DeJesus, we gave children a simple task: After viewing a series of faces that varied in skin color and listening to those people speak, children were asked to guess their nationality. The faces were either white- or Asian-looking and spoke either English or Korean. “Is this person American or Korean?” we asked.

    We recruited three groups of children for the study: white American children who spoke only English, children in South Korea who spoke only Korean, and Korean American children who spoke both languages. The ages of the children were either 5-6 or 9-10.

    The vast majority of the younger monolingual children identified nationality with language, describing English speakers as American and Korean speakers as Korean – even though both groups were divided equally between people who looked white or Asian.

    As for the younger bilingual children, they had parents whose first language was Korean, not English, and who lived in the United States. Yet, just like the monolingual children, they thought that the English speakers, and not the Korean speakers, were the Americans.

    As they age, however, children increasingly view racial characteristics as an integral part of nationality. By the age of 9, we found that children were considering the white English speakers to be the most American, compared with Korean speakers who looked white or English speakers who looked Asian.

    Interestingly, this effect was more pronounced among the older children we recruited in South Korea.

    Deep roots

    So it seems that for children and adults alike, assessments of what it means to be American hinge on certain traits that have nothing to do with the actual legal requirements for citizenship. Neither whiteness nor fluency in English is a requirement to become American.

    And this bias has consequences. Research has found that the degree to which people link whiteness with Americanness is related to their discriminatory behaviors in hiring or questioning others’ loyalty.

    That we find these biases in children does not mean they are in any way absolute. We know that children begin to pick up on these types of biased cultural cues and values at a young age. It does mean, however, that these biases have deep roots in our psychology.

    Understanding that biases exist may make it easier to correct them. So Americans celebrating the Fourth of July perhaps should ponder what it means to be an American – and whether social biases distort their beliefs about who belongs.

    This is an updated version of an article originally published on July 2, 2020.

    Katherine Kinzler receives funding from the National Science Foundation.

    ref. Who’s the most American? Psychological studies show that many people are biased and think it’s a white English speaker – https://theconversation.com/whos-the-most-american-psychological-studies-show-that-many-people-are-biased-and-think-its-a-white-english-speaker-256418

    MIL OSI

  • MIL-OSI Submissions: 1 in 3 Florida third graders have untreated cavities – how parents can protect their children’s teeth

    Source: The Conversation – USA (3) – By Olga Ensz, Clinical Assistant Professor of Community Dentistry, University of Florida

    Many Florida children lack access to routine dental care. Lu ShaoJi/Moment via Getty Images

    “He hides his smile in every school photo,” Jayden’s mother told me, holding up a picture of her 6-year-old son.

    I first met Jayden – not his real name – as a patient at the University of Florida community dental outreach program in Gainesville, Florida. Jayden had visible cavities on his front teeth – dark spots that had become the target of teasing and bullying by classmates. The pain had become so severe that he began missing school. His family, living in a rural part of north Florida, had spent months trying to find a dentist who accepted Medicaid.

    In the meantime, Jayden stopped smiling.

    As a dental public health professional working in community dental outreach settings, I’ve seen firsthand how children across the state face significant barriers to achieving good oral health. Despite being largely preventable, tooth decay remains the most common chronic disease among children in the U.S., and Florida is no exception.

    Pediatric dental health in Florida

    Untreated dental problems can lead to pain, infection, difficulty eating or sleeping, and even affect a child’s ability to concentrate and learn. Poor oral health has also been linked to broader health issues such as heart disease.

    According to the most recent data available from the Florida Department of Health, nearly 1 in 3 third graders in the Sunshine State had untreated tooth decay – that is, cavities – during the 2021–2022 school year. That’s almost double the national average of 17% of children ages 6-9 with untreated tooth decay and underscores the severity of the issue in Florida.

    In addition, only 37% of Florida third graders had dental sealants. These thin coatings applied to the chewing surfaces of molars are proven to prevent up to 80% of cavities. Nationwide, 51.4% of kids have this cost-effective treatment.

    The most recent data available from the 2017-2018 school year shows that 24% of children ages 3-6 in Florida’s Head Start program, which provides free health and education for low-income families with young children, had untreated tooth decay. By comparison, the U.S. Centers for Disease Control and Prevention found that 11% of U.S. children ages 2-5 had untreated decay.

    These numbers represent children like Jayden, whose pain and missed school days are preventable.

    A 2023 report found that Florida children are increasingly visiting emergency rooms for nontraumatic dental conditions. Besides being costly and stressful for families, these visits generally provide only temporary relief. Emergency departments simply aren’t equipped to offer dental care that addresses the root problem.

    Slipping through the cracks

    Florida ranks among the worst states in the U.S. for dental care access, with over 5.9 million residents living in dental care health professional shortage areas. In fact, 65 of Florida’s 67 counties face shortages of dental professionals, with some areas reporting just 6.6 dentists per 100,000 people – far below the national average of 60.4.

    This lack of access to care is compounded by poverty and insurance limitations.

    More than 2 million Florida children are enrolled in Medicaid, but only 18% of Florida dentists – about 2,500 in total – accept it. And even families with private insurance often face high out-of-pocket costs, making essential dental care unaffordable for some. Delaying routine dental visits can allow minor issues to worsen over time, ultimately requiring more complex and costly treatment.

    As a result, Florida ranks 43rd out of 50 states in the percentage of children receiving dental care in the past year.

    Lack of awareness is also a problem. Research shows that many parents don’t realize their children should see a dentist by their first birthday, and that baby teeth matter just as much as adult teeth.

    Prevention works

    Historically, community water fluoridation has been one of the most effective public health strategies to reduce children’s tooth decay. While fluoridation is not meant to be a standalone prevention method, multiple studies have shown that it helps to prevent cavities in both children and adults. As recently as May 2024, the CDC supported the safety of this strategy.

    However, a new Florida law, signed by Gov. Ron DeSantis in May 2025 and going into effect on July 1, now prohibits local governments from adding fluoride to public drinking water. This makes other preventive treatments even more essential.

    Fluoride varnish, recommended by pediatric and dental associations, is a topical treatment that should be applied every 3-6 months to reduce the risk of tooth decay.

    When a child has just the beginnings of a cavity, silver diamine fluoride is a noninvasive liquid treatment that can stop it from progressing. This is especially beneficial for young children or those with limited access to care.

    These highly effective, evidence-based treatments are safe and cost-effective, and they can be delivered in schools, medical offices and clinics.

    Creating a fun brushing routine can help your child maintain a healthy smile.
    PeopleImages/iStock via Getty Images Plus

    Keeping your kids’ teeth healthy

    Here are some steps parents can take right now to protect their child’s dental health:

    • Schedule regular dental visits, starting by age 1. Children should see a dentist by their first birthday or within six months of their first tooth. After that, annual visits help catch problems early, when treatment is easier and less expensive.

    • For families in areas with few dental providers, parents can ask their child’s pediatrician for referrals, check state Medicaid websites or use the American Association of Pediatric Dentists’ “Find a Pediatric Dentist” tool. Some communities also offer care through federally qualified health centers, dental schools or mobile clinics at low or no cost.

    • As soon as their teeth come in, children need to brush twice a day with fluoridated toothpaste. Use a smear of toothpaste about the size of a grain of rice for children under age 3, and a pea-sized amount for ages 3–5.

    • Make brushing a fun and supported routine. Help your child brush until they can do it well on their own, usually around age 7 or 8. Play a favorite song or video to make brushing time enjoyable.

    • Limit sugary snacks and drinks. Offer water and healthy snacks like fruits and vegetables. Avoid letting infants fall asleep with bottles of milk or juice, and limit sticky, sugary foods like candy, chips and cookies.

    • Ask your dentist about sealants and fluoride varnish. These treatments are especially important for children at higher risk for cavities, such as those with limited access to dental care, a family history of tooth decay, visible plaque or the habit of frequent snacking.

    Olga Ensz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. 1 in 3 Florida third graders have untreated cavities – how parents can protect their children’s teeth – https://theconversation.com/1-in-3-florida-third-graders-have-untreated-cavities-how-parents-can-protect-their-childrens-teeth-257200

    MIL OSI

  • MIL-OSI Submissions: Difficult work arrangements force many women to stop breastfeeding early. Here’s how to prevent this

    Source: The Conversation – Indonesia – By Andini Pramono, Research officer, Department of Health Economics, Wellbeing and Society, National Centre for Epidemiology and Population Health, Australian National University

    Research shows that six months of exclusive breastfeeding, and continuing until two years old or beyond, provide multiple benefits for the baby and mother.

    It can prevent deaths both in infants and mothers – including in wealthy nations like the United States. It also benefits the global economy and the environment.

    However, after maternity leave ends, mothers returning to paid work face many challenges maintaining breastfeeding. This often leads mothers to stop breastfeeding their children before six months – the duration of exclusive breastfeeding recommended by the World Health Organisation (WHO) and others.

    According to the WHO, less than half of babies under six months old worldwide are exclusively breastfed.

    In Indonesia, research shows 83% of mothers initiate breastfeeding, but only 57% are still breastfeeding at around six months. In Australia, 96% of mothers start breastfeeding, but then there is a rapid fall to only 39% by around three months and only 15% by around five months.

    Among the key reasons for low rates of exclusive breastfeeding are the difficult work conditions women face when they return to paid work.

    So how can governments and workplaces – especially in countries that have yet to do enough, like Indonesia and Australia – better support breastfeeding mothers, particularly at work?

    Half a billion reasons to change

    For more than a century, the International Labour Organization (ILO) has set global standards for maternity protection through the Maternity Protection Convention and accompanying recommendations, as well as the ILO Workers with Family Responsibilities Convention, aiming to protect female workers’ rights.

    So far, only 66 member states have ratified at least one of the Maternity Protection Conventions, while 43 have ratified the Workers with Family Responsibilities Convention. Unfortunately, Indonesia has not ratified either convention. So far, Australia has only ratified the family responsibilities convention.

    In some countries, protections are aligned with the ILO Conventions. For example, in Denmark and Norway, the governments offer maternity leave of at least 14 weeks. During leave, mothers’ earnings are protected at a rate of at least two-thirds of their pre-birth earnings. Public funds ensure this is done in a manner determined by national law and practice, so the employer is not solely responsible for the payment.

    A Canadian study highlights the proportion of mothers exclusively breastfeeding to six months increased by almost 40% when paid maternity leave was expanded from six to 12 months. At the same time, average breastfeeding duration increased by one month, from five to six months.

    Evidence shows paid maternity leave and providing an adequate lactation room at work both contribute positively to breastfeeding rates.

    Despite this, half a billion women globally still lack adequate maternity protections.

    For example, welfare reforms in the US encouraging new mothers’ return to work within 12 weeks led to a 16–18% reduction in breastfeeding initiation. They also produced a four- to six-week reduction in the time babies were breastfed.

    Indonesia and Australia aren’t doing enough

    Neither Indonesia nor Australia is currently doing enough to meet the ILO’s maternity protection standards.

    In Indonesia, the 2003 Labour Law urges companies to give 12 weeks of paid maternity leave for women workers to support breastfeeding. Furthermore, the 2012 regulation on exclusive breastfeeding obligates workplace and public space management to provide a space or facility to breastfeed and express breast milk. However, the monitoring of its implementation is weak.

    In Australia, paid parental leave (PPL) policy supports parents who take time off from paid work to care for their young children.

    Eligible working mothers or primary carers are entitled to up to 20 weeks (or 22 weeks if the child is born or adopted from 1 July 2024) of government paid parental leave within the first two years of the birth or adoption of a child.

    In the Federal Budget announced on 15 May 2024, the Australian government added payment of superannuation contributions to the parental leave package for births and adoptions on or after 1 July 2025. However, the PPL is a low amount, paid at the national minimum wage ($882.80 per week).

    Some mothers can combine the government payment with additional paid leave from their employer. However, in 2022-2023 only 63% of Australian employers offered this, leaving many new mothers with only the minimum financial support.

    Unlike Indonesia, Australia has no legal requirement for employers to offer paid breastfeeding breaks in the workplace, which would allow mothers to express breastmilk and take it home. This gap can harm women’s and children’s health.

    While Australia’s support for breastfeeding mothers is welcome, it’s still inadequate to meet the ILO’s international standard – particularly Australia’s low payment rate of government PPL (at the minimum wage, rather than two-thirds of previous earnings) and the lack of legislation for paid breastfeeding breaks.

    How employers and colleagues can help

    Globally, the barriers to maintaining breastfeeding include not only inadequate maternity leave duration and pay, but also the unavailability of breastfeeding and breast-pumping facilities at workplaces, sometimes unsupportive colleagues and supervisors, and a lack of time at work to breastfeed or express breastmilk.

    Breastfeeding a baby should not preclude women from earning a living. In 2022, female workers were 39.5% of total workers globally, while in Australia and Indonesia they made up 47.4% and 39.5% respectively.

    An accessible facility or space for breastfeeding or breast pumping is vital to support breastfeeding working mothers.

    In Indonesia, a 2013 Ministry of Health regulation outlines the procedure for an employer to provide a space and facility for mothers to breastfeed and breast pump.

    The minimum specifications of this facility are described as a lockable, clean and quiet room, with a sink for washing, suitable temperature, lighting and flooring. While these specifications are technically mandatory, monitoring is weak, meaning if employers fail to meet the requirements there are no specific consequences.

    But a breastfeeding space alone is not enough. In many jobs, mothers cannot leave their tasks during working hours, even if there is a lactation room.

    Supportive employers need to regulate time and flexibility to breastfeed and express breastmilk, including providing flexible working arrangements and paid breastfeeding breaks during working hours. Supportive attitudes from co-workers and managers are also important.

    Suitable staff training on breastfeeding, and policies supporting mothers – such as providing time and facilities to express breastmilk in work hours – are crucial. Training on how to support a co-worker can cover anything from basic information about breastfeeding to what to say (or not say) to a breastfeeding colleague.

    Access to supportive childcare is another issue globally.

    For those families who can access childcare, childcare centres can also help by:

    • encouraging and accommodating mothers to visit for breastfeeding
    • having written policies supporting breastfeeding
    • providing parents with resources on breastfeeding
    • and referring parents to community resources for breastfeeding support.

    Practical ways to support more families

    The Australian Breastfeeding Association has an accreditation program that helps workplaces to be breastfeeding-friendly. Workplace policies, including adequate time and space for pumping, are positively associated with longer breastfeeding duration.

    The program assesses workplaces on three aspects: time, space and supportive culture. That is, workplaces are encouraged to provide dedicated space and time for breastfeeding and breast pumping, within a supportive culture and flexible working hours.

    Mothers should start planning how to align breastfeeding with work early – during pregnancy. Start by discussing your breastfeeding goals with healthcare professionals and finding a baby-friendly hospital.

    Discuss your breastfeeding plan with your supervisor at work during your pregnancy, including finding out your maternity leave (paid and unpaid) entitlements. Also consider childcare arrangements that will work best for you with breastfeeding.

    For further information and support, you can find resources from local breastfeeding support groups, such as the Indonesian Breastfeeding Mothers Association and Australian Breastfeeding Association.

    Julie P. Smith is a qualified breastfeeding counselor and honorary member of the Australian Breastfeeding Association.

    Andini Pramono and Liana Leach do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no affiliations beyond those mentioned above.

    ref. Difficult work arrangements force many women to stop breastfeeding early. Here’s how to prevent this – https://theconversation.com/difficult-work-arrangements-force-many-women-to-stop-breastfeeding-early-heres-how-to-prevent-this-211831

    MIL OSI

  • MIL-OSI Submissions: We discovered Raja Ampat’s reef manta rays prefer staying close to home – which could help us save more of them

    Source: The Conversation – Indonesia – By Edy Setyawan, Marine Ecologist, University of Auckland, Waipapa Taumata Rau

    The reef manta ray (Mobula alfredi) is a strong swimmer, able to travel hundreds of kilometres to feed. The longest recorded movement for an individual reef manta ray was 1,150km, observed in eastern Australia.

    But even though they are able to swim long distances, our study on reef manta rays in Raja Ampat, Southwest Papua, discovered they are more likely to swim short distances. They appear to prefer staying close to their local habitats, strengthening their social bonds and forming distinct populations.

    Our research – involving researchers from Indonesia, New Zealand and Australia and published in the Royal Society Open Science journal in April – increases our understanding of this globally vulnerable species.

    Policymakers can use our findings to enhance conservation efforts for the species in Raja Ampat waters, which currently are facing challenges due to fishing and tourism.

    Why don’t reef manta rays roam far?

    Our study found reef manta rays occupy three distinct habitats within Raja Ampat. As of February 2024, we recorded 1,250 individual manta rays around Waigeo Island’s extensive coral reef ecosystem in the northwest of Raja Ampat; 640 manta rays around the coral reef ecosystem in the southeast of Misool, southern Raja Ampat; and no more than 50 manta rays in the Ayau atoll ecosystem up north.

    Within their own habitat, the manta rays tend to move around from one area to another, covering relatively short distances of up to 12 kilometres. They only occasionally make longer trips to similar areas in other habitats across Raja Ampat.

    We believe there are a few reasons why reef manta rays in Raja Ampat do not often venture far. The first reason is the presence of natural barriers, such as deep waters – over 1,000 metres below sea level – between Ayau Atoll and Waigeo Island, as well as the sea between Misool and Kofiau, which is 800-900 metres deep.

    Travelling through deep waters poses increased risks to reef manta rays due to potential encounters with natural predators, such as killer whales (Orcinus orca) and large sharks, which frequently inhabit deep open water.

    The second reason is that each habitat is well-equipped with sufficient resources, such as food and cleaning stations, reducing the need for the reef manta rays to travel extensively.

    Our previous research has identified dozens of feeding areas and cleaning stations in each habitat occupied by local populations of reef manta rays in Raja Ampat.

    Raja Ampat’s ‘small town’ of reef manta rays

    These movement habits are gradually shaping a distinctive population structure among Raja Ampat’s reef manta rays.

    We have found that they do not form a single large population, but instead split into three local populations, creating a metapopulation. A metapopulation consists of several local populations of the same species, each occupying its own habitat but all situated within the same geographic region.

    Think of a metapopulation as a small town, consisting of three hamlets. When each hamlet has enough food and water, the people prefer to stay in their own settlement. But they still live in the same town and occasionally visit each other.

    We found this movement pattern based on our tracking process from 2016 to 2021 using acoustic telemetry, which functions similarly to office check-in systems.

    In the tracking process, we combined this acoustic tracking with network analysis to map out the movement network of the manta rays, consisting of nodes and links. Nodes represent important areas for the manta rays, like cleaning stations and feeding areas, and links represent the movement between these key areas.
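
    To make the method concrete, below is a minimal sketch of how detections from acoustic receivers can be turned into such a node-and-link movement network, using the pandas and networkx libraries. The detection log, site names and weighting scheme are hypothetical illustrations, not the study’s actual data or pipeline.

    import networkx as nx
    import pandas as pd

    # Hypothetical detection log: one row each time a tagged manta is
    # recorded by an acoustic receiver at a named site.
    detections = pd.DataFrame({
        "manta_id": ["M1", "M1", "M1", "M2", "M2"],
        "site": ["cleaning_A", "feeding_B", "cleaning_A",
                 "feeding_C", "cleaning_A"],
        "time": pd.to_datetime([
            "2016-03-01 08:00", "2016-03-01 14:00", "2016-03-02 09:00",
            "2016-03-05 10:00", "2016-03-06 11:00",
        ]),
    })

    # Nodes are key areas (cleaning stations, feeding areas); a directed,
    # weighted link counts how often mantas moved between two areas.
    G = nx.DiGraph()
    for _, track in detections.sort_values("time").groupby("manta_id"):
        sites = track["site"].tolist()
        for a, b in zip(sites, sites[1:]):
            if a != b:
                weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
                G.add_edge(a, b, weight=weight + 1)

    # Degree centrality flags the areas that anchor each local population.
    print(nx.degree_centrality(G))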

    The metapopulation occurs because individual manta rays migrate between local populations. Based on our observations, migrating manta rays usually head back to their original area – often seasonally – while those that disperse generally do not return.

    This movement pattern means there is less mixing of individuals between local populations compared to within a single local population.

    How to better protect reef manta rays

    Some conservation policies and efforts have successfully increased the populations of reef manta rays in Raja Ampat.

    But increased human activities such as fishing and tourism in eastern Indonesia still pose challenges. While manta rays are not directly caught or hunted, they often get entangled in fishing lines and nets, which may cause harm and sometimes death.

    Additionally, with the increasing popularity of Raja Ampat as a top tourism destination, overcrowding and aggressive behavior by divers and snorkelers in Raja Ampat disrupt manta ray cleaning and feeding, which may affect their health and fitness.

    Conservation strategies for reef manta rays require a more precise and targeted approach to effectively address these growing challenges.

    The recognition of these rays as a metapopulation comprising three distinct local populations can inform a strategy shift in conservation management.

    Recently, we have presented our research findings and recommendations to the authorities responsible for managing the Raja Ampat Marine Protected Area (MPA) network.

    We recommend the MPA management authority in Raja Ampat create and implement three separate management units, each tailored to the specific needs of one of the local manta ray populations.

    Separate units are necessary because each habitat has different demographics and is far apart, making it difficult to manage them as a single unit. This strategy is feasible because local rangers in each habitat already conduct regular patrols and monitoring.

    We also see the urgent need to protect a critical area for various activities of reef manta rays in Raja Ampat called Eagle Rock, which is currently outside existing protected zones. Located west of Waigeo, Eagle Rock could be effectively safeguarded by expanding the Raja Ampat MPA network to encompass this area.

    Protecting Eagle Rock is crucial, not only because it serves as a vital migration corridor connecting significant areas and habitats within the South East Misool MPA, Dampier Strait MPA, Raja Ampat MPA, and West Waigeo MPA, but also due to the increased threat from nickel mining activities on Kawe Island.

    MPAs prohibit industrial fishing and restrict tourism and other unsustainable activities – including mining – to minimise environmental impact.

    Besides mapping out the movement patterns and networks of key areas and habitats of reef manta rays in Raja Ampat, our research lays the groundwork for future studies, including genetic analysis and satellite tracking.

    These advanced techniques can offer deeper insights into the population structure, home range, and distribution of reef manta rays in the region, helping to enhance management and conservation strategies.

    Edy Setyawan has received funding from the Manaaki New Zealand Scholarship – Ministry of Foreign Affairs and Trade (MFAT) New Zealand, and the WWF Russell E. Train Education for Nature Program (EFN), United States.

    ref. We discovered Raja Ampat’s reef manta rays prefer staying close to home – which could help us save more of them – https://theconversation.com/we-discovered-raja-ampats-reef-manta-rays-prefer-staying-close-to-home-which-could-help-us-save-more-of-them-230692

    MIL OSI

  • MIL-OSI Submissions: If we don’t teach youth about sexual assault and consent, popular media will

    Source: The Conversation – Canada – By Shannon D. M. Moore, Assistant professor of social studies education, Department of Curriculum Teaching and Learning, Faculty of Education, University of Manitoba

    The sexual assault trial of five former World Juniors hockey players has spotlighted issues around sexual assault and consent.

    Sexual assault, intimate partner violence and other forms of gender-based violence aren’t inevitable. Kindergarten to Grade 12 public schools have an ethical obligation to enact sexuality education that is responsive to current contexts, respects human diversity, empowers young people and is rooted in human rights.

    We argue for harnessing popular media to advance sexuality education. Children and youth learn a great deal about gender, relationships, sexuality and consent from popular media.

    Although there is strong theoretical rationale for using popular media to confront sexual assault, many teachers identify and experience barriers to putting this into practice in their classrooms.

    Let’s (not) talk about sex?

    Many factors shape the reality that comprehensive sex education remains wholly absent or inadequate in schools.

    Talking about sex in society and in schools is often taboo. Discussions of healthy relationships and consent are often highly controlled, minimized or relegated to a sexual education curriculum that is not universally taught. This is due to parental opt-outs/ins in many provinces.

    Some opponents of sexual education curriculum say parents should have full authority over the subject. Others exploit misunderstandings of age appropriateness and the presumed innocence of children and youth. Among the public at large, there is a lack of knowledge (or belief) about the high rates of sexual assault and other forms of gender-based violence experienced by youth within and beyond schools.

    Not surprisingly, neglecting comprehensive sexuality education has many adverse consequences. Students learn that eliminating sexual violence is not a societal priority. Those who have experienced assault and other forms of violence learn that they are not important, as their stories are often silenced, ignored or distrusted.

    As a result, rape culture and gender-based violence remains unchallenged in schools, while it is normalized, legitimized and endorsed in popular media.




    Read more:
    ‘Adolescence’ on Netflix: A painful wake-up call about unregulated internet use for teens


    Meet your child’s other teacher

    In the absence or confines of comprehensive sex education in schools, youth identify popular media as their main source of information about sex and relationships.

    As professor of criminal justice, Nickie D. Phillips, writes, popular media is one of the “primary sites through which rape culture [is] understood, negotiated and contested.”

    What youth watch, play, listen to or create on social media has a significant role in teaching dominant understandings that normalize sexual violence, misogyny and the patriarchy.

    Critical media scholars Michael Hoechsmann and Stuart Poyntz emphasize that popular media “plays a central role in the socialization, acculturation and intellectual formation of young people. It is a … force to be reckoned with, and we ignore it at our peril.”

    As teacher educators and educational researchers, we have found that the teachers we work with across grades and subject areas recognize how popular media is always and already present in classrooms, and many embrace the opportunities it affords for necessary conversations that are relevant to students.

    Challenges with using popular media

    The teacher participants in our study revealed that classroom culture wars have had a chilling impact on their practice, making them feel more wary about tackling particular topics.

    We found that despite research-informed rationale for using popular media to ground sexuality education, teachers encounter several barriers and complications in doing so.

    Teachers’ discomfort was exacerbated when school leaders did not support their efforts to advance these lessons, even though they were anchored to the provincial curriculum. Teacher participants also spoke of a lack of professional development or preparation to talk about healthy relationships and consent in teacher education contexts.

    Finally, they also raised concerns about teaching with and through violent, sexually suggestive or explicit popular media in classrooms. This is the case even though young people are learning about sex through limitless access to digital pornography and R-rated popular media outside of classrooms.

    Influencing healthy relationships

    There is limited research about how popular media content could be used to teach about sexual violence prevention. Through our ongoing research, we have identified several starting points for using popular media content to ground conversations about healthy relationships, boundaries and consent.

    1. Start with media constructions of gender: As popular media contributes to societal expectations of gender, students should begin by interrogating how masculinities and femininities are constructed and mobilized in popular media.

    This can include examining how male, female and non-binary characters are constructed and presented to audiences, their position within the broader storyline and their level of dialogue and how varied intersections of identity impact these depictions.

    Discussions of gender-based violence must begin with intersectional discussions of gender, as these constructions contribute to the issue (for example, the hypersexualization and subordination of females, the exoticization and dehumanization of racialized women, or the portrayal of males as powerful, aggressive and preoccupied with sex).

    2. Begin with unfamiliar content: Students can initially become defensive when asked to critically engage with media content that deeply connects with their identity and gives them a sense of joy.

    While the goal is to move to the interrogation of students’ own media diets, student participation is more readily generated when educators begin analytical and critical discussions about media with unfamiliar, or at least not cherished, material (like popular songs, videos or social media).

    This means students learn how to analyze content before connecting this analysis with themes related to gender-based violence, like: how popular media normalizes sexual violence against women and promotes unhealthy representations of romance and relationships; how popular media contributes to victim blaming or siding with perpetrators and promotes “himpathy” for males who commit sexual assault.

    3. Offer a feminist lens: As teacher educators, we recognize that there is no single method or approach that tends to every aspect of sexual assault and other forms of gender-based violence. Yet, we also know that educators seek resources to engage more meaningfully with students.

    Cards to foster conversation

    We constructed a deck of educational playing cards that educators can use to foster conversations about media portrayals of gender, healthy relationships and consent (or lack thereof).

    These cards employ a feminist lens, based on Sara Ahmed’s Living a Feminist Life. We advocate for teachers to have time in professional learning spaces to try out the cards with other educators before they facilitate complex conversations related to gender-based violence with students.

    If as a society we want to see fewer instances of gender-based violence, teachers need provincial curriculum documents that align with the research on comprehensive sex education. They also need school leaders who will support their work and model consent in the broader school culture, and more professional development and preparation in teacher education.

    Shannon D. M. Moore receives funding from the Social Sciences and Humanities Research Council.

    Jennifer Watt receives funding from the Social Sciences and Humanities Research Council.

    ref. If we don’t teach youth about sexual assault and consent, popular media will – https://theconversation.com/if-we-dont-teach-youth-about-sexual-assault-and-consent-popular-media-will-256741

    MIL OSI

  • MIL-OSI Submissions: Will the fragile ceasefire between Iran and Israel hold? One factor could be crucial to it sticking

    Source: The Conversation – Global Perspectives – By Ali Mamouri, Research Fellow, Middle East Studies, Deakin University

    Amir Levy/Getty Images

    After 12 days of war, US President Donald Trump announced a ceasefire between Israel and Iran that would bring to an end the most dramatic, direct conflict between the two nations in decades.

    Israel and Iran both agreed to adhere to the ceasefire, though they said they would respond with force to any breach.

    If the ceasefire holds – a big if – the key question will be whether this signals the start of lasting peace, or merely a brief pause before renewed conflict.

    As contemporary war studies show, peace tends to endure under one of two conditions: either the total defeat of one side, or the establishment of mutual deterrence. This means both parties refrain from aggression because the expected costs of retaliation far outweigh any potential gains.

    What did each side gain?

    The war has marked a turning point for Israel in its decades-long confrontation with Iran. For the first time, Israel successfully brought a prolonged battle to Iranian soil, shifting the conflict from confrontations with Iranian-backed proxy militant groups to direct strikes on Iran itself.

    This was made possible largely due to Israel’s success over the past two years in weakening Iran’s regional proxy network, particularly Hezbollah in Lebanon and Shiite militias in Syria.

    Over the past two weeks, Israel has inflicted significant damage on Iran’s military and scientific elite, killing several high-ranking commanders and nuclear scientists. The civilian toll was also high.

    Additionally, Israel achieved a major strategic objective by pulling the United States directly into the conflict. In coordination with Israel, the US launched strikes on three of Iran’s primary nuclear facilities: Fordow, Natanz and Isfahan.

    Despite these gains, Israel has not accomplished all of its stated goals. Prime Minister Benjamin Netanyahu had voiced support for regime change, urging Iranians to rise up against Supreme Leader Ali Khamenei’s government, but the senior leadership in Iran remains intact.

    Additionally, Israel has not fully eliminated Iran’s missile program. (Iran continued striking until the last minute before the ceasefire.) And Tehran did not acquiesce to Trump’s pre-war demand to end uranium enrichment.

    Although Iran was caught off-guard by Israel’s attacks — particularly as it was engaged in nuclear negotiations with the US — it responded by launching hundreds of missiles towards Israel.

    While many were intercepted, a significant number penetrated Israeli air defences, causing widespread destruction in major cities, dozens of fatalities and hundreds of injuries.

    Iran has demonstrated its capacity to strike back, though Israel has succeeded in destroying many of its air defence systems, some ballistic missile assets (including missile launchers) and multiple energy facilities.

    Since the beginning of the assault, Iranian officials have repeatedly called for a halt to hostilities so that negotiations can resume. Under such intense pressure, Iran has realised it would not benefit from a prolonged war of attrition with Israel — especially as both nations face mounting costs and the risk of depleting their military stockpiles if the war continues.

    As theories of victory suggest, success in war is defined not only by the damage inflicted, but by achieving core strategic goals and weakening the enemy’s will and capacity to resist.

    While Israel claims to have achieved the bulk of its objectives, the extent of the damage to Iran’s nuclear program is not fully known, nor is its capacity to continue enriching uranium.

    Both sides could remain locked in a volatile standoff over Iran’s nuclear program, with the conflict potentially reigniting whenever either side perceives a strategic opportunity.

    Sticking point over Iran’s nuclear program

    Iran faces even greater challenges when it emerges from the war. With a heavy toll on its leadership and nuclear infrastructure, Tehran will likely prioritise rebuilding its deterrence capability.

    That includes acquiring new advanced air defence systems — potentially from China — and restoring key components of its missile and nuclear programs. (Some experts say Iran has not used some of its most powerful missiles to maintain this deterrence.)

    Iranian officials have claimed they safeguarded more than 400 kilograms of 60% enriched uranium before the attacks. This stockpile could theoretically be converted into nine to ten nuclear warheads if further enriched to 90%.

    Trump declared Iran’s nuclear capacity had been “totally obliterated”, whereas Rafael Grossi, the United Nations’ nuclear watchdog chief, said damage to Iran’s facilities was “very significant”.

    However, analysts have argued Iran will still have a depth of technical knowledge accumulated over decades. Depending on the extent of the damage to its underground facilities, Iran could be capable of restoring and even accelerating its program in a relatively short time frame.

    And the chances of reviving negotiations on Iran’s nuclear program appear slimmer than ever.

    What might future deterrence look like?

    The war has fundamentally reshaped how both Iran and Israel perceive deterrence — and how they plan to secure it going forward.

    For Iran, the conflict has reinforced the belief that its survival is at stake. With regime change openly discussed during the war, Iran’s leaders appear more convinced than ever that true deterrence requires two key pillars: nuclear weapons capability, and deeper strategic alignment with China and Russia.

    As a result, Iran is expected to move rapidly to restore and advance its nuclear program, potentially moving towards actual weaponisation — a step it had long avoided, officially.

    At the same time, Tehran is likely to accelerate military and economic cooperation with Beijing and Moscow to hedge against isolation. Iranian Foreign Minister Abbas Araghchi emphasised this close engagement with Russia during a visit to Moscow this week, particularly on nuclear matters.

    Israel, meanwhile, sees deterrence as requiring constant vigilance and a credible threat of overwhelming retaliation. In the absence of diplomatic breakthroughs, Israel may adopt a policy of immediate preemptive strikes on Iranian facilities or leadership figures if it detects any new escalation — particularly related to Iran’s nuclear program.

    In this context, the current ceasefire already appears fragile. Without comprehensive negotiations that address the core issues — namely, Iran’s nuclear capabilities — the pause in hostilities may prove temporary.

    Mutual deterrence may prevent a more protracted war for now, but the balance remains precarious and could collapse with little warning.

    Ali Mamouri does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Will the fragile ceasefire between Iran and Israel hold? One factor could be crucial to it sticking – https://theconversation.com/will-the-fragile-ceasefire-between-iran-and-israel-hold-one-factor-could-be-crucial-to-it-sticking-259669

    MIL OSI

  • MIL-OSI Submissions: AI applications are producing cleaner cities, smarter homes and more efficient transit

    Source: The Conversation – Canada – By Mohammadamin Ahmadfard, Postdoctoral Fellow, Mechanical & Industrial Engineering, Toronto Metropolitan University

    Artificial intelligence (AI) is quietly transforming how cities generate, store and distribute energy, acting as the invisible conductor that orchestrates cleaner, smarter and more resilient cities.

    By integrating renewables — from solar panels and wind turbines to geothermal grids, hydrogen plants, electric vehicles and batteries — AI can enable cities to manage diverse energy sources as a single, intelligent system.

    One striking example is the Oya Hybrid Power Station in South Africa. Here, AI-driven controls seamlessly co-ordinate solar, wind and battery storage to deliver reliable power to up to 320,000 households. Using AI makes this kind of integration not only possible, but dramatically more efficient.

    Recent research shows AI can also optimize how batteries, solar and the grid interact in buildings. A 2023 study found that deep learning and real-time data helped a boarding school in Turin, Italy, increase low-cost energy purchases and cut its electricity bill by more than half.

    Cleaner, smarter energy grids

    AI models are increasingly able to predict weather with greater precision. These predictions allow electric grid operators to plan hours ahead, storing excess energy in batteries or adjusting supply to meet demand before a storm or heatwave hits.

    Using AI to respond strategically to weather is a game-changer. In Cambridge, England, a system called Aardvark uses satellite and sensor data to generate rapid, accurate forecasts of sun and wind patterns.

    Unlike traditional supercomputer-driven weather models, Aardvark’s AI can deliver precise local forecasts in minutes on an ordinary computer. This makes advanced weather prediction more accessible and affordable for cities, utilities and even smaller organizations — potentially transforming how communities everywhere plan for and respond to changing weather.

    AI for smarter district heating and cooling

    In Munich, Germany, AI is improving geothermal district heating by using underground sensors to monitor temperature and moisture levels in the ground.

    The collected data feeds into a digital simulation model that helps optimize network operations. In more advanced versions, during winter cold snaps, such systems can suggest lowering flow to underused spaces like half-empty offices and boosting heat where demand is higher, such as in crowded apartments.

    This intelligent, self-optimizing approach extends the life of equipment and delivers more warmth with the same energy input.

    This is a breakthrough with enormous potential for cities in cold climates with established geothermal networks, such as Winnipeg in Canada and Iceland’s Reykjavik.

    Although these cities have not yet adopted AI-driven monitoring systems, they could benefit from AI’s real-time improvements in efficiency, comfort and energy savings during harsh winters — a principle that holds true wherever geothermal district heating and cooling exists.

    Smart buildings

    Inside the home, AI-managed smart climate systems can factor in how many people are in each room, which appliances are in use, how much natural sunlight each space receives and how much electricity or heat a home’s solar panels generate throughout the day.

    Based on this, AI determines how to heat or cool rooms efficiently, and can transfer energy from one space to another, balancing comfort with minimal energy use.
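
    As a rough illustration of that decision logic, the Python sketch below adjusts a heating setpoint using occupancy and solar gain. The function, thresholds and offsets are invented for illustration; they are not any vendor’s actual control algorithm.

        # Illustrative occupancy- and sunlight-aware heating setpoint.
        # All thresholds and offsets are assumed values, not a real product's logic.
        def heating_setpoint(occupants: int, solar_gain_kw: float, base_c: float = 21.0) -> float:
            if occupants == 0:
                return base_c - 4.0                        # set back empty rooms to save energy
            solar_credit = min(solar_gain_kw * 0.5, 1.5)   # free solar gain means less heating
            return base_c - solar_credit

        rooms = {"office": (0, 0.2), "kitchen": (3, 1.8), "bedroom": (1, 0.0)}
        for name, (people, sun_kw) in rooms.items():
            print(f"{name}: target {heating_setpoint(people, sun_kw):.1f} C")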

    Coastal cities and those in wind-heavy regions are using AI in other creative ways. In Orkney, Scotland, excess wind and tidal energy are converted into green hydrogen. Instead of letting that surplus power go to waste, an AI system called HyAI controls when to generate hydrogen based on wind forecasts, electricity prices and how full the hydrogen storage tanks are.

    When winds are strong at night and electricity is cheap, the AI can divert surplus power to produce hydrogen and store it for later use. On calmer days, that stored hydrogen can power fuel cells or buses.
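
    The scheduling logic can be pictured as a simple rule applied to each forecast hour, as in the Python sketch below. HyAI’s actual model is not public; the wind, price and tank-level thresholds here are assumptions made up for illustration.

        # Toy electrolyser dispatch rule in the spirit of wind-to-hydrogen systems.
        # Thresholds are assumptions, not HyAI's real parameters.
        def make_hydrogen(wind_mw: float, price_per_mwh: float, tank_fill: float,
                          wind_min: float = 8.0, price_max: float = 30.0) -> bool:
            """Run the electrolyser when wind is plentiful, power is cheap
            and there is still room in the hydrogen storage tanks."""
            return wind_mw >= wind_min and price_per_mwh <= price_max and tank_fill < 0.95

        # Hour-by-hour forecast: (wind MW, price per MWh, tank fill fraction)
        forecast = [(12.0, 18.0, 0.60), (3.5, 45.0, 0.62), (14.0, 22.0, 0.80)]
        print([make_hydrogen(w, p, f) for w, p, f in forecast])  # [True, False, True]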

    Energy storage

    AI is transforming energy storage into a smart, revenue-generating force. In Finland, a startup called Capalo AI has developed Zeus VPP, an AI-powered virtual power plant that aggregates distributed batteries from homes, businesses and other sites.

    Zeus VPP uses advanced forecasting and AI algorithms to decide when batteries should charge or discharge, factoring in energy prices, local consumption and weather forecasts. This enables battery owners to earn revenue by participating in electricity markets, while also supporting grid stability and making better use of renewable energy.
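
    A stripped-down version of that charge/discharge decision is sketched below: the battery fleet charges in the cheapest forecast hours and discharges in the most expensive ones. The prices, fleet size and efficiency are invented, and a real virtual power plant layers on consumption forecasts, market rules and battery-wear models.

        # Toy price-arbitrage schedule for an aggregated battery fleet
        # (illustrative only; not Capalo AI's algorithm).
        hourly_prices = [42, 35, 28, 22, 20, 25, 38, 55, 70, 62, 48, 40,
                         36, 34, 37, 45, 58, 75, 80, 66, 52, 47, 44, 43]  # assumed EUR/MWh

        window = 4                                    # hours to charge and to discharge
        by_price = sorted(range(24), key=lambda h: hourly_prices[h])
        charge_hours = sorted(by_price[:window])      # cheapest hours
        discharge_hours = sorted(by_price[-window:])  # most expensive hours

        energy_mwh, efficiency = 2.0, 0.90            # assumed fleet energy and round-trip loss
        cost = sum(hourly_prices[h] for h in charge_hours) * energy_mwh / window
        revenue = sum(hourly_prices[h] for h in discharge_hours) * energy_mwh * efficiency / window
        print(f"charge {charge_hours}, discharge {discharge_hours}, margin {revenue - cost:.0f} EUR")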

    Utility companies are also using AI to monitor everything from high-voltage transmission lines to neighbourhood transformers, dramatically increasing reliability.

    AI-powered dynamic line rating adjusts how much electricity a line can carry in real time, boosting capacity by 15 to 30 per cent when conditions allow. This helps utilities maximize the use of existing infrastructure instead of relying on costly upgrades.
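
    A toy steady-state heat balance shows why ratings can rise: a conductor can carry more current when the air is cool and the wind is blowing. The coefficients below are illustrative assumptions, not values from the detailed standards (such as IEEE 738) that real dynamic line rating systems follow.

        import math

        # Toy conductor heat balance: I^2 * R = h(wind) * (T_max - T_ambient).
        def ampacity(t_ambient_c: float, wind_m_s: float,
                     t_conductor_max_c: float = 75.0, r_ohm_per_m: float = 1e-4) -> float:
            h = 10.0 + 4.0 * wind_m_s   # assumed convective cooling, W per metre per deg C
            return math.sqrt(h * (t_conductor_max_c - t_ambient_c) / r_ohm_per_m)

        static = ampacity(40.0, 0.5)    # conservative worst-case assumption: hot, still air
        dynamic = ampacity(35.0, 1.5)   # milder, breezier conditions measured in real time
        print(f"uplift: {100 * (dynamic / static - 1):.0f}%")  # about 23% here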

    At the local level, AI analyzes smart meter data to predict which transformers risk overheating due to rising EV and heat pump use.

    By forecasting these stress points, utilities can proactively upgrade equipment before failures happen — a shift from reactive to predictive maintenance that makes the grid stronger and cities more resilient.
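
    As a minimal sketch of that predictive step, the code below fits a linear trend to a transformer’s daily peak loads and flags it if the projected peak would exceed its nameplate rating within 90 days. The load series and rating are hypothetical; production systems use far richer models and features.

        from statistics import linear_regression

        # Hypothetical daily peak loads (kVA) for one neighbourhood transformer,
        # creeping upward as EVs and heat pumps are added to the street.
        peaks = [88, 88, 89, 89, 90, 90, 91, 91, 92, 92]
        days = list(range(len(peaks)))

        slope, intercept = linear_regression(days, peaks)
        rating_kva, horizon_days = 100.0, 90
        projected = intercept + slope * (days[-1] + horizon_days)

        if projected > rating_kva:
            print(f"flag for upgrade: projected peak {projected:.0f} kVA > {rating_kva:.0f} kVA")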

    AI-powered public transit and mobility

    Transportation innovation is becoming part of the energy solution, with AI at the centre of this transformation. In New York City, energy company Con Edison has installed major battery storage systems to help manage peak electricity demand and reduce reliance on polluting peaker plants, which supply energy only during high-demand periods.

    More broadly, Con Edison is deploying advanced AI-powered analytics software across its electric grid — optimizing voltage, enhancing reliability and enabling predictive maintenance. Together, these efforts show how combining energy storage and AI-driven analytics can make even the world’s busiest cities more resilient and efficient.

    AI is also powering “vehicle-to-grid” innovations in California, where an AI-driven platform manages electric school buses that can supply stored energy back to the grid during periods of high demand.

    By carefully managing when buses charge and discharge, these systems help keep the grid reliable and ensure vehicles are ready for their daily routes. As this technology expands, parked electric vehicles could serve as valuable backup resources for the electricity system.

    AI for clean energy initiatives

    AI is rapidly transforming cities by revolutionizing how energy is used and managed. Google, for example, has slashed cooling energy at its data centres by up to 40 per cent using AI that fine-tunes fans, pumps and windows more efficiently than any human operator.

    Organizations like the Electric Power Research Institute (EPRI), in collaboration with NVIDIA, Microsoft and others, have launched the Open Power AI Consortium, which is creating open-source AI tools for utilities worldwide.

    These tools will enable even the most resource-constrained cities to deploy advanced AI capabilities, without having to start from scratch, helping to level the playing field and accelerate the global energy transition.

    The result is not just cleaner air and lower energy bills, but a path to fewer blackouts and more resilient homes.

    Mohammadamin Ahmadfard receives funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and Mitacs Inc. for his postdoctoral research at Toronto Metropolitan University.

    ref. AI applications are producing cleaner cities, smarter homes and more efficient transit – https://theconversation.com/ai-applications-are-producing-cleaner-cities-smarter-homes-and-more-efficient-transit-256291

    MIL OSI

  • MIL-OSI Submissions: Ibn Battuta, a 14th-century judge and ambassador, travelled further than Marco Polo. The Rihla records his adventures

    Source: The Conversation – Global Perspectives – By Ismail Albayrak, Professor of Islam and Catholic Muslim Relations, Australian Catholic University

    In our guides to the classics, experts explain key literary works.

    Ibn Battuta was born in Tangier, Morocco, on February 24, 1304. A statement in his celebrated travel book, the Rihla (“legal affairs are my ancestral profession”), suggests he came from an intellectually distinguished family.

    According to the Rihla (travelogue), Ibn Battuta embarked on his travels from Tangier at the age of 22 with the intention of performing the Hajj (the sacred pilgrimage to Mecca) in 1325. Although he returned to Fez (his adopted home-town) around the end of 1349, he continued to visit various regions, including Granada and Sudan, in subsequent years.

    Over the course of his almost 30 years of travel, Ibn Battuta covered an astonishing distance of approximately 73,000 miles (117,000 kilometres), visiting a region that today encompasses more than 50 countries. His journeys covered much of the medieval Islamic world and beyond, excluding Northern Europe.

    In 1355, he returned to Morocco for the last time and remained there for the rest of his life. Upon his return he dictated his experiences, observations and anecdotes to the Andalusian scholar Ibn Juzayy, with a compilation of his travels completed in 1355 or 1356.

    The work, formally titled A Gift to Researchers on the Curiosities of Cities and the Marvels of Journeys, is more commonly referred to as Rihlat Ibn Battuta or simply Rihla.

    A painting of Ibn Battuta (on right) in Egypt by Leon Benett.
    Wikimedia Commons, CC BY

    More than a travelogue or geographical record, this book provides rich insights into 14th-century social and political life, capturing cultural diversity across nations. Ibn Battuta details local lifestyles, linguistic traits, beliefs, clothing, cuisines, holidays, artistic traditions and gender relations, as well as commercial activities and currencies.

    His observations also include geographical features such as mountains, rivers and agricultural products. Notably, the work highlights his encounters with over 60 sultans and more than 2,000 prominent figures, making it a valuable historical resource.

    The travels

    His travels began after a dream. According to Ibn Battuta, one night, while in Fuwwa, a town near Alexandria in Egypt, he dreamed of flying on a massive bird across various lands, landing in a dark, greenish country.

    To test the local sheikh’s mystical knowledge, he decided that if the sheikh knew of his dream, the man was truly extraordinary. The next morning, after leading the dawn prayer, he watched the sheikh bid farewell to his visitors. Later, the sheikh astonishingly revealed knowledge of Ibn Battuta’s dream and prophesied his pilgrimage through Yemen, Iraq, Turkey and India.

    At the time, the Middle East was under the rule of the Mamluk sultanate, Anatolia was divided among principalities, Mongol khanates controlled Iran and Central Asia, and the Delhi Sultanate ruled much of the Indian subcontinent.

    Ibn Battuta initially travelled through North Africa, Egypt, Palestine and Syria, completing his first Hajj in 1326.

    He then visited Iraq and Iran, returning to Mecca. In 1328, he explored East Africa, reaching Mogadishu, Mombasa, Sudan and Kilwa (modern Tanzania), as well as Yemen, Oman and Anatolia, where he documented cities like Alanya, Konya, Erzurum, Nicaea and Bursa.

    His descriptions are vivid. Describing the city of Dimyat, on the bank of the Nile, he says:

    Many of the houses have steps leading down to the Nile. Banana trees are especially abundant there, and their fruit is carried to Cairo in boats. Its sheep and goats are allowed to pasture at liberty day and night, and for this reason the saying goes of Dimyat, ‘Its wall is a sweetmeat and its dogs are sheep’. No one who enters the city may afterwards leave it except by the governor’s seal […]

    Farmland on the banks of the Nile river today.
    Alice-D/shutterstock

    When it comes to Anatolia (in modern-day Turkey), he declares:

    This country, known as the Land of Rum, is the most beautiful in the world. While Allah Almighty has distributed beauty to other lands separately, He has gathered them all here. The most beautiful and well-dressed people live in this land, and the most delicious food is prepared here […] From the moment we arrived, our neighbors — both men and women — showed great concern for our wellbeing. Here, women do not shy away from men; when we departed, they bid us farewell as if we were family, expressing their sadness through tears.

    A judge and husband

    In 1332, Ibn Battuta met the Byzantine Emperor Andronikos III Palaiologos.
    Wikimedia Commons, CC BY

    Since Ibn Battuta dictated his work, it’s difficult to assess the extent of the scribe’s influence in recording his narratives. Despite being an educated man, he occasionally narrates like a commoner and sometimes exceeds the bounds of polite language. At times, he provides excessive detail, giving the impression he may be quoting from sources beyond his own observations.

    Nevertheless, the Rihla stands out for its engaging style and captivating anecdotes, drawing readers in.

    Ibn Battuta later journeyed through Crimea, Central Asia, Khwarezm (a large oasis region in the territories of present-day Turkmenistan and Uzbekistan), Bukhara (a city in Uzbekistan), and the Hindu Kush Mountains. In 1332, he met Byzantine Emperor Andronikos III Palaiologos and travelled to Istanbul with the caravan of Uzbek Khan’s third wife. He mentions a caravan that even has a market:

    Whenever the caravan halted, food was cooked in great brass cauldrons, called dasts, and supplied from them to the poorer pilgrims and those who had no provisions. […] This caravan contained also animated bazaars and great supplies of luxuries and all kinds of food and fruit. They used to march during the night and light torches in front of the file of camels and litters, so that you saw the countryside gleaming with light and the darkness turned into radiant day.

    Ibn Battuta arrived in Delhi in 1333, where he served as a judge under Sultan Muhammad bin Tughluq for seven years. He married or was married to local women in many of the places he stayed. Among his wives were ordinary people as well as the daughters of the administrative class.

    Miniature painting in Mughal style depicting the court of Muhammad bin Tughluq.
    Wikimedia Commons, CC BY

    The Sultan’s generosity, intelligence and unconventional ruling style both impressed and surprised Ibn Battuta. However, Muhammad bin Tughluq was known for making excessively harsh and abrupt decisions at times, which led Ibn Battuta to approach him with caution. Nevertheless, with the Sultan’s support, he remained in India for a long time and was eventually chosen as an ambassador to China in 1341.

    In 1345 his mission was disrupted when his ship capsized off the coast of Calcutta (then known as Sadqawan) in the Indian Ocean. Though he survived, he lost most of his possessions.

    After the incident, he remained in India for a while before continuing his journey by other means. During this period, he travelled through India, Sri Lanka and the Maldives, where he served as a judge for a year and a half. In 1345, he journeyed to China via Bengal, Burma and Sumatra, reaching the city of Guangzhou but limiting his exploration to the southern coast.

    He was among the first Arab travellers to record Islam’s spread in the Malay Archipelago, noting interactions between Muslims and Hindu-Buddhist communities. Visiting Java and Sumatra, he praised Sultan Malik al-Zahir of Sumatra as a generous, pious and scholarly ruler and highlighted his rare practice of walking to Friday prayers.

    On his return, Ibn Battuta explored regions such as Iran, Iraq, North Africa, Spain and the Kingdom of Mali, documenting the vast Islamic world.

    Back in his homeland, Ibn Battuta served as a judge in several locations. He died around 1368-9 while serving as a judge in Morocco and was buried in his birthplace, Tangier.

    Historic copy of selected parts of the Travel Report by Ibn Battuta, 1836 CE, Cairo.
    Wikimedia Commons, CC BY

    The status of women

    Ibn Battuta’s travels revealed intriguing insights into the status of women across regions. In inner West Africa, he observed matriarchal practices where lineage and inheritance were determined by the mother’s family.

    Among Turks, women rode horses like raiders, traded actively and did not veil their faces.

    In the Maldives, husbands leaving the region had to abandon their wives. He noted that Muslim women there, including the female ruler, did not cover their heads. As a judge there, he attempted to enforce the hijab but failed.

    He offers fascinating insights into food cultures. In Siberia, sled dogs were fed before humans. He described 15-day wedding feasts in India.

    He tried local produce such as mango in the Indian subcontinent, which he compared to an apple, and sun-dried, sliced fish in Oman.

    Religious practices

    Ibn Battuta’s accounts of the Hajj (pilgrimage) rituals he performed six times provide a unique perspective. He references a fatwa by Ibn Taymiyyah, a prominent Islamic scholar and theologian known for his opposition to theological innovations and his critiques of Sufism and philosophy, advising against shortening prayers for those travelling to Medina.

    Ibn Battuta’s accounts, particularly regarding the Iranian region, offer important perspectives into religious sects during a period when Iran started shifting from Sunnism to Shiism. He describes societies with diverse demographics, including Persians, Azeris, Kurds, Arabs and Baluchis. His observations on religious practices are especially significant.

    Inclined toward Sufism, Ibn Battuta often dressed like a dervish during his travels. He offers a compelling view of Islamic mysticism. He considered regions like Damascus as places of abundance and Anatolia as a land of compassion, interpreting them with a spiritual perspective.

    His accounts of Sufi education, dervish lodges, zawiyas (similar to monasteries), and tombs, along with the special invocations of Sufi masters, are important historical records. He also observed and documented unique practices, such as the followers of the Persian Sufi saint Sheikh Qutb al-Din Haydar wearing iron rings on their hands, necks, ears, and even private parts to avoid sexual intercourse.

    While Ibn Battuta primarily visited Muslim lands, he also travelled to non-Muslim territories, offering key understandings into different religious cultures, for instance interactions between Crimean Muslims and Christian Armenians in the Golden Horde region.

    He also documented churches, icons and monasteries, such as the tomb of the Virgin Mary in Jerusalem. His observation of Muslims openly reciting the call to prayer (adhan) in China is significant.

    Other anecdotes include the division of the Umayyad Mosque in Damascus into a mosque and Christian church. Most importantly, his encounters with Hindus and Buddhists in the Indian subcontinent and Malay Islands provide rich historical context.

    Umayyad Mosque, Damascus.
    eyetravelphotos/shutterstock

    His accounts of death rituals reveal diverse practices. In Sinop (a city in Turkey), 40 days of mourning were declared for a ruler’s mother, while in Iran, a funeral resembled a wedding celebration. He observed similarities in cremation practices between India and China and described a chilling custom in some regions where slaves and concubines were buried alive with the deceased.

    Ibn Battuta’s Rihla, widely translated into Eastern and Western languages, has drawn some criticism for containing depictions that sometimes diverge from historical continuity or borrow from other works. Ibn Battuta himself admitted to using earlier travel books as references.

    Despite limited recognition in older sources, the Rihla gained prominence in the West in the 19th century. His legacy remains vibrant today. Morocco declared 1996–1997 the “Year of Ibn Battuta,” and established a museum in Tangier to honour him. In Dubai, a mall is named after him.

    Notably, Ibn Battuta travelled to more destinations than Marco Polo and shared a broader range of humane anecdotes, showcasing the depth and diversity of his experiences.

    Ismail Albayrak does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Ibn Battuta, a 14th-century judge and ambassador, travelled further than Marco Polo. The Rihla records his adventures – https://theconversation.com/ibn-battuta-a-14th-century-judge-and-ambassador-travelled-further-than-marco-polo-the-rihla-records-his-adventures-246148

    MIL OSI

  • MIL-OSI Submissions: 4 reasons to be concerned about Bill C-4’s threats to Canadian privacy and sovereignty

    Source: The Conversation – Canada – By Sara Bannerman, Professor and Canada Research Chair in Communication Policy and Governance, McMaster University

    In Canada, federal political parties are not governed by basic standards of federal privacy law. If passed, Bill C-4, also known as the Making Life More Affordable for Canadians Act, would also make provincial and territorial privacy laws inapplicable to federal political parties, with no adequate federal law in place.

    Federal legislation in the form of the Privacy Act and the Personal Information Protection and Electronic Documents Act sets out privacy standards for government and business, based on the fair information principles that provide for the collection, use and disclosure of Canadians’ personal information.

    At the moment, these laws don’t apply to political parties. Some provinces — especially British Columbia — have implemented laws that do. In May 2024, the B.C. Supreme Court upheld the provincial Information Commissioner’s ruling that B.C.’s privacy legislation applies to federal political parties. That decision is currently under appeal.

    Bill C-4 would undermine those B.C. rights. It would make inapplicable to federal parties the standard privacy rights that apply in other business and government contexts — such as the right to consent to the collection, use and disclosure of personal information, and the right to access and correct personal information held by organizations.

    Why should we be concerned about Bill C-4’s erasure of these privacy protections for Canadians? There are four reasons:

    1. Threats to Canada’s sovereignty

    In light of threats to Canadian sovereignty by United States President Donald Trump, the Canadian government and Canadian politicians must rethink their approach to digital sovereignty.

    Until now, Canadian parties and governments have been content to use American platforms, data companies and datified campaign tactics. Bill C-4 would leave federal parties free to do more of the same. This is the opposite of what’s needed.

    The politics that resulted in Trump being elected twice to the Oval Office were spurred in part by the datified campaigning of Cambridge Analytica in 2016 and Elon Musk in 2024. These politics are driven by micro-targeted and arguably manipulative political campaigns.

    Do Canadians want Canada to go in the same direction?

    Read more: How political party data collection may turn off voters

    Are political parties spying and experimenting on Canadians via personal data collection?
    (Unsplash/Arthur Mazi), FAL

    2. Threats to Canada’s future

    Bill C-4 would undermine one of the mechanisms that makes Canada a society: collective political decisions.

    Datified campaigning and the collection of personal information by political parties change the nature of democracy. Rather than appealing to political values or visions of what voters may want in the future or as a society — critically important at this troubling moment in history — datified campaigning operates by experimenting on unwitting individual citizens who are alone on their phones and computers. It operates by testing their isolated opinions and unvarnished behaviours.

    For example, a political campaign might do what’s known as A/B testing of ads, which explores whether ad A or ad B is more successful by issuing two different versions of an ad to determine which one gets more clicks, shares, petition signatures, donations or other measurable behaviour. With this knowledge, a campaign or party can manipulate the ads through multiple versions to get the desired behaviour and result. They also learn about ad audiences for future targeting.
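
    For readers curious about the mechanics, a bare-bones A/B comparison reduces to a two-proportion test like the Python sketch below. The click counts are invented, and real campaign tooling is considerably more elaborate.

        import math

        # Toy two-proportion z-test for an A/B ad experiment (invented numbers).
        clicks_a, shown_a = 480, 10_000   # version A: 4.8% click rate
        clicks_b, shown_b = 560, 10_000   # version B: 5.6% click rate

        p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
        p_pool = (clicks_a + clicks_b) / (shown_a + shown_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / shown_a + 1 / shown_b))
        z = (p_b - p_a) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided

        print(f"z = {z:.2f}, p = {p_value:.3f}")    # here p is about 0.01: B likely beats A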

    Read more: A/B testing: how offline businesses are learning from Google to improve profits

    In other words, political parties engaging in this tactic aren’t engaging with Canadians — they’re experimenting on them to see what type of messages, or even what colour schemes or visuals, appeal most. This can be used to shape the campaign or simply to determine the style of follow-up messaging to particular users.

    University researchers, to name just one example, are bound by strict ethical protocols and approvals, including the principle that participants should consent to the collection of personal information, and to participation in experiments and studies. Political parties have no such standards, despite the high stakes — the very future of democracy and society.

    Most citizens think of elections as being about deliberation and collectively deciding what kind of society they want to live in and what kind of future they want to have together as they decide how to cast their ballots.

    But with datified campaigning, citizens may not be aware of the political significance of their online actions. Their data trail might cause them to be included, or excluded, from a party’s future campaigning and door-knocking, for example. The process isn’t deliberative, thoughtful or collective.

    3. Secret personal data collection

    Political parties collect highly personal data about Canadians without their knowledge or consent. Most Canadians are not aware of the extent of the collection by political parties and the range of data they collect, which can include political views, ethnicity, income, religion or online activities, social media IDs, observations of door-knockers and more.

    If asked, most Canadians would not consent to the range of data collection by parties.

    4. Data can be dangerous in the wrong hands

    Some governments can and do use data to punish individuals politically and criminally, sometimes without the protection of the rule of law.

    Breaches and misuses of data, cybersecurity experts say, are no longer a question of “if,” but “when.”

    Worse, what would happen if the wall between political parties and politicians or government broke down and the personal information collected by parties became available to governments? What if the data were used for political purposes, such as for vetting people for political appointments or government benefits? What if it were used against civil servants?

    What if it were to be used at the border, or passed to other governments? What if it were passed to and used by authoritarian governments to harass and punish citizens?

    What if it were passed to tech companies, and from there to data brokers?

    OpenMedia recently revealed that Canadians’ data is being passed to the many different data companies political parties use. That data is not necessarily housed in Canada or by Canadian companies.

    If provincial law is undermined, there are few protections against any of these problems.

    Strengthening democracy

    Bill C-4 would erase the possibility of provincial and territorial privacy laws being applied to federal political parties, with virtually nothing remaining. Privacy protection promotes confidence and engagement with democratic processes — particularly online. Erasing privacy protections threatens this confidence and engagement.

    The current approach of federal political parties in terms of datified campaigning and privacy law is entirely wrong for this political moment, dangerous to Canadians and dangerous to democracy. Reforms should instead ensure federal political parties must adhere to the same standards as businesses and all levels of government.

    Data privacy is important everywhere, but particularly so for political parties, campaigns and democratic engagement. It is important at all times — particularly now.

    Sara Bannerman receives funding from the Canada Research Chairs program, the Social Sciences and Humanities Research Council, and McMaster University. She has previously received funding from the Office of the Privacy Commissioner’s Contributions Program and the Digital Ecosystem Research Challenge.

    ref. 4 reasons to be concerned about Bill C-4’s threats to Canadian privacy and sovereignty – https://theconversation.com/4-reasons-to-be-concerned-about-bill-c-4s-threats-to-canadian-privacy-and-sovereignty-259331

    MIL OSI

  • MIL-OSI Submissions: Appeals court ruling grants Donald Trump broad powers to deploy troops to American cities

    Source: The Conversation – Canada – By Jack L. Rozdilsky, Associate Professor of Disaster and Emergency Management, York University, Canada

    Residents of Los Angeles will need to get used to federally controlled National Guard troops operating on their streets. Due to a ruling from an appeals court on June 19, United States President Donald Trump now has broad authority to deploy military forces in American cities.

    This is a troubling development. All presidents have held in their grasp extraordinary powers to deploy military troops domestically. But Trump stands apart with his apparent keen interest in manufacturing false emergencies to exploit extraordinary power.

    An 1878 law called the Posse Comitatus Act restricts using the military for domestic law enforcement. The broader principle being challenged by Trump’s actions in L.A. is the norm of the military not being allowed to interfere in the affairs of civilian governance.

    Injunctions and appeals

    Five months into Trump’s presidency, L.A. has been targeted for aggressive immigration enforcement. In a pluralistic city where dozens of languages and nationalities peacefully co-exist, some Angelenos believe they are witnessing an attack on the city’s most essential social fabric.

    On June 7, Trump acted under United States Code Title 10 provisions to take over command and control of California’s National Guard. Federalized military forces were deployed.

    The objective was to counter what Trump argued was a form of rebellion against the authority of the government of the United States. In fact, these “rebellions” were largely peaceful protests in downtown L.A.

    On June 9, the U.S. District Court for the Northern District of California granted an injunction restraining the president’s use of military force in L.A. The court order supported Gov. Gavin Newsom’s contention that Trump overstepped his authority.

    On June 19, a decision from a panel of judges at the U.S. Court of Appeals for the Ninth Circuit overturned the injunction.

    What this means at the moment is that Trump does not have to return control of the troops to Newsom. California has options to continue litigation by asking the Federal Appeals Court to rehear the matter, or perhaps directly asking the U.S. Supreme Court to intervene.

    Moving toward authoritarianism

    Trump’s June 7 memorandum facilitating his move to overrule Newsom’s authority and seize control of 2,000 National Guard troops was based on the president defining his own so-called emergency.

    He claimed incidents of violence and disorder following aggressive immigration enforcement amounted to a form of rebellion against the U.S.

    As Trump flexes his might, his second term has been called the “911 presidency”. He has invoked extraordinary emergency powers at a pace well beyond his predecessors, pressing the limits to address his administration’s supposed sense of serious perils overtaking the nation.

    Issues arise when the level of actual danger locally is not at all representative of what the president suggests is a full-scale national emergency. For example, demonstrations over immigration raids occupied only a tiny parcel of real estate in L.A.’s huge metropolitan area. A Los Angeles-based rebellion against the U.S. was not occurring.

    As dissent over aggressive immigration enforcement actions grew, localized clashes with law enforcement did occur. Mutual aid surged into Los Angeles, where neighbouring California law enforcement agencies acted to assist one another. The law enforcement challenges never rose to the level of the governor of California requesting additional federal support.

    Shortly after the federal government took over the California National Guard, Newsom said the move was purposefully inflammatory.

    In addition to declaring dubious emergencies to amass power, stoking violence is a characteristic of authoritarian rulers. Creating fear, division and feelings of insecurity can lead to community crises. Trump did not need to wait for a crisis; it seems he simply invented one.

    No guardrails

    The expression “out of kilter” comes to mind as Trump inches closer to invoking the Insurrection Act of 1807. If he does, the situation on the ground will look much like what is already happening in Los Angeles.

    Five years ago, Trump flirted with invoking the Insurrection Act during Black Lives Matter unrest in Washington, D.C., in and around Lafayette Park.

    As recent L.A. protests intensified, Trump stated: “We’re going to have troops everywhere.”

    Currently, there are few guardrails in place to prevent a rogue president from misusing the military in domestic civilian affairs. Trump has been coy about whether he would tap into the greater powers available to him under the Insurrection Act.

    Real emergencies presenting existential threats to America do persist. Nuclear proliferation, climate change and pandemics need serious leaders. Last-resort emergency laws exist to provide options for dealing with genuine existential threats; politically exploiting them as weapons against protesters demonstrating against public policy is absurd.

    Jack L. Rozdilsky receives support for research communication and public scholarship from York University. He also has received research support from the Canadian Institutes of Health Research.

    ref. Appeals court ruling grants Donald Trump broad powers to deploy troops to American cities – https://theconversation.com/appeals-court-ruling-grants-donald-trump-broad-powers-to-deploy-troops-to-american-cities-258894

    MIL OSI

  • MIL-OSI Submissions: How old are you really? Are the latest ‘biological age’ tests all they’re cracked up to be?

    Source: The Conversation – Global Perspectives – By Hassan Vally, Associate Professor, Epidemiology, Deakin University

    We all like to imagine we’re ageing well. Now a simple blood or saliva test promises to tell us by measuring our “biological age”. And then, as many have done, we can share how “young” we really are on social media, along with our secrets to success.

    While chronological age is how long you have been alive, measures of biological age aim to indicate how old your body actually is, purporting to measure “wear and tear” at a molecular level.

    The appeal of these tests is undeniable. Health-conscious consumers may see their results as reinforcing their anti-ageing efforts, or a way to show their journey to better health is paying off.

    But how good are these tests? Do they actually offer useful insights? Or are they just clever marketing dressed up to look like science?

    How do these tests work?

    Over time, the chemical processes that allow our body to function, known as our “metabolic activity”, lead to damage and a decline in the activity of our cells, tissues and organs.

    Biological age tests aim to capture some of these changes, offering a snapshot of how well, or how poorly, we are ageing on a cellular level.

    Our DNA is also affected by the ageing process. In particular, chemical tags (methyl groups) attach to our DNA and affect gene expression. These changes occur in predictable ways with age and environmental exposures, in a process called methylation.

    Research studies have used “epigenetic clocks”, which measure the methylation of our genes, to estimate biological age. By analysing methylation levels at specific sites in the genome from participant samples, researchers apply predictive models to estimate the cumulative wear and tear on the body.
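
    In research code, such a clock often reduces to a weighted sum: each selected site contributes its methylation level (a beta value between 0 and 1) multiplied by a coefficient learned from training data. The Python sketch below shows that structure with made-up site names and coefficients; real clocks such as Horvath’s use hundreds of sites.

        # Minimal sketch of a linear epigenetic clock with invented coefficients.
        COEFFICIENTS = {"cg0001": 18.2, "cg0002": -9.7, "cg0003": 30.5}
        INTERCEPT = 32.0                  # assumed baseline, in years

        def biological_age(methylation: dict[str, float]) -> float:
            """Predict age from per-site methylation beta values (0..1)."""
            return INTERCEPT + sum(w * methylation[site] for site, w in COEFFICIENTS.items())

        sample = {"cg0001": 0.62, "cg0002": 0.35, "cg0003": 0.18}
        print(f"estimated biological age: {biological_age(sample):.1f} years")  # about 45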

    What does the research say about their use?

    Although the science is rapidly evolving, the evidence underpinning the use of epigenetic clocks to measure biological ageing in research studies is strong.

    Studies have shown epigenetic biological age estimation is a better predictor of the risk of death and ageing-related diseases than chronological age.

    Epigenetic clocks also have been found to correlate strongly with lifestyle and environmental exposures, such as smoking status and diet quality.

    In addition, they have been found to be able to predict the risk of conditions such as cardiovascular disease, which can lead to heart attacks and strokes.

    Taken together, a growing body of research indicates that at a population level, epigenetic clocks are robust measures of biological ageing and are strongly linked to the risk of disease and death.

    But how good are these tests for individuals?

    While these tests are valuable when studying populations in research settings, using epigenetic clocks to measure the biological age of individuals is a different matter and requires scrutiny.

    For testing at an individual level, perhaps the most important consideration is the “signal to noise ratio” (or precision) of these tests. This is the question of whether repeated tests on the same sample from an individual yield consistent results.

    A study from 2022 found samples deviated by up to nine years. So an identical sample from a 40-year-old may indicate a biological age of as low as 35 years (a cause for celebration) or as high as 44 years (a cause of anxiety).

    While these tests have improved significantly over the years, their precision still varies widely between commercial providers. So depending on who you send your sample to, your estimated biological age may differ considerably.

    Another limitation is there is currently no standardisation of methods for this testing. Commercial providers perform these tests in different ways and have different algorithms for estimating biological age from the data.

    As you would expect for commercial operators, providers don’t disclose their methods. So it’s difficult to compare companies and determine who provides the most accurate results – and what you’re getting for your money.

    A third limitation is that while epigenetic clocks correlate well with ageing, they are simply a “proxy” and are not a diagnostic tool.

    In other words, they may provide a general indication of ageing at a cellular level. But they don’t offer any specific insights about what the issue may be if someone is found to be “ageing faster” than they would like, or what they’re doing right if they are “ageing well”.

    So regardless of the result of your test, all you’re likely to get from the commercial provider of an epigenetic test is generic advice about what the science says is healthy behaviour.

    Are they worth it? Or what should I do instead?

    While companies offering these tests may have good intentions, remember their ultimate goal is to sell you these tests and make a profit. And at a cost of around A$500, they’re not cheap.

    While the idea of using these tests as a personalised health tool has potential, it is clear that we are not there yet.

    For this to become a reality, tests will need to become more reproducible, standardised across providers, and validated through long-term studies that link changes in biological age to specific behaviours.

    So while one-off tests of biological age make for impressive social media posts, for most people they represent a significant cost and offer limited real value.

    The good news is we already know what we need to do to increase our chances of living longer and healthier lives. These include:

    • improving our diet
    • increasing physical activity
    • getting enough sleep
    • quitting smoking
    • reducing stress
    • prioritising social connection.

    We don’t need to know our biological age in order to implement changes in our lives right now to improve our health.

    Hassan Vally does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How old are you really? Are the latest ‘biological age’ tests all they’re cracked up to be? – https://theconversation.com/how-old-are-you-really-are-the-latest-biological-age-tests-all-theyre-cracked-up-to-be-257710

    MIL OSI

  • MIL-OSI Submissions: Why the traditional college major may be holding students back in a rapidly changing job market

    Source: The Conversation – USA (2) – By John Weigand, Professor Emeritus of Architecture and Interior Design, Miami University

    Rethinking the college major could help colleges better understand what employers and students need. Westend61/Getty Images

    Colleges and universities are struggling to stay afloat.

    The reasons are numerous: declining numbers of college-age students in much of the country, rising tuition at public institutions as state funding shrinks, and a growing skepticism about the value of a college degree.

    Pressure is mounting to cut costs by reducing the time it takes to earn a degree from four years to three.

    Students, parents and legislators increasingly prioritize return on investment and degrees that are more likely to lead to gainful employment. This has boosted enrollment in professional programs while reducing interest in traditional liberal arts and humanities majors, creating a supply-demand imbalance.

    The result has been increasing financial pressure and an unprecedented number of closures and mergers, to date mostly among smaller liberal arts colleges.

    To survive, institutions are scrambling to align curriculum with market demand. And they’re defaulting to the traditional college major to do so.

    The college major, developed and delivered by disciplinary experts within siloed departments, continues to be the primary benchmark for academic quality and institutional performance.

    This structure likely works well for professional majors governed by accreditation or licensure, or more tightly aligned with employment. But in today’s evolving landscape, reliance on the discipline-specific major may not always serve students or institutions well.

    As a professor emeritus and former college administrator and dean, I argue that the college major may no longer keep up with the cross-disciplinary combinations of skills and career-readiness competencies that employers demand, or offer the flexibility students need to best position themselves for the workplace.

    Students want flexibility

    The college curriculum may be less flexible now than ever.
    MoMo Productions/Digital Vision via Getty Images

    I see students arrive on campus each year with different interests, passions and talents – eager to stitch them into meaningful lives and careers.

    A more flexible curriculum is linked to student success, and students now consult AI tools such as ChatGPT to figure out course combinations that best position them for their future. They want flexibility, choice and time to redirect their studies if needed.

    And yet, the moment students arrive on campus – even before they apply – they’re asked to declare a major from a list of predetermined and prescribed choices. The major, coupled with general education and other college requirements, creates an academic track that is anything but flexible.

    Not surprisingly, around 80% of college students switch their majors at least once, suggesting that more flexible degree requirements would allow students to explore and combine diverse areas of interest. And the number of careers, let alone jobs, that college graduates are expected to have will only increase as technological change becomes more disruptive.

    As institutions face mounting pressures to attract students and balance budgets, and the college major remains the principal metric for doing so, the curriculum may be less flexible now than ever.

    How schools are responding

    The college major emerged as a response to an evolving workforce that prioritized specialized knowledge.
    Fuse/Corbis via Getty Images

    In response to market pressures, colleges are adding new high-demand majors at a record pace. Between 2002 and 2022, the number of degree programs nationwide increased by nearly 23,000, or 40%, while enrollment grew only 8%. Some of these majors, such as cybersecurity, fashion business or entertainment design, arguably connect disciplines rather than stand out as distinct. Thus, these new majors siphon enrollment from lower-demand programs within the institution and compete with similar new majors at competitor schools.

    At the same time, traditional arts and humanities majors are adding professional courses to attract students and improve employability. Yet, this adds credit hours to the degree while often duplicating content already available in other departments.

    Importantly, while new programs are added, few are removed. The challenge lies in faculty tenure and governance, along with a traditional understanding that faculty set the curriculum as disciplinary experts. This makes it difficult to close or revise low-demand majors and shift resources to growth areas.

    The result is a proliferation of under-enrolled programs, canceled courses and stretched resources – leading to reduced program quality and declining faculty morale.

    Ironically, under the pressure of declining demand, there can be perverse incentives to grow credit hours required in a major or in general education requirements as a way of garnering more resources or adding courses aligned with faculty interests. All of which continues to expand the curriculum and stress available resources.

    Universities are also wrestling with the idea of liberal education and how to package the general education requirement.

    Although liberal education is increasingly under fire, employers and students still value it.

    Students’ career readiness skills – their ability to think critically and creatively, to collaborate effectively and to communicate well – remain strong predictors of future success in the workplace and in life.

    Reenvisioning the college major

    Even if students must still complete a major in order to earn a degree, colleges can allow them to bundle smaller modules — such as variable-credit minors, certificates or course sequences — into a customizable, modular major.

    This lets students, guided by advisers, assemble a degree that fits their interests and goals while drawing from multiple disciplines. A few project-based courses can tie everything together and provide context.

    Such a model wouldn’t undermine existing majors where demand is strong. For others, where demand for the major is declining, a flexible structure would strengthen enrollment, preserve faculty expertise rather than eliminate it, attract a growing number of nontraditional students who bring to campus previously earned credentials, and address the financial bottom line by rightsizing curriculum in alignment with student demand.

    One critique of such a flexible major is that it lacks depth of study, but it is precisely the combination of curricular content that gives it depth. Another criticism is that it can’t be effectively marketed to an employer. But a customized major can be clearly named and explained to employers to highlight students’ unique skill sets.

    Further, as students increasingly try to fit cocurricular experiences – such as study abroad, internships, undergraduate research or organizational leadership – into their course of study, these can also be approved as modules in a flexible curriculum.

    It’s worth noting that while several schools offer interdisciplinary studies majors, these are often overprescribed or don’t grant students access to in-demand courses. For a flexible-degree model to succeed, course sections would need to be available and added or deleted in response to student demand.

    Several schools also now offer microcredentials – skill-based courses or course modules that increasingly include courses in the liberal arts. But these typically need to be completed in addition to the requirements of the major.

    We take the college major for granted.

    Yet it’s worth noting that the major is a relatively recent invention.

    Before the 20th century, students followed a broad liberal arts curriculum designed to create well-rounded, globally minded citizens. The major emerged as a response to an evolving workforce that prioritized specialized knowledge. But times change – and so can the model.

    John Weigand does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Why the traditional college major may be holding students back in a rapidly changing job market – https://theconversation.com/why-the-traditional-college-major-may-be-holding-students-back-in-a-rapidly-changing-job-market-258383

    MIL OSI