Category: Academic Analysis

  • MIL-Evening Report: Is owning a dog good for your health?

    Source: The Conversation (Au and NZ) – By Tania Signal, Professor of Psychology, School of Health, Medical and Applied Sciences, CQUniversity Australia

    Pogodina Natalia/Shutterstock

    Australia loves dogs. We have one of the highest rates of pet ownership in the world, and one in two households has at least one dog.

    But are they good for our health?

    Mental health is the second-most common reason cited for getting a dog, after companionship. And many of us say we “feel healthier” for having a dog – and let them sleep in our bedroom.

    Here’s what it means for our physical and mental health to share our homes (and doonas) with our canine companions.

    Are there physical health benefits to having a dog?

    Having a dog is linked to lower risk of death over the long term. In 2019, a systematic review gathered evidence published over 70 years, involving nearly four million individual medical cases. It found people who owned a dog had a 24% lower risk of dying from any cause compared to those who did not own a dog.

    Having a dog may help lower your blood pressure through more physical activity.
    Barnabas Davoti/Pexels

    Dog ownership was linked to increased physical activity. This lowered blood pressure and helped reduce the risk of stroke and heart disease.

    The review found for those with previous heart-related medical issues (such as heart attack), living with a dog reduced their subsequent risk of dying by 35%, compared to people with the same history but no dog.

    Another recent UK study found adult dog owners were almost four times as likely to meet daily physical activity targets as non-owners. Children in households with a dog were also more active and engaged in more unstructured play, compared to children whose family didn’t have a dog.

    Exposure to dirt and microbes carried in from outdoors may also strengthen immune systems and lead to less use of antibiotics in young children who grow up with dogs.

    Children in households with a dog were often more active.
    Maryshot/Shutterstock

    Health risks

    However, dogs can also pose risks to our physical health. One of the most common health issues for pet owners is allergies.

    Dogs’ saliva, urine and dander (the skin cells they shed) can trigger allergic reactions resulting in a range of symptoms, from itchy eyes and runny nose to breathing difficulties.

    A recent meta-analysis pooled data from nearly two million children. Findings suggested early exposure to dogs may increase the risk of developing asthma (although not quite as much as having a cat does). The child’s age, how much contact they have with the dog and their individual risk all play a part.

    Slips, trips and falls are another risk – more people fall over due to dogs than cats.

    Having a dog can also expose you to bites and scratches, which may become infected and pose a risk for those with compromised immune systems. And dogs can introduce zoonotic diseases into your home, including ringworm and Campylobacter, a bacterium that causes diarrhoea.

    For those sharing the bed, there is an elevated risk of allergies and of picking up ringworm. It may also mean lost sleep, as dogs move around at night.

    On the other hand, some owners report feeling more secure while co-sleeping with their dogs, with the emotional benefit outweighing the possibility of sleep disturbance or waking up with flea bites.

    Proper veterinary care and hygiene practices are essential to minimise these risks.

    Many of us don’t just share a home with a dog – we let them sleep in our beds.
    Claudia Mañas/Unsplash

    What about mental health?

    Many people know the benefits of having a dog are not only physical.

    As companions, dogs can provide significant emotional support, helping to alleviate symptoms of anxiety, depression and post-traumatic stress. Their presence may offer comfort and a sense of purpose to individuals facing mental health challenges.

    Loneliness is a significant and growing public health issue in Australia.

    In the dog park and your neighbourhood, dogs can make it easier to strike up conversations with strangers and make new friends. These social interactions can help build a sense of community belonging and reduce feelings of social isolation.

    For older adults, dog walking can be a valuable loneliness intervention that encourages social interaction with neighbours, while also combating declining physical activity.

    However, if you’re experiencing chronic loneliness, it may be hard to engage with other people during walks. Even so, an Australian study found simply getting a dog was linked to decreased loneliness. People reported an improved mood – possibly due to the benefits of strengthening bonds with their dog.

    Walking a dog can make it easier to talk to people in your neighbourhood.
    KPegg/Shutterstock

    What are the drawbacks?

    While dogs can bring immense joy and numerous health benefits, there are also downsides and challenges. The responsibility of caring for a dog, especially one with behavioural issues or health problems, can be overwhelming and create financial stress.

    Dogs have shorter lifespans than humans, and the loss of a beloved companion can lead to depression or exacerbate existing mental health conditions.

    Lifestyle compatibility and housing conditions also play a significant role in whether having a dog is a good fit.

    The so-called pet effect suggests that pets, often dogs, improve human physical and mental health in all situations and for all people. The reality is more nuanced. For some, having a pet may be more stressful than beneficial.

    Importantly, the animals that share our homes are not just “tools” for human health. Owners and dogs can mutually benefit when the welfare and wellbeing of both are maintained.

    Tania Signal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Is owning a dog good for your health? – https://theconversation.com/is-owning-a-dog-good-for-your-health-238888

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: People don’t like a ‘white saviour’, but does it affect how they donate to charity?

    Source: The Conversation (Au and NZ) – By Robert Hoffmann, Professor of Economics, Tasmanian Behavioural Lab, University of Tasmania

    Shutterstock

    Efforts to redress global inequality are facing an unexpected adversary: the white saviour. It’s the idea that people of colour, whether in the Global South or North, need “saving” by a white Western person or aid worker.

    An eclectic mix of white activists have been publicly accused of being white saviours for trying to help different causes in the Global South. They include celebrities who adopted orphaned children, organised benefit concerts such as Live Aid, or called out rights abuses.

    Others include professional and volunteer charity workers and journalists reporting on poverty in Africa. Even activism at home can earn the white saviour label, like efforts to refine the proposal for the Indigenous Voice to Parliament in Australia.

    We conducted a series of studies with 1,991 representative Australians to find out what people thought made a white saviour, how charity appeal photographs create this impression, and how it affected donations.

    White saviourism and charities

    The concern is that white people’s overseas charity, even when well-meaning, can inadvertently hurt rather than help the cause. It could perpetuate harmful stereotypes of white superiority, disempower local people, or misdirect resources to make helpers feel good rather than alleviating genuine need.

    The fear of being labelled a white saviour could make people think twice about giving time or money to worthy causes. It might stop aid organisations using proven appeals to raise donations they need.

    Médecins Sans Frontières (MSF), for instance, released a video apologising for using photos depicting white people in aid settings, which aren’t representative of the majority local staff they employ.

    Therein lies the dilemma: white donors can relate to photos of white helpers, but this is easily interpreted as white saviourism.

    What makes someone a white saviour?

    Very little research exists into exactly what white saviourism means. Broadly, it seems to describe people in the Global North who support international causes for selfish reasons, to satisfy their own sentimentality and need for a positive image. We wanted to go deeper.

    In the first of our studies, we showed our participants 26 photographs depicting different Global South aid settings with a white helper.

    The helpers that participants thought of as highly “white saviour” typically had these characteristics:

    • they appeared to be privileged and superior

    • they gave help sentimentally and tokenistically

    • they conformed to the colonial stereotype of the helpless local and powerful foreigner.

    Further analysis showed these characteristics boil down to two essential features: ineffectiveness of the help and entitlement of the helpers.

    These two perceptions of the white saviour explain the problem for charity. Behavioural economics research has identified two main reasons for donating, and these perceptions undermine both.

    Why do people donate at all?

    So to see how much white saviourism affects charities, we need to know why people donate in the first place.

    One reason for giving is pure altruism, the desire to help others with no direct benefit to oneself. The effective altruism movement encourages people to make every donated dollar count – getting the maximum bang for the buck in terms of measurable outcomes for those in need.

    The difficulty for effective altruists is in assessing the impact of different charities vying for their donations. There are now websites that list charities by lives saved per dollar donated.




    Alternatively, donors might look at a charity’s appeal images for clues of how effectively it will use their dollars.

    Depicting white people as saviours can create the impression of tokenistic aid that only serves the helper’s sentimental needs. Evidence shows people resent impure motives in others (including organisations) and might try to penalise them.

    Behavioural economics research also shows, as you might expect, that some people are more concerned about themselves than others when giving. This is known as “warm glow” giving.

    Warm glow givers have several self-serving motivations. They include giving to gain self-respect or social status.

    People also have a desire to meet their social obligations. For richer folks this could include charitable giving. And giving can reduce guilt they might feel about their privilege.

    Just like the effective altruist, the warm glow giver could be put off by any sign of white saviourism. They don’t want to be seen to be endorsing it.

    Do people still donate?

    All this suggests that seeing a white saviour depiction in a charitable appeal will make people donate less.

    We examined this in another study, in which participants were shown each of the previous photos. This time they were asked, for every photo, if they were willing to donate to a charity that uses it.

    And as we expected, participants reported weaker intentions to donate in response to the photos previously rated as high in white saviourism.

    Participants were shown photos of white aid workers in the Global South.
    Shutterstock

    But intentions do not always translate into actions, as psychologists have demonstrated for many years.

    To overcome this, we measured real donations in another study. Again participants saw the same photos, but this time they had the chance to donate part of their participation fee to a real charity when seeing them.

    What we found surprised us: the white saviour effect disappeared. How high a photo was on the white saviour scale had no impact on how much participants donated when seeing it.

    Does the end justify the motivation?

    Our results summarise the dilemma. Donors might object to white saviourism by charities, but in the end feel that it’s the help that counts, not the motivation behind it.

    We found some evidence for this when we asked participants about their general views of white saviourism.

    Almost 70% agreed that white saviour motives are common in Western help and that this was problematic for recipients. But interestingly, only 42% thought helpers with these motives deserved criticism.

    Together, this might suggest that people feel white saviour help is better than no help. There are voices in the charity community who echo this sentiment: imposing conditions on charitable giving will serve to reduce it.

    In an interview with the Wall Street Journal, Elise Westhoff, president of the Philanthropy Roundtable in the United States, said “by imposing those ‘musts’ and ‘shoulds’, you really limit human generosity”.

    But this doesn’t mean there are no legitimate concerns. There are, but it’s not hard for charities to address them.

    Our results show that white saviour perceptions do not affect actual donations. Read another way, this suggests charities can safely replace highly “white saviour” images without losing donations for their causes.

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. People don’t like a ‘white saviour’, but does it affect how they donate to charity? – https://theconversation.com/people-dont-like-a-white-saviour-but-does-it-affect-how-they-donate-to-charity-239307


  • MIL-Evening Report: XEC is now in Australia. Here’s what we know about this hybrid COVID variant

    Source: The Conversation (Au and NZ) – By Lara Herrero, Research Leader in Virology and Infectious Disease, Griffith University

    Kateryna Kon/Shutterstock

    Over the nearly five years since COVID first emerged, you’d be forgiven if you’ve lost track of the number of new variants we’ve seen. Some have had a bigger impact than others, but virologists have documented thousands.

    The latest variant to make headlines is called XEC. This omicron subvariant has been reported predominantly in the northern hemisphere, but it has now been detected in Australia too.

    So what do we know about XEC?

    Is COVID still a thing?

    People are now testing for COVID less and reporting it less. Enthusiasm to track the virus is generally waning.

    Nonetheless, Australia is still collecting and reporting COVID data. Although the number of cases is likely to be much higher than the number documented (around 275,000 so far this year), we can still get some idea of when we’re seeing significant waves, compared to periods of lower activity.

    Australia saw its last COVID peak in June 2024. Since then cases have been on the decline.

    But SARS-CoV-2, the virus that causes COVID, is definitely still around.

    Which variants are circulating now?

    The main COVID variants circulating currently around the world include BA.2.86, JN.1, KP.2, KP.3 and XEC. These are all descendants of omicron.

    The XEC variant was first detected in Italy in May 2024. The World Health Organization (WHO) designated it as a variant “under monitoring” in September.

    Since its detection, XEC has spread to more than 27 countries across Europe, North America and Asia. As of mid-September, the highest numbers of cases have been identified in countries including the United States, Germany, France, the United Kingdom and Denmark.

    XEC is currently making up around 20% of cases in Germany, 12% in the UK and around 6% in the US.

    The virus behind COVID continues to evolve.
    Photo by Centre for Ageing Better/Pexels

    Although XEC remains a minority variant globally, it appears to have a growth advantage over other circulating variants. We don’t know why yet, but reports suggest it may be able to spread more easily than other variants.

    For this reason, it’s predicted XEC could become the dominant variant worldwide in the coming months.

    How about in Australia?

    The most recent Australian Respiratory Surveillance Report noted there has been an increasing proportion of XEC sequenced recently.

    In Australia, 329 SARS-CoV-2 sequences collected from August 26 to September 22 have been uploaded to AusTrakka, Australia’s national genomics surveillance platform for COVID.

    The majority of sequences (301 out of 329, or 91.5%) were sub-lineages of JN.1, including KP.2 (17 out of 301) and KP.3 (236 out of 301). The remaining 8.5% (28 out of 329) were recombinants consisting of one or more omicron sub-lineages, including XEC.

    Estimates based on data from GISAID, an international repository of viral sequences, suggest XEC is making up around 5% of cases in Australia, or 16 of 314 samples sequenced.

    Queensland reported the highest rate in the past 30 days (8%, or eight of 96 sequences), followed by South Australia (5%, or five of 93), Victoria (5%, or one of 20) and New South Wales (3%, or two of 71). Western Australia recorded no XEC among its 34 sequences. No data were available for other states and territories.
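    As a quick sanity check, the state-level percentages follow directly from the sequence counts quoted above. This is a minimal sketch: the counts are those reported in the surveillance data, and rounding to the nearest whole percentage is an assumption about how the figures were derived.

    ```python
    # Check the XEC proportions quoted above.
    # Counts are (XEC sequences, total sequences) as reported.

    def xec_share(xec: int, total: int) -> int:
        """Return the XEC share as a whole-number percentage."""
        return round(100 * xec / total)

    state_counts = {
        "Queensland": (8, 96),
        "South Australia": (5, 93),
        "Victoria": (1, 20),
        "New South Wales": (2, 71),
        "Western Australia": (0, 34),
    }

    for state, (xec, total) in state_counts.items():
        print(f"{state}: {xec}/{total} sequences = {xec_share(xec, total)}%")

    # National estimate from GISAID: 16 of 314 samples.
    print(f"Australia overall: {xec_share(16, 314)}%")
    ```

    Running this reproduces the percentages in the text (8%, 5%, 5%, 3%, 0% and roughly 5% nationally), confirming the figures are internally consistent.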

    What do we know about XEC? What is a recombinant?

    The XEC variant is believed to be a recombinant descendant of two previously identified omicron subvariants, KS.1.1 and KP.3.3. Recombinant variants form when two different variants infect a host at the same time, which allows the viruses to switch genetic information. This leads to the emergence of a new variant with characteristics from both “parent” lineages.

    KS.1.1 is one of the group commonly known as the “FLiRT” variants, while KP.3.3 is one of the “FLuQE” variants. Both of these variant groups have contributed to recent surges in COVID infections around the world.

    The WHO’s naming conventions for new COVID variants often use a combination of letters to denote new variants, particularly those that arise from recombination events among existing lineages. The “X” typically indicates a recombinant variant (as with XBB, for example), while the letters following it identify specific lineages.

    We know very little so far about XEC’s characteristics specifically, and how it differs from other variants. But there’s no evidence to suggest symptoms will be more severe than with earlier versions of the virus.

    What we do know is which mutations this variant has. In the S gene that encodes the spike protein we can find a T22N mutation (inherited from KS.1.1) as well as Q493E (from KP.3.3), along with other mutations known to the omicron lineage.

    Will vaccines still work well against XEC?

    The most recent surveillance data doesn’t show any significant increase in COVID hospitalisations. This suggests the current vaccines still provide effective protection against severe outcomes from circulating variants.

    As the virus continues to mutate, vaccine companies will continue to update their vaccines. Both Pfizer and Moderna have updated vaccines to target the JN.1 variant, which is a parent strain of the FLiRT variants and therefore should protect against XEC.

    However, Australia is still waiting to hear which vaccines may become available to the public and when.

    In the meantime, omicron-based vaccines such as the current XBB.1.5 Spikevax (Moderna) or Comirnaty (Pfizer) are still likely to provide good protection from XEC.

    It’s hard to predict how XEC will behave in Australia as we head into summer, and we’ll need more research to understand this variant as it spreads. But XEC was first detected in Europe during the northern hemisphere’s summer months, which suggests it might be well suited to spreading in warmer weather.

    Lara Herrero receives funding from NHMRC.

    ref. XEC is now in Australia. Here’s what we know about this hybrid COVID variant – https://theconversation.com/xec-is-now-in-australia-heres-what-we-know-about-this-hybrid-covid-variant-239292


  • MIL-Evening Report: What are the greatest upsets in NRL grand final history?

    Source: The Conversation (Au and NZ) – By Wayne Peake, Adjunct research fellow, School of Humanities and Communication Arts, Western Sydney University

    The Penrith Panthers and Melbourne Storm will contest the National Rugby League (NRL) grand final on Sunday.

    Betting markets have them pretty much equal favourites. However, history shows grand finals don’t always go to plan.

    But what are the biggest upsets in NRL grand final history?

    Using a combination of formlines during the season and in finals, betting odds, media coverage and past performances, here are some of the most outlandish upsets in rugby league’s history.

    1944: Balmain 12, Newtown 8

    In 1944, Newtown was the minor premier while Balmain was second.

    Newtown entered the finals series as hot favourite and looked even hotter after destroying third-placed St George 55–7 in the first semi-final.

    However, in the final, Balmain won 19–6. That wasn’t the end of the story, though.

    Under the rules of the day, Newtown, as minor premier, could seek a rematch in a grand final “challenge”.

    Newtown fielded a much stronger side and most expected it to reverse the final result. However, Balmain won again, 12–8.

    1952: Western Suburbs 22, South Sydney 12

    In 1952, Wests were minor premiers, while Souths finished third.

    Souths won the first semi-final 18–10 but Wests, as minor premiers, went straight to the grand final challenge three weeks later anyway. Meanwhile, Souths beat North Sydney to advance.

    According to the Sydney Truth, Wests were “regarded in some quarters as rank outsiders”.

    Then, rumours spread that Wests had “thrown” the first game and the referee assigned to the decider, George Bishop, had placed £400 on them, causing their price to shorten.

    Bishop sent off a player from each team ten minutes into the second half. Souths scored a try with 20 minutes to go to take the lead before Wests scored four tries in the last ten minutes to win.

    Bishop retired after the grand final.

    1963: St George 8, Western Suburbs 3

    In 1963, St George was the minor premier, while Wests were second. However, Wests, who had lost the previous two grand finals to St George, had beaten them twice in the regular rounds and again in the major semi-final, and went into the game as favourites.

    On grand final day, the field deteriorated into a quagmire and led to the famous post-match “gladiators” photograph of captains Arthur Summons and Norm Provan shaking hands while coated in mud.

    The foul conditions contributed to a low-scoring game, which St George won 8–3.

    Once more it was suspected the referee, this time Darcy Lawler, had a financial interest in the outcome. He, too, retired immediately.

    Today we view St George’s victory in the context of the club’s huge streak of premierships from 1956 to 1966.

    1989: Canberra 19, Balmain 14

    South Sydney had been minor premiers while Balmain finished third, one point clear of Canberra.

    Balmain were generally considered to have been more impressive than Canberra and were favourites for the grand final.

    One media expert, Harry Craven, was so confident Balmain would win that he bet his “weatherboard” (house) on the Tigers.

    In the grand final, Balmain led 14–8 with 15 minutes to play before Canberra levelled at 14–14 with 90 seconds remaining.

    After 20 minutes of extra time, Canberra won 19–14 and became the first team to win from further back than third in the regular season.

    1995: Canterbury 17, Manly 4

    Possibly the hottest grand final favourites of the past half-century, Manly lost just two games in the regular season and shared the minor premiership with Canberra.

    Canterbury (officially, the “Sydney Bulldogs” in 1995) were sixth and needed to win four straight games to be premier.

    The two sides met once in the regular season, with Manly winning 26–0.

    In the grand final, the Bulldogs led 6–4 at half-time and disaster loomed when Terry Lamb was sin-binned early in the second term.

    Somehow, the Dogs held Manly out until his return, then gained the ascendancy and won comfortably.

    1997: Newcastle 22, Manly 16

    In 1997 we had the first season of the News Limited-funded “Super League”.

    The glamorous Manly side was once more expected to be an easy winner over Newcastle, which was contesting its first grand final.

    Only two teams in 70 years had won at their first attempt, while Manly had won its past 11 matches against the Knights.

    The grand final followed its anticipated plot until Newcastle’s Robbie O’Davis evened the score at 16–16. Newcastle missed with two field goal attempts, but after the second, Darren Albert regathered the ball and pierced the Manly defence to score under the posts with six seconds remaining.

    In 1997, the Newcastle Knights secured a maiden title against the Manly Sea Eagles.

    1999: Melbourne 20, St George Illawarra 18

    Odds for the 1999 grand final are unknown but the press anointed St George “hot favourites”, while Canberra champion Ricky Stuart rated them “unbeatable”.

    Melbourne was in just its second year of NRL competition and had never beaten St George.

    Melbourne had pulled off “escapes” against Canterbury and Parramatta to make the decider but the Saints were winning with ease and even crushed Melbourne 34–10 in the qualifying final.

    In the decider, St George led 14–0 and was looking good. Then, in the 51st minute, Anthony Mundine kicked the ball to a vacant try line but fumbled it touching down.

    The Melbourne Storm shocked the NRL world when they won the 1999 grand final.

    Nevertheless, St George maintained an 18–6 advantage midway through the second half, before a Storm fightback.

    With minutes remaining, Melbourne received a penalty try which it converted to win the game.

    The biggest upset: 1969, Balmain 11, South Sydney 2

    Most agree the biggest grand final upset is Balmain’s 11–2 defeat of South Sydney in 1969.

    Bookies had Souths as heavy favourites – they had won the previous two grand finals, while Balmain was a young team lacking grand final experience.

    However, the form lines of the two teams were not dissimilar.

    At the end of the regular season, South Sydney was the minor premier with Balmain just one win behind them.

    Souths defeated Balmain by one point in the semi-final, and a week later, Balmain beat Manly by a point to scrape into the grand final.

    Despite Souths’ heavy favouritism, Balmain were not friendless. Of six “experts” whose opinion was sought by one newspaper on the morning of the game, two picked Balmain outright and another conceded them an even-money chance.

    It was perhaps the circumstances of the game, as much as the result, that have lent the 1969 grand final its legend status.

    Souths, noted for their attacking potency, were unable to score a try. Balmain scored a single try early in the second half but then several Balmain players set about disrupting the Souths attack by, allegedly, feigning injuries to give their teammates a breather.

    The game has since become known as the “sit-down grand final”.

    Wayne Peake does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. What are the greatest upsets in NRL grand final history? – https://theconversation.com/what-are-the-greatest-upsets-in-nrl-grand-final-history-239380


  • MIL-Evening Report: How we created a beautiful native wildflower meadow in the heart of the city using threatened grassland species

    Source: The Conversation (Au and NZ) – By Katherine Horsfall, PhD Candidate, School of Agriculture, Food and Ecosystem Sciences, The University of Melbourne

    Matthew Stanton, CC BY-NC

    A city street may seem an unusual place to save species found in critically endangered grasslands. My new research, though, shows we can use plants from these ecosystems to create beautiful and biodiverse urban wildflower meadows. This means cities, too, can support nature repair.

    Species-rich grassy ecosystems are some of the most threatened plant communities on the planet. Occupying easily developed flat land, grassy ecosystems are routinely sacrificed as our cities expand.

    In south-east Australia, the volcanic plains that support Melbourne’s northern and western suburbs were once grasslands strewn with wildflowers, “resembling a nobleman’s park on a gigantic scale”, according to early explorer Thomas Mitchell. But these exceptionally diverse, critically endangered ecosystems have been reduced to less than 1% of their original area. The few remnants continue to be lost to urban development and weed invasion.

    A mix of the seeds used to create the meadow.
    Hui-Anne Tan, CC BY-NC

    Unfortunately, efforts to restore the grasslands around Melbourne have had mixed results. In 2020 the City of Melbourne took matters into its own hands. Recognising it is possible to enrich the diversity of birds, bats and insects by providing low-growing native plants, the council set a goal to increase understorey plants by 20% on the land it manages.

    Creating a large native grassland in inner-city Royal Park would help achieve this goal. Adopting a technique used by wildflower meadow designers, we sowed a million seeds of more than two dozen species from endangered grasslands around Melbourne. All but one of these species established in the resulting native wildflower meadow.

    The recreated native wildflower meadow is close to an inner-city road.
    Matthew Stanton, CC BY-NC

    What were the challenges at this site?

    Existing restoration techniques remove nutrient-enriched topsoils full of weed seeds before sowing native seeds. The target plant community can then establish with less competition from nutrient-hungry weeds.

    However, this approach could not be used at the Royal Park site. Topsoil removal cannot be used on many urban sites where soils are contaminated or there are underground services. Alternative approaches are needed to reduce weed competition while minimising soil disturbance.

    I saw a possible answer in the horticultural approaches used to create designed wildflower meadows.

    Preparing the selected site in Royal Park by raking away mulch.
    Hui-Anne Tan, CC BY-NC

    While still rare in Australia, designed wildflower meadows can increase the amenity and biodiversity of urban environments. They also reduce the costs of managing and mowing turf grass. These meadows are designed to be infrequently mown or burnt.

    Wildflower meadow designers typically use an international suite of species that can be established from seed and persist without fertiliser or regular irrigation. An abundance of flowers makes people more accepting of “messy” vegetation. Recognising this, designers select a mix of species that will flower for as much of the year as possible.

    Seed being spread by hand across the prepared area in April 2020.
    Hui-Anne Tan, CC BY-NC

    To reduce competition from weeds, these meadows are often created on a layer of sand that covers the original site soils. The low-nutrient sand buries weed seeds and creates a sowing surface that resists weed invasion from the surrounding landscape.

    However, the grasslands around Melbourne grow on clay soils, not sand. Would these techniques work for plants from these ecosystems?

    A deep sand layer controls weeds and slugs

    To find out, we sowed more than a million seeds on sites with two depths of sand (10mm and 80mm) and one without a sand layer in Royal Park. Within one year, 26 of the 27 species sown had established to form a dense, flowering meadow across all sand depths. These plants included three threatened species.

    The hoary sunray, Leucochrysum albicans subsp. tricolor, is one of the endangered species in the native wildflower meadow.
    Marc Freestone/Royal Botanic Gardens Victoria, CC BY-NC-SA

    Crucially, the deepest sand layer reduced weed numbers and therefore time spent weeding.

    Interestingly, slugs played a role in determining the diversity of the native meadow. South-east Australia’s grasslands have largely evolved without slugs. As a result, seedlings lack chemical or physical defences against grazing by slugs, which can greatly reduce species diversity in native meadows.

    Again, sand provided a real benefit. Fewer slugs occurred on the deepest sand layer compared to bare soil. The suggestion that sand can deter slugs is consistent with meadow research in Europe.

    By September 2020, seedlings are growing on the prepared plots. The roof tile in the foreground is for monitoring slug numbers.
    Hui-Anne Tan, CC BY-NC

    Now to repair nature in all our cities

    Our research gives us another technique to reinstate critically endangered plant communities. We can use it to bring nature back to city parks and streets.

    Working in urban contexts also unlocks other advantages. There’s ready access to irrigation while the meadow gets established and to communities keen to care for natural landscapes. Creating native wildflower meadows in cities also helps native animals survive, including threatened species that call our cities home.

    People will be able to engage with beautiful native plants that are now rare in cities. Enriching our experience of nature can enhance our health and wellbeing.

    The meadow’s plant community was established by November 2020, six months after sowing.
    David Hannah, CC BY-NC

    My colleagues and I trialled these approaches with the support of the City of Melbourne. We are continuing our research to improve the scale and sustainability of native wildflower meadows in other municipalities.

    Native wildflower meadows and grassland restoration projects could genuinely help Australia meet its commitment to restore 30% of degraded landscapes. But first we need to invest much more in seed production. Reinstating native species on degraded land requires a lot of seed.

    Once seed supply is more certain, we will be able to bring back native biodiversity and beauty to streets, parks and reserves across the country.


    I would like to acknowledge the Traditional Custodians of the land on which the project took place, the Wurundjeri and Bunurong people of the Kulin Nations, and pay my respects to their Elders, past, present and emerging. I also acknowledge my colleagues listed as co-authors on the research paper that formed the basis of this article: urban ecologists Nicholas S.G. Williams and Stephen Livesley, and seed ecologists Megan Hirst and John Delpratt.

    Katherine Horsfall received funding from the City of Melbourne to undertake this research and receives funding from the Australian Research Training Program.

    ref. How we created a beautiful native wildflower meadow in the heart of the city using threatened grassland species – https://theconversation.com/how-we-created-a-beautiful-native-wildflower-meadow-in-the-heart-of-the-city-using-threatened-grassland-species-240332

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: From cheeky thrill to grande dame – the Moulin Rouge celebrates 135 years of scandal and success

    Source: The Conversation (Au and NZ) – By Will Visconti, Teacher and researcher, Art History, University of Sydney

    Henri de Toulouse-Lautrec At the Moulin Rouge – The Dance, 1890 Henri de Toulouse-Lautrec/Wikimedia Commons

    When the Moulin Rouge first opened on October 6 1889, it drew audiences from across classes and countries.

    The Moulin offered an array of fin-de-siècle (end-of-the-century) entertainments to Paris locals and visitors. Located in Montmartre, its name, the “red windmill”, alluded to Montmartre’s history as a rural idyll. The neighbourhood was also associated with artistic bohemia, crime, and revolutionary spirit. This setting added a certain thrill for bourgeois audiences.

    From irreverent newcomer to French institution, the Moulin Rouge has survived scandal and an inferno, and found new ways to connect with audiences.




    Read more:
    How the Eiffel Tower became silent cinema’s icon


    Red and electric

    In 1889, the Moulin Rouge was not the only red landmark to open in Paris. The Eiffel Tower, built as part of the Universal Exhibition and originally painted red, had opened earlier that same year. What set them apart, however, was how the public received them.

    The Moulin Rouge was an instant hit, capitalising on the global popularity of a dance called the cancan. Dancers like Moulin Rouge headliner La Goulue (“The Glutton”, real name Louise Weber) were seen as more appropriate emblems for the city than the Tower, which many considered an eyesore.

    In an illustration from Le Courrier Français newspaper, a dancer modelled on a photograph of La Goulue holds her leg aloft, flashing her underwear with the caption “Greetings to the provinces and abroad!”.

    Every aspect of the Moulin spoke to the zeitgeist, from its design to the performances, the use of electric lights that adorned its façade, and its advertising.

    Its managers, the impresario team of Joseph Oller and Charles Harold Zidler, had a string of successful venues and businesses to their names. They recognised the importance of modern marketing, using print media, publicity photographs, and posters to spark public interest.

    Among the most iconic images of the Moulin is Henri de Toulouse-Lautrec’s 1891 poster. At its centre is La Goulue, kicking her legs amid swirling petticoats.

    Henri Toulouse-Lautrec’s 1891 poster.
    Shutterstock

    She certainly can cancan

    Found primarily in working-class dance halls from as early as the 1820s, the cancan became a staple of popular entertainment the world over.

    Part of the dance’s thrill lay in the dancers’ freedom of movement and titillation of spectators, as well as its anti-establishment energy. Women used the cancan to thumb their nose at authority via steps like the coup de cul (“arse flash”) or coup du chapeau (removing men’s hats with a high kick).

    The cancan was not the only attraction at the Moulin. There were themed spaces, sideshows, and variety performances ranging from belly dancers and conjoined twins to Le Pétomane (“The Fartomaniac”), a flatulist and the venue’s highest-paid performer. People-watching was equally popular.

    Famous farter, Le Pétomane (Joseph Pujol).
    Wikimedia Commons

    Scandals, riots, and royalty

    Over the years, the Moulin has been no stranger to controversy.

    In its early years, it cultivated an air of misbehaviour and featured in pleasure guides for visiting sex tourists.

    In 1893 it hosted the Bal des Quat’z’Arts (Four-Arts Ball) held by students from local studios. Accusations of public indecency were made against the models and dancers in attendance, and violent protests followed after the women were arrested.

    In 1907 the writer Colette appeared onstage at the Moulin in an Egyptian-inspired pantomime with her then-lover, Missy, the Marquise de Belbeuf. When the act culminated in a passionate kiss, a riot broke out.

    Historical footage shows the Moulin Rouge as it was.

    Kicking on and on

    Over time, the Moulin Rouge shows changed their format to keep pace with public taste, though the cancan remained. The venue hosted revues and operettas, and various stars including Edith Piaf, Ella Fitzgerald, Frank Sinatra and Liza Minnelli.

    Famous guests have included British royalty: from Edward VII (while Prince of Wales) to his great-granddaughter, Queen Elizabeth II, and her son, Prince Edward.

    Since its opening, the Moulin’s fortunes have waxed and waned.

    In 1915 the Moulin Rouge burned down but was rebuilt in 1921. Its famous windmill sails fell off overnight earlier this year but were swiftly repaired.

    In the 1930s, it survived the Depression and rise of cinema (also capturing the attention of several filmmakers). It also survived the Nazi occupation of Paris in the 1940s.

    By the early 1960s, Jacki Clérico was managing the Moulin’s show after his father had revamped the venue as a dinner theatre destination. The younger Clérico oversaw additions like a giant aquarium where dancers swam with snakes, and its now-famous “nude line” – a chorus of topless dancers – in its shows.

    In 1963, the Moulin Rouge struck upon a winning formula: revues, all named by Clérico with titles beginning with the letter “F” – from Frou Frou to Fantastique and Formidable. Since 1999, the revue Féerie (“Fairy”, also a French genre of stage extravaganza) has been performed almost without interruption.

    The Moulin Rouge or ‘red mill’ today, with its famous windmill.
    Rafa Barcelos/Shutterstock

    Ticket sales were boosted thanks to Baz Luhrmann’s 2001 film Moulin Rouge! and more recently Moulin Rouge! The Musical.

    Since COVID, the Moulin Rouge management have diversified. The windmill’s interior has been rented out via Airbnb and the Moulin’s dance troupe has performed on France’s televised New Year’s Eve celebrations. This year, the Moulin Rouge and its dancers were part of the Paris Olympics celebrations, dancing in heavy rain.

    Though people have come to appreciate the Eiffel Tower too, the Moulin Rouge can still argue its status as the pinnacle of live entertainment in the French capital: immediately recognisable, internationally visible, and quintessentially Parisian.

    Will Visconti is the author of Beyond the Moulin Rouge: The Life & Legacy of La Goulue (2022), published by the University of Virginia Press.

    ref. From cheeky thrill to grande dame – the Moulin Rouge celebrates 135 years of scandal and success – https://theconversation.com/from-cheeky-thrill-to-grande-dame-the-moulin-rouge-celebrates-135-years-of-scandal-and-success-239849

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: 71% of Australian uni staff are using AI. What are they using it for? What about those who aren’t?

    Source: The Conversation (Au and NZ) – By Stephen Hay, Senior Lecturer, School of Education and Professional Studies, Griffith University

    Yanz Island/Shutterstock

    Since ChatGPT was released at the end of 2022, there has been a lot of speculation about the actual and potential impact of generative AI on universities.

    Some studies have focused on students’ use of AI. There has also been research on what it means for teaching and assessment.

    But there has been no large-scale research on how university staff in Australia are using AI in their work.

    Our new study surveyed more than 3,000 academic and professional staff at Australian universities about how they are using generative AI.

    Our study

    Our survey sample comprised 3,421 university staff, mostly from 17 universities around Australia.

    It included academics, sessional academics (who are employed on a session-by-session basis) and professional staff. It also included adjunct staff (honorary academic positions) and senior staff in executive roles.

    Academic staff represented a wide range of disciplines including health, education, natural and physical sciences, and society and culture. Professional staff worked in roles such as research support, student services and marketing.

    The average age of respondents was 44.8 years and more than half the sample was female (60.5%).

    The survey was open online for around eight weeks in 2024.

    We surveyed academic and professional staff at universities around Australia.
    Panitan/Shutterstock

    Most university staff are using AI

    Overall, 71% of respondents said they had used generative AI for their university work.

    Academic staff were more likely to use AI (75%) than professional staff (69%) or sessional staff (62%). Senior staff were the most likely to use AI (81%).

    Among academic staff, those from information technology, engineering, and management and commerce were most likely to use AI. Those from agriculture and environmental studies, and natural and physical sciences, were least likely to use it.

    Professional staff in business development, and learning and teaching support, were the most likely to report using AI. Those working in finance and procurement, and legal and compliance areas, were least likely to use AI.

    Given how much publicity and debate there has been about AI in the past two years, the fact that nearly 30% of university staff had not used AI suggests adoption is still at an early stage.

    What tools are staff using?

    Survey respondents were asked which AI tools they had used in the previous year. They reported using 216 different AI tools, which was many more than we anticipated.

    Around one-third of those using AI had only used one tool, and a further quarter had used two. A small number of staff (around 4%) had used ten tools or more.

    General AI tools were by far the most frequently reported. For example, ChatGPT was used by 88% of AI users and Microsoft Copilot by 37%.

    University staff are also commonly using AI tools with specific purposes such as image creation, coding and software development, and literature searching.

    We also asked respondents how frequently they used AI for a range of university tasks. Literature searching, writing and summarising information were the most common, followed by course development, teaching methods and assessment.

    ChatGPT was the most common generative AI tool used by our respondents.
    Monkey Business Images/ Shutterstock

    Why aren’t some staff using AI?

    We asked staff who had not yet used AI for work to explain their thinking. The most common reason they gave was that AI was not useful or relevant to their work. For example, one professional staff member stated:

    While I have explored a couple of chat tools (Chat GPT and CoPilot) with work-related questions, I’ve not needed to really apply these tools to my work yet […].

    Others said they weren’t familiar with the technology, were uncertain about its use or didn’t have time to engage. As one academic told us plainly, “I don’t feel confident enough yet”.

    Ethical objections to AI

    Others raised ethical objections or viewed the technology as untrustworthy and unreliable. As one academic told us:

    I consider generative AI to be a tool of plagiarism. The uses to date, especially in the creative industries […] have involved machine learning that uses the creative works of others without permission.

    They also raised concerns about AI undermining human activities such as writing, critical thinking and creativity – which they saw as central to their professional identities. As one sessional academic said:

    I want to think things through myself rather than trying to have a computer think for me […].

    Another academic echoed:

    I believe that writing and thinking is fundamental to the work we do. If we’re not doing that, then […] why do we need to exist as academics?

    How should universities respond?

    Universities are at a crucial juncture with generative AI. They face an uneven uptake of the technology by staff in different roles and divided opinions on how universities should respond.

    These different views suggest universities need to have a balanced response to AI that addresses both the benefits and concerns around this technology.

    Despite differing opinions in our survey, there was still agreement among respondents that universities need to develop clear, consistent policies and guidelines to help staff use AI. Staff also said it was crucial for universities to prioritise staff training and invest in secure AI tools.

    Alicia Feldman receives an Australian Government Research Training Program Scholarship and Fee Offset.

    Paula McDonald receives funding from the Australian Research Council.

    Abby Cathcart and Stephen Hay do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. 71% of Australian uni staff are using AI. What are they using it for? What about those who aren’t? – https://theconversation.com/71-of-australian-uni-staff-are-using-ai-what-are-they-using-it-for-what-about-those-who-arent-240337

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Is big tech harming society? To find out, we need research – but it’s being manipulated by big tech itself

    Source: The Conversation (Au and NZ) – By Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology

    AlexandraPopova/Shutterstock

    For almost a decade, researchers have been gathering evidence that the social media platform Facebook disproportionately amplifies low-quality content and misinformation.

    So it was something of a surprise when in 2023 the journal Science published a study that found Facebook’s algorithms were not major drivers of misinformation during the 2020 United States election.

    This study was funded by Facebook’s parent company, Meta. Several Meta employees were also part of the authorship team. It attracted extensive media coverage. It was also celebrated by Meta’s president of global affairs, Nick Clegg, who said it showed the company’s algorithms have “no detectable impact on polarisation, political attitudes or beliefs”.

    But the findings have recently been thrown into doubt by a team of researchers led by Chhandak Bagchi from the University of Massachusetts Amherst. In an eLetter also published in Science, they argue the results were likely due to Facebook tinkering with the algorithm while the study was being conducted.

    In a response eLetter, the authors of the original study acknowledge their results “might have been different” if Facebook had changed its algorithm in a different way. But they insist their results still hold true.

    The whole debacle highlights the problems caused by big tech funding and facilitating research into their own products. It also highlights the crucial need for greater independent oversight of social media platforms.

    Merchants of doubt

    Big tech has started investing heavily in academic research into its products. It has also been investing heavily in universities more generally. For example, Meta and its chief Mark Zuckerberg have collectively donated hundreds of millions of dollars to more than 100 colleges and universities across the United States.

    This is similar to what big tobacco once did.

    In the mid-1950s, cigarette companies launched a coordinated campaign to manufacture doubt about the growing body of evidence which linked smoking with a number of serious health issues, such as cancer. It was not about explicitly falsifying or manipulating research, but about selectively funding studies and drawing attention to inconclusive results.

    This helped foster a narrative that there was no definitive proof smoking causes cancer. In turn, this enabled tobacco companies to keep up a public image of responsibility and “goodwill” well into the 1990s.

    Big tobacco ran a campaign to manufacture doubt about the health effects of smoking.
    Ralf Liebhold/Shutterstock

    A positive spin

    The Meta-funded study published in Science in 2023 claimed Facebook’s news feed algorithm reduced user exposure to untrustworthy news content. The authors said “Meta did not have the right to prepublication approval”, but acknowledged that The Facebook Open Research and Transparency team “provided substantial support in executing the overall project”.

    The study used an experimental design where participants – Facebook users – were randomly allocated into a control group or treatment group.

    The control group continued to use Facebook’s algorithmic news feed, while the treatment group was given a news feed with content presented in reverse chronological order. The study sought to compare the effects of these two types of news feeds on users’ exposure to potentially false and misleading information from untrustworthy news sources.

    The experiment was robust and well designed. But during the short time it was conducted, Meta changed its news feed algorithm to boost more reliable news content. In doing so, it changed the control condition of the experiment.

    The reduction in exposure to misinformation reported in the original study was likely due to the algorithmic changes. But these changes were temporary: a few months later in March 2021, Meta reverted the news feed algorithm back to the original.

    In a statement to Science about the controversy, Meta said it made the changes clear to researchers at the time, and that it stands by Clegg’s statements about the findings in the paper.

    Unprecedented power

    In downplaying the role of algorithmic content curation for issues such as misinformation and political polarisation, the study became a beacon for sowing doubt and uncertainty about the harmful influence of social media algorithms.

    To be clear, I am not suggesting the researchers who conducted the original 2023 study misled the public. The real problem is that social media companies not only control researchers’ access to data, but can also manipulate their systems in a way that affects the findings of the studies they fund.

    What’s more, social media companies have the power to promote certain studies on the very platform the studies are about. In turn, this helps shape public opinion. It can create a scenario where scepticism and doubt about the impacts of algorithms can become normalised – or where people simply start to tune out.

    This kind of power is unprecedented. Even big tobacco could not control the public’s perception of itself so directly.

    All of this underscores why platforms should be mandated to provide both large-scale data access and real-time updates about changes to their algorithmic systems.

    When platforms control access to the “product”, they also control the science around its impacts. Ultimately, these self-research funding models allow platforms to put profit before people – and divert attention away from the need for more transparency and independent oversight.

    Timothy Graham receives funding from the Australian Research Council (ARC) for his Discovery Early Career Researcher Award, ‘Combatting Coordinated Inauthentic Behaviour on Social Media’. He also receives ARC funding for the Discovery Project, ‘Understanding and combatting “Dark Political Communication”‘ (2024–2027).

    ref. Is big tech harming society? To find out, we need research – but it’s being manipulated by big tech itself – https://theconversation.com/is-big-tech-harming-society-to-find-out-we-need-research-but-its-being-manipulated-by-big-tech-itself-240110

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: ‘Carbon contracts for difference’ are not a silver bullet for climate action

    Source: The Conversation – Canada – By Daniel Rosenbloom, Assistant Professor and Rosamond Ivey Research Chair in Sustainability Transitions, Carleton University

    Canadian federal climate policies and investments look increasingly fragile. Could ‘carbon contracts for difference’ help ensure the survival of long-term climate action in Canada? (Shutterstock)

    With the end of the supply-and-confidence agreement and plummeting support for the Liberals, Canada’s climate policy mix is becoming increasingly unstable, with the future of everything from investment tax credits to carbon pricing seemingly in flux.

    Given this uncertainty, some industrial emitters have stated they will refrain from making final investment decisions for major emission reducing projects until they receive certain guarantees. Their rationale is that the potential reversal of any climate policy risks the return on investment for their proposed projects.

    Experts have pointed to an obscure mechanism known as carbon contracts for difference (CCfDs) as an opportunity to allay such concerns.




    Read more:
    Emotions may matter more than facts in shaping individual support for renewable energy, new study shows


    Carbon contracts for difference

    CCfDs are contractual agreements designed to provide price stability for projects that reduce emissions. Under CCfDs, a government entity guarantees a fixed price for the emissions reductions achieved by an industrial project based on established climate policy (for example, the existing or future carbon price).

    If the market price for those reductions falls below this fixed price, the government pays the difference to the project proponents. If the market price exceeds the fixed price, the excess is paid back to the government.
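    The settlement rule described above is simple arithmetic, and can be sketched in a few lines of Python. This is an illustrative sketch only: the function name, the strike price and the volumes are hypothetical, not drawn from any actual Canada Growth Fund contract.

```python
def ccfd_settlement(strike_price: float, market_price: float, tonnes_abated: float) -> float:
    """Per-period payment under a carbon contract for difference.

    A positive result means the government pays the project proponent
    the shortfall; a negative result means the proponent pays the
    excess back to the government.
    """
    return (strike_price - market_price) * tonnes_abated

# Hypothetical figures: a $170/t guaranteed price against a $120/t market price
payment = ccfd_settlement(strike_price=170.0, market_price=120.0, tonnes_abated=100_000)
print(payment)  # government tops up the $50/t shortfall on 100,000 tonnes: 5000000.0
```

    The symmetry of the rule is what gives both sides certainty: the proponent’s revenue per tonne is fixed at the strike price regardless of which way the market moves.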

    This type of mechanism is used by a number of governments around the world, including the United Kingdom, and some experts have suggested that a “broad-based contracts for difference program is the key to unlocking billions of dollars of investment in industrial decarbonization.”

    The elegance and deceptive simplicity of this instrument has made it a policy winner in the eyes of many.

    The Canada Growth Fund has allocated up to $7 billion for the issuance of CCfDs to unlock decarbonization projects. In theory, using a CCfD agreement gives an industry partner price stability on investment while the government gets to advance its goals of large emissions reducing projects. Seemingly, a win-win.

    However, growing interdisciplinary research suggests that CCfDs may not always be the obvious win many assume they are.

    Feedback

    There is a long-held understanding in political science that policies produce important feedback patterns that can either reinforce or erode their durability. For example, the social security program in the United States has created a significant voting bloc of beneficiaries that makes it difficult for policymakers to propose cuts to the program.

    Bridging these insights with transition perspectives, my research indicates that harnessing these positive feedbacks can play an important role in building durable climate action.




    Read more:
    What does the end of the Liberal-NDP agreement mean for Canadians?


    In Germany, scholars have found that incentives for new renewable energy (such as in the form of tariffs) helped build coalitions around alternative energy innovations. These coalitions in turn placed pressure upon leaders to ensure continued policy support. Similarly, scholars have shown that industrial policies that support alternative energy innovations and their networks can create positive feedbacks for the climate policy mix.

    Translating these insights to the broad-based use of CCfDs reveals that this instrument risks undermining positive feedbacks or encouraging industrial decarbonization projects with limited ability to contribute to a long-term transition to net-zero.

    Not a perfect solution

    There are three main issues with a CCfD-based approach.

    First, because CCfDs protect the recipient’s bottom line, recipients are not necessarily incentivized to support existing climate policy. Some experts suggest a way around this issue is to set the guaranteed carbon price below the price established by the actual carbon pricing policy. However, it is unclear how low such a discounted price would need to be to maintain positive feedbacks, or whether the resulting difference would be sufficient to motivate final investment decisions.

    Second, providing CCfDs for certain emissions reduction projects (such as carbon capture and storage) may inadvertently support industries that have an interest in reversing the direction of climate policy. This focus on opportunities that extend current systems or deliver least-cost emissions reductions reflects a common tendency in policymaking to misunderstand the climate crisis as simply a market failure, and not an issue requiring whole systems change.




    Read more:
    Why do we need a Net Zero Economy Authority? And how can it fulfil its promise?


    Third, the time required to issue CCfDs on a case-by-case basis may actually encourage industrial actors to hold off on making final investment decisions until they receive a guarantee, delaying action further.

    What this shows is that while CCfDs may have a targeted role to play in advancing critical emission reduction projects (such as those that unlock systems change in key sectors), policymakers should be wary of relying too heavily on this instrument.

    A more strategic approach is needed that involves charting pathways between where sectors are now and long-term desirable net-zero outcomes — an approach that is being actively advanced by Canada’s Transition Accelerator. A strategic approach would focus support on industries willing to hitch their wagons to the future of the climate policy mix and defend climate action no matter who is in office.

    As the Ivey Research Chair in Sustainability Transitions, Daniel Rosenbloom would like to acknowledge the generous support of the Ivey Foundation. Rosenbloom is also a Steering Group member of the Sustainability Transitions Research Network, which is a scholarly network working toward the advancement of transition scholarship.

    ref. ‘Carbon contracts for difference’ are not a silver bullet for climate action – https://theconversation.com/carbon-contracts-for-difference-are-not-a-silver-bullet-for-climate-action-237437

    MIL OSI – Global Reports

  • MIL-OSI Global: Little kids, too little movement: Global study finds most children don’t meet guidelines for physical activity, screen time and sleep

    Source: The Conversation – Canada – By Mark S Tremblay, Professor of Pediatrics in the Faculty of Medicine and Senior Scientist at the CHEO Research Institute, L’Université d’Ottawa/University of Ottawa

    A recent study found that only 14 per cent of preschoolers around the world are meeting movement recommendations for physical activity, sleep and screen time. (Shutterstock)

    Appropriate levels of physical activity, sedentary behaviour and sleep (collectively termed movement behaviours) are essential for the healthy growth and development of preschool-aged children.

    This was the impetus for creating the Canadian 24-Hour Movement Guidelines for the Early Years (birth to four years). Likewise, this is why the World Health Organization adopted the Canadian guidelines when creating the global guidelines on physical activity, sedentary behaviour and sleep for children under five years of age.

    Considering the extensive benefits of movement behaviours, it is very alarming that a recent study found that only 14 per cent of preschoolers around the world are meeting movement behaviour guideline recommendations.

    A 24-hour day in the life of a preschooler meeting the guideline recommendations includes:

    • three or more hours of total physical activity (including at least one hour of energetic play or activities that make them slightly out of breath),
    • one hour or less of screen time, and
    • 10 to 13 hours of good quality sleep

    Importantly, preschoolers who meet these guidelines gain health benefits such as reduced risk of obesity, improved social and emotional skills, and proficient motor skills.
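    The three thresholds above amount to a simple all-or-nothing check, which is why combined adherence is so much lower than adherence to any single recommendation. A minimal sketch in Python (the function name and interface are my own, and this omits the guideline’s energetic-play sub-requirement):

```python
def meets_guidelines(activity_h: float, screen_h: float, sleep_h: float) -> bool:
    """Check one day against the 24-hour movement guideline thresholds
    stated in the article. All three must hold simultaneously."""
    return (
        activity_h >= 3          # three or more hours of total physical activity
        and screen_h <= 1        # one hour or less of screen time
        and 10 <= sleep_h <= 13  # 10 to 13 hours of good quality sleep
    )

# A day with 3.5 h of active play, 45 min of screens and 11 h of sleep passes.
print(meets_guidelines(3.5, 0.75, 11))  # True
```

    Because a child must satisfy every condition at once, high adherence to one behaviour (such as the 81 per cent meeting sleep recommendations) does little to lift the combined figure of 14 per cent.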

    Global levels

    Preschoolers with healthy movement behaviour habits meeting these guideline recommendations gain health benefits such as reduced risk of obesity, improved social and emotional skills, and proficient motor skills.
    (Pixabay/Oleksandr Pidvalnyi)

    A new global study shows most children around the world don’t meet these guidelines. The study included more than 7,000 preschoolers from 33 different countries, including Canada. The countries represented various World Bank income groups (e.g., high, middle and low income countries); and the geographical regions of Africa, Americas, Eastern Mediterranean, Europe, Southeast Asia and Western Pacific.

    When looking at each movement behaviour individually for preschoolers around the world, 49 per cent met the physical activity recommendations, 42 per cent met the screen time recommendation, and 81 per cent met the sleep recommendation.

    That most young children are not meeting each of these basic recommendations separately is cause for concern; that 86 per cent are not meeting all guideline recommendations combined is alarming and places preschoolers around the world at risk of sub-standard health and development.

    Globally, 81 per cent of preschoolers met sleep recommendations.
    (Shutterstock)

    Seventeen per cent of boys met all the guideline recommendations, compared to 13 per cent of girls. This slight difference was driven by more boys meeting the physical activity recommendation (56 per cent boys, 42 per cent girls), and protected from being even worse by more girls meeting the screen time (45 per cent girls, 38 per cent boys) and sleep (82 per cent girls, 79 per cent boys) recommendations.

    The fact that boys had more screen time and less good quality sleep could be related, as previous research has found screen time overall and screen time in the evening is associated with less sleep and lower sleep quality.

    Better screen time and sleep habits for girls protected their overall movement behaviour adherence from being even worse, showcasing the various paths to health through different movement behaviour combinations. However, the low number meeting all movement behaviour recommendations demonstrates the need for all preschoolers to routinely be more active, reduce screen time and accumulate good quality sleep in a day.

    By income

    Screen time in the evening is associated with less sleep and lower sleep quality.
    (Shutterstock)

    Low-income countries had the highest movement behaviour guideline adherence levels (17 per cent), compared to middle-income (12 per cent) and high-income (14 per cent) countries.

    While children from high-income countries were more active and had more quality sleep, they also had the worst screen time behaviours compared to low- and middle-income countries. It is a double-edged sword that in higher-income countries, children have more access to physical activity opportunities and quality sleep environments, but also more access to screen time devices.

    Likewise, the lowest adherence rates, found in middle-income countries, could reflect a development transition: infrastructure in homes and communities cannot yet support more physical activity and good quality sleep, while the availability of cell phones, televisions and other screens leads to increased sedentary behaviour.

    By region

    The African and European regions had the highest movement behaviour adherence (24 per cent), while the Americas region had the lowest (eight per cent). With 17 per cent meeting the screen time recommendations and 68 per cent meeting the physical activity recommendations, the Americas region had the worst screen time and best physical activity.

    Physical activity levels among preschoolers in the Americas are high compared with the 39 per cent of older Canadian children and youth who met recommendations, as reported in the ParticipACTION Report Card on Physical Activity for Children and Youth. But these older Canadian children and youth did have slightly better, albeit still poor, screen time behaviours, with 27 per cent meeting the guidelines.

    Sixty-eight per cent of preschool-aged children in the Americas were meeting the physical activity recommendations, compared to only 26 per cent of Southeast Asian children. However, it remains a concern that roughly half of all young children around the world are at risk of sub-optimal health and development from lack of physical activity.

    Roughly half of all young children around the world are at risk of sub-optimal health and development from lack of physical activity.
    (Shutterstock)

    Guidance for improvements can be drawn from the World Health Organization’s Global Action Plan on Physical Activity, where the goal of a 15 per cent relative reduction in global physical inactivity rates by 2030 relies on capacity-building collaborations within research organizations and alliances to strengthen our global understanding of movement behaviours.

    Along with the best movement behaviours overall, the African region had the best screen time levels with 63 per cent meeting the recommendations. This is potentially explained by limited access to screen time devices.

    However, to better understand why screen time behaviours are better in Africa, initiatives like the Active Healthy Kids Global Alliance Global Matrix project should be used as a model. Within the Global Matrix, region-level differences are an opportunity to learn the strengths of other regions, while addressing regional weaknesses at home.

    For instance, Canada could be a model for less active countries, while attempting to model the African region’s reduced screen time lifestyles. Further, projects such as the SUNRISE study — where researchers from more than 70 countries are collaborating to measure preschoolers’ movement behaviours, health and development — are excellent venues for this necessary capacity-building and global learning.

    Take home

    The WHO has Global Movement Guidelines for preschool children and a Global Action Plan to increase physical activity. Canada has similar guidelines and a similar plan.

    However, healthy movement behaviour levels in Canada and across the globe are unsatisfactory, forecasting further global health challenges, inequalities and distancing from the United Nations Sustainable Development Goals. It’s time to get our little ones a little more active.

    Mark S Tremblay has received research funding from the Canadian Institutes of Health Research and the Public Health Agency of Canada for research distally related to this article. He is affiliated with the Canadian Society for Exercise Physiology who created the Canadian 24-hour Movement Guidelines for the Early Years, under his leadership. He was also on the expert panel for the World Health Organization for the development of the global guidelines cited in the article.

    Nicholas Kuzik does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Little kids, too little movement: Global study finds most children don’t meet guidelines for physical activity, screen time and sleep – https://theconversation.com/little-kids-too-little-movement-global-study-finds-most-children-dont-meet-guidelines-for-physical-activity-screen-time-and-sleep-240421

    MIL OSI – Global Reports

  • MIL-Evening Report: Down and under pressure: US and UK artists are taking over Australian charts, leaving local talent behind

    Source: The Conversation (Au and NZ) – By Tim Kelly, PhD Candidate, University of Technology Sydney

    Shutterstock

    Missy Higgins’ recent ARIA number-one album, The Second Act, represents an increasingly rare sighting: an Australian artist at the top of an Australian chart.

    My recently published analysis of Australia’s best-selling singles and albums from 2000 to 2023 shows a significant decline in the representation of artists from Australia and non-English-speaking countries.

    The findings suggest music streaming in Australia – together with algorithmic recommendation – is creating a monoculture dominated by artists from the United States and United Kingdom. This could spell bad news for our music industry if things don’t change.

    Who dominates Australian charts?

    In 2023, Australia’s recorded music industry was worth about A$676 million, up 10.9% year on year.

    Building a strong local music industry is important, not only to support diverse cultural expression, but also to create jobs and boost Australia’s reputation on a global stage.

    When Australian artists succeed, this attracts global investment, which in turn stimulates all aspects of the local music industry. Conversely, a weak music economy can lead to global disinvestment, thereby disadvantaging local companies, artists and consumers.

    My research shows how the rise of music streaming – which became the dominant format for Australian recorded music sales in 2017 – has had a noticeable impact on the diversity of artists represented in the ARIA top 100 single and album charts.

    In the year 2000, the top 100 singles chart featured hits from 14 different countries. By contrast, only seven countries were represented in 2023.

    The percentage of Australian and New Zealand artists in the top 100 single charts declined from an average of 16% in 2000–16 to around 10% in 2017–23, and just 2.5% in 2023.

    Album share also declined from an average of 29% in 2000–16 to 18% in 2017–23, and 4% in 2023.

    This chart shows changes in diversity in the ARIA top 100 albums chart over 22 years.
    Author provided

    Similarly, the proportion of artists from outside the Anglo bloc of North America, the UK and Australia/New Zealand declined from an average of 11.1% in 2000–16 to 7.3% in 2017–23 – while album share declined from 5% in 2000–16 to 2.3% in 2017–23.

    My study also found representation of Indigenous artists remained low, but stable, over the period studied – and in line with population ratios.

    Concentration of power

    The findings suggest the decline in Australian and non-Anglo representation in the ARIA top 100 charts is linked.

    Some economists and academics have argued easier access to independent music and global distribution via streaming will lead to greater diversity in music. But this hasn’t been the case in Australia, at least as far as chart-topping artists are concerned.

    The global recorded music industry has consolidated in recent years. In the early 2000s there were five major music labels. Currently there are just three: Universal, Sony and Warner.

    Last year, these three labels were responsible for more than 95% of the Australian top 100 single and album charts. Meanwhile, Spotify, Apple Music and YouTube make up an estimated 97% of the Australian streaming market.

    These concentrations of power allow a handful of record labels and distributors to have a disproportionate influence over music design, production, distribution and governance – thereby limiting opportunities for diversity.

    The need for new policy

    My findings align with European research that found markets with a strong cultural differentiator of language are showing increased national diversity with streaming.

    However, countries without a distinctive language are being increasingly dominated by global music production. In Australia’s case, we’re becoming reliant on the star-making machinery of the US.

    Recently, Australia’s live music crisis came under scrutiny at a federal government inquiry, which highlighted the significant power imbalance between artists and multinational promoters.

    As I and many others have suggested, targeted cultural policies are necessary to combat our highly concentrated and US-dependent market.

    Relying on labels and streaming platforms will do little to preserve and promote our nation’s unique musical and cultural identity.

    Previous employment at Sony Music, Universal Music, Inertia Music. ARIA Chart Committee member 2005-2017. Employment at these labels ceased by 2017. No continued professional relationship with any of the companies.

    ref. Down and under pressure: US and UK artists are taking over Australian charts, leaving local talent behind – https://theconversation.com/down-and-under-pressure-us-and-uk-artists-are-taking-over-australian-charts-leaving-local-talent-behind-239822

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: ADHD prescribing has changed over the years – a new guide aims to bring doctors up to speed

    Source: The Conversation (Au and NZ) – By Brenton Prosser, Professor of Public Policy and Leadership, UNSW Sydney

    Ketut Subiyanto/Pexels

    Attention-deficit hyperactivity disorder (ADHD) is the most diagnosed childhood neurological disorder in Australia.

    Over the years, it has been the subject of controversy about potential misdiagnosis and overdiagnosis. There has also been variation in levels of diagnosis and drug prescription, depending on where you live and your socioeconomic status.

    To address these concerns and improve consistency in ADHD diagnosis and prescribing, the Australasian ADHD Professionals Association has released a new prescribing guide. This will help the health-care workforce to consistently get the right treatment to the right people, with the right mix of medical and non-medical supports.

    Here’s how ADHD prescribing has changed over time and what the new guidelines mean.

    What is ADHD and how is it treated?

    Up to one in ten young Australians experience ADHD. It is diagnosed on the basis of inattention, hyperactivity and impulsivity that have negative effects at home, school or work.

    Psychostimulant medication is a central pillar of ADHD treatment.

    However, the internationally recognised approach is to combine medicines with non-medical interventions in a multimodal approach. These non-medical interventions include cognitive behavioural therapy (CBT), occupational therapy, educational strategies and other supports.

    Medication use has changed over time

    In Australia, Ritalin (methylphenidate) was originally the most prescribed ADHD medication. This changed in the 1990s after the introduction of dexamphetamine, along with the subsequent availability of Vyvanse (lisdexamfetamine).

    Perhaps the most significant change has come with “slow release” versions of the above medications that can last more than eight hours (longer than a school day).

    When following clinical guidelines, prescribing medication for ADHD is safe practice. Yet the use of amphetamines to treat young people with ADHD has caused public concern. This highlights the importance of consistent guidelines for prescribing professionals.

    Medication for ADHD can be combined with other non-drug approaches.
    Caleb Woods/Unsplash

    Growth in diagnosis and prescribing

    Starting from low levels, there was a dramatic rise in diagnosis and drug treatment in the 1990s. Much of this was overseen by a small number of psychiatrists and paediatricians in each state or territory. While this promised the potential of consistency in the early days, it also raised concerns about best practice.

    This led to the development of the first ADHD clinical guidelines by the National Health and Medical Research Council in 1997.

    It was followed by several refinements as prescription expanded due to changing diagnostic criteria (expanding to include a dual diagnosis with autism) and the need for best practice with the growing prescription by GPs. These guidelines enhanced the consistency of approaches nationally and reduced the likelihood of misdiagnosis or overdiagnosis.

    However, a recent Senate inquiry found diagnosis and drug treatment continued to grow substantially in the five years to 2022. It emphasised the need for a more consistent approach to diagnosis and prescribing.

    First the ingredients, then the recipe

    The most recent clinical guidelines, released by the Australasian ADHD Professionals Association in 2022, outlined a roadmap for ADHD clinical practice, research and policy. They did so by drawing on the lived experience of those with ADHD. They also emphasised broader health questions, such as how to respond to ADHD as a holistic condition.

    It remains difficult to predict individual responses to different medication. So the new prescribing guide offers practical advice about safe and responsible prescribing. This aims to reduce the potential for incorrect prescribing, dosing and adjusting of ADHD medication, across different age groups, settings and individuals.

    To put it another way, the clinical guidelines describe what the ingredients of the cake should be, while the prescribing guide provides the step-by-step recipe.

    So what do they recommend?

    An important principle in both these documents is that medication should not be the first and only treatment. Not every drug works the same way for every child. In some cases they do not work at all.

    The possible side effects of medication vary and include poor appetite, sleep problems, headaches, stomach aches, moodiness and irritability. These guidelines assist in adapting medication to reduce these side effects.

    Medication provides an important window of opportunity for many young people to gain maximum value from psychosocial and psychoeducational supports. These supports can include cognitive behavioural therapy, occupational therapy, educational strategies and the other interventions mentioned earlier.

    Support for ADHD can also include parent training. This is not to suggest parents cause ADHD. Rather, they can support more effective treatment, especially since the rigours of ADHD can be a challenge to even the “perfect” parent.

    Getting the right diagnosis

    There have been reports of people seeking to use TikTok to self-diagnose, as well as a rise in people using ADHD stimulants without a prescription.

    However, the message from these new guidelines is that ADHD diagnosis is a complex process that takes a specialist at least three hours. Online sources might be useful to prompt people to seek help, but diagnosis should come from a qualified health-care professional.

    Finally, while we have moved beyond unhelpful past debate about whether ADHD is real to consolidate best diagnostic and prescribing practice, there is some way to go in reducing stigma and changing negative community attitudes to ADHD.

    Hopefully in future we’ll be better able to cherish diversity and difference, and not just see it as a deficit.

    Brenton Prosser is a Board Member of the Council of Academic Public Health Institutions Australasia and affiliated with the School of Population Health at UNSW.

    ref. ADHD prescribing has changed over the years – a new guide aims to bring doctors up to speed – https://theconversation.com/adhd-prescribing-has-changed-over-the-years-a-new-guide-aims-to-bring-doctors-up-to-speed-240313

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Curious Kids: What does the edge of the universe look like?

    Source: The Conversation (Au and NZ) – By Sara Webb, Lecturer, Centre for Astrophysics and Supercomputing, Swinburne University of Technology

    Greg Rakozy/Shutterstock

    What does the edge of the universe look like?

    Lily, age 7, Harcourt

    What a great question! In fact, this is one of those questions humans will continue to ask until the end of time. That’s because we don’t actually know for sure.

    But we can try and imagine what the edge of the universe might be, if there is one.

    Looking back in time

    Before we begin, we do need to go back in time. Our night sky has looked the same for all of human history. It’s been so reliable, humans from all around the world came up with patterns they saw in the stars as a way to navigate and explore.

    To our eyes, the sky looks endless. With the invention of telescopes about 400 years ago, humans were able to see farther – more than just our eyes ever could. They continued to discover new things in the sky. They found more stars, and then eventually started to notice that there were a lot of strange-looking cosmic clouds.

    Astronomers gave them the name “nebula” from the Latin word for “mist” or “cloud”.

    It was less than 100 years ago that we first confirmed these cosmic clouds or nebulas were actually galaxies. They are just like the Milky Way, the galaxy our own planet is in, but very far away.

    What is amazing is that in every direction we look in the universe, we see more and more galaxies. In this James Webb Space Telescope image, which is looking at a part of the sky no bigger than a grain of sand, you can see thousands of galaxies.

    It’s hard to imagine there is an edge where all of this stops.

    The edge of the universe

    However, there is technically an edge to our universe. We call it our “observable” universe.

    This is because we don’t actually know if our universe is infinite – meaning it continues forever and ever.

    Unfortunately, we might never know because of one pesky thing: the speed of light.

    We can only ever see light that’s had enough time to travel to us. Light travels at exactly 299,792,458 metres per second. Even at those speeds, it still takes a long time to cross our universe. Scientists estimate the size of the universe is at least 96 billion light years across, and likely even bigger.
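    For readers who like numbers, here is a quick back-of-the-envelope sketch in Python (not from the article) that multiplies the speed of light out to a light year:

```python
# How far does light travel in one year?
SPEED_OF_LIGHT = 299_792_458            # metres per second, exactly
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # one year, in seconds

one_light_year_m = SPEED_OF_LIGHT * SECONDS_PER_YEAR
print(f"One light year is about {one_light_year_m:.3e} metres")  # 9.461e+15

# The observable universe is estimated to be at least 96 billion light years across
universe_m = 96e9 * one_light_year_m
print(f"That makes the universe at least {universe_m:.1e} metres wide")
```

    Even at nearly 300 million metres every second, crossing a distance like that would take light tens of billions of years, which is why we can only ever see a limited, "observable" patch of the universe.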

    You can learn a little more about that and our universe as a whole in this video below.

    What would we see if there was an edge?

    If we were to travel to the very, very edge of the universe we think exists, what would there actually be?

    Many other scientists and I theorise that there would just be … more universe!

    As I said, there is a theory that our universe doesn’t actually have an edge, and might continue on indefinitely.

    But there are other theories, too. If our universe does have an edge, and you cross it, you might just end up in a completely different universe altogether. (That is best saved for science fiction for now.)

    Even though there isn’t a straightforward answer to your question, it is precisely questions like these that help us continue to explore and discover the universe, and allow us to understand our place within it. You’re thinking like a true scientist.

    Sara Webb does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Curious Kids: What does the edge of the universe look like? – https://theconversation.com/curious-kids-what-does-the-edge-of-the-universe-look-like-233111

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: NSW will remove 65,000 years of Aboriginal history from its syllabus. It’s a step backwards for education

    Source: The Conversation (Au and NZ) – By Michael Westaway, Australian Research Council Future Fellow, Archaeology, School of Social Science, The University of Queensland

    The NSW Education Standards Authority has announced that teaching of the Aboriginal past prior to European arrival will be excluded from the Year 7–10 syllabus as of 2027.

    Since 2012, the topic “Ancient Australia” has been taught nationally in Year 7 as part of the Australian Curriculum. In 2022, a new topic called the “deep time history of Australia” was introduced to provide a more detailed study of 65,000 years of First Nations’ occupation of the continent.

    However, New South Wales has surprisingly dropped this topic from its new syllabus, which will be rolled out in 2027. Instead, students will only learn First Nations’ history following European colonisation in 1788.

    This directly undermines the Alice Springs (Mparntwe) Education Declaration of 2020. This is a national agreement, signed by education ministers from all jurisdictions, which states:

    We recognise the more than 60,000 years [sic] of continual connection by Aboriginal and Torres Strait Islander peoples as a key part of the nation’s history, present and future.

    If the planned change to the syllabus goes through, the only Aboriginal history taught to NSW students would be that which reflects the destruction of traditional Aboriginal society. It also means Aboriginal students in NSW will be denied a chance to learn about their deep ancestral past.

    The significance of Australia’s deep time past

    Bruce Pascoe’s groundbreaking 2014 book Dark Emu (which sold more than 500,000 copies), and the associated documentary, have highlighted an enormous appetite for learning about Australia’s deep time past.

    Hundreds of thousands of Australians engaged with Dark Emu. As anthropologist Paul Memmott notes, the book prompted a debate that encouraged a better understanding of Aboriginal society and its complexity.

    It also generated research that investigated whether terms such as “hunter-gatherers” are appropriate for defining past Aboriginal society and economic systems.




    Read more: Farmers or foragers? Pre-colonial Aboriginal food production was hardly that simple


    In schools, teachers have used Pascoe’s book Young Dark Emu to introduce students to sophisticated land and aquaculture systems used by First Peoples prior to colonisation.

    The book raises an important question. If you lived in a country that invented bread and the edge-ground axe – a culture that independently developed early trade and social living – and did all of this without resorting to land war – wouldn’t you want your children to know about it?

    For many students, the history they learn at school is knowledge they carry into their adult lives – and knowledge is the strongest antidote to ignorance. Rather than abandoning the Aboriginal deep time story, schools should be encouraging students to engage with it.

    Learning on Country

    One of the strengths of the current NSW history syllabus is the requirement for students to undertake a “site study” in Years 8 and 9. Currently, NSW is the only jurisdiction that has made this mandatory.

    Site studies are an excellent opportunity for students to learn on Country. Many teachers organise excursions to Aboriginal cultural sites where students can directly engage with local Traditional Owners and Elders.

    New South Wales is brimming with sites of cultural significance to Aboriginal people. The map below highlights some of these, ranging from megafauna sites to extensive fish traps to enigmatic rock art galleries and ceremonial engravings (petroglyphs).



    How students will miss out

    The Ngambaa people and archaeologists from the University of Queensland are currently investigating one of the largest midden complexes in Australia. This complex, located at Clybucca and Stuart’s Point on the north coast, spans some 14 kilometres and dates back to around 9,000 years ago.

    Middens, or “living sites”, are accumulations of shell that were built over time through thousands of discarded seafood meals. Since the shells help reduce the acidic chemistry of the soil, animal bones and plant remains are more likely to be preserved in middens.

    For instance, the Clybucca-Stuarts Point midden complex contains remains from seals and dugongs. Both of these animals were once part of the local ecosystem, but no longer are.

    The middens also extend back to before the arrival of dingoes, so studying them could help us understand how biodiversity changed once dingoes replaced thylacines and Tasmanian devils on the mainland.

    Local school students, especially Aboriginal students, will be actively participating in this cutting-edge research alongside the Ngambaa people, archaeologists and teachers. Among other things, the students will learn how the Ngambaa people sustainably managed land and sea Country over thousands of years during periods of dramatic environmental change.

    But innovative programs like this will no longer be as relevant if Australia’s deep time history is removed from the NSW syllabus.

    An opportunity for leadership

    The study of First Nations archaeological sites, history and cultures tells us a broader human story of continuity and adaptability over deep time. Indigenising the curriculum – wherein Aboriginal knowledge is braided with historical and archaeological inquiry – is a powerful way to reconcile different approaches to understanding the past.

    The NSW Education Standards Authority’s proposed changes risk sending young people the message that Australia’s “history” before colonisation is not an important part of the country’s historic narrative.

    But there is still time to show leadership – by reversing the decision and connecting teachers and students to powerful stories from Australia’s deep time past.

    Michael Westaway receives funding from the Australian Research Council and Humanities and Social Science at the University of Queensland.

    Bruce Pascoe is the author of the texts mentioned in this article, Dark Emu and Young Dark Emu: A Truer History. He also has positions on the boards of Black Duck Foods, the Twofold Aboriginal Corporation and First Languages Australia.

    Louise Zarmati receives research funding from the ARC Centre of Excellence of Australian Biodiversity and Heritage.

    ref. NSW will remove 65,000 years of Aboriginal history from its syllabus. It’s a step backwards for education – https://theconversation.com/nsw-will-remove-65-000-years-of-aboriginal-history-from-its-syllabus-its-a-step-backwards-for-education-240111

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: New video shows sharks making an easy meal of spiky sea urchins, shedding light on an undersea mystery

    Source: The Conversation (Au and NZ) – By Jeremy Day, PhD researcher, University of Newcastle

    Author provided

    Long-spined sea urchins have emerged as an environmental issue off Australia’s far south coast. Native to temperate waters around New South Wales, the urchins have expanded their range south as oceans warm. There, they devour kelp and invertebrates, leaving barren habitats in their wake.

    Lobsters are widely accepted as sea urchins’ key predator. In efforts to control urchin numbers, scientists have been researching this predator-prey relationship. And the latest research by my colleagues and me, released today, delivered an unexpected result.

    We set up several cameras outside a lobster den and placed sea urchins in it. We filmed at night for almost a month. When we checked the footage, most sea urchins had been eaten – not by lobsters, but by sharks.

    This suggests sharks have been overlooked as predators of sea urchins in NSW. Importantly, sharks seem to very easily consume these large, spiky creatures – sometimes in just a few gulps! Our findings suggest the diversity of predators eating large sea urchins is broader than we thought – and that could prove to be good news for protecting our kelp forests.

    A puzzling picture

    The waters off Australia’s south-east are warming at almost four times the global average. This has allowed long-spined sea urchins (Centrostephanus rodgersii) to extend their range from NSW into waters off Victoria and Tasmania.

    Sea urchins feed on kelp and in their march south, have reduced kelp cover. This has added to pressure on kelp forests, which face many threats.

    Scientists have been looking for ways to combat the spread of sea urchins. Ensuring healthy populations of predators is one suggested solution.

    Overseas research on different urchin species has focused on predators such as lobsters and large fish. It found kelp cover can be improved by protecting or reinstating these predators.

    Sea urchins feed on kelp.
    Nathan Knott

    In NSW, eastern rock lobsters are thought to be important urchin predators. The species has been over-fished in the past but stocks have significantly bounced back in recent years.

    But despite this, no meaningful reduction in urchin populations, or increase in kelp growth, has been observed in NSW.

    Why not? Could it be that lobsters are not eating urchins in great numbers after all? Certainly, there is little empirical evidence on how often predators eat urchins in the wild.

    What’s more, recent research in NSW suggested the influence of lobsters on urchin populations was low, while fish could be more important.

    Our project aimed to investigate the situation further.

    Eastern rock lobsters are thought to be major urchin predators.
    Flickr/Richard Ling, CC BY

    What we did

    We tied 100 urchins to blocks outside a lobster den off Wollongong for 25 nights. This tethering meant the urchins were easily available to predators and stayed within view of our cameras.

    Then we set multiple cameras to remotely turn on at sunset and turn off after sunrise each day, to capture nocturnal feeding. We used a red-filtered light to film the experiments because white light can disturb invertebrates.

    We expected our cameras would capture lobsters eating the urchins. But in fact, the lobsters showed little interest in the urchins and ate just 4% of them. They were often filmed walking straight past urchins in search of other food.

    Sharks, however, were very interested in the urchins. Both crested horn sharks (Heterodontus galeatus) and Port Jackson sharks (H. portusjacksoni) entered the den and ate 45% of the urchins.

    As the footage below shows, sharks readily handled very large urchins (wider than 12 centimetres) with no hesitation.

    Until now, it was thought few or no predators could handle urchins of this size. Larger urchins have longer spines, thicker shells and attach more strongly to the seafloor, making them harder to eat.

    But the sharks attacked urchins from their spiny side, showing little regard for their sharp defences. This approach differs from other predators, such as lobsters and wrasses, which often turn urchins over and attack them methodically from their more vulnerable underside.

    In fact, some sharks were so eager to eat urchins, they started feeding before the cameras turned on at sunset. This meant we had to film by hand.

    Footage captured by the researchers showing crested horn sharks eating sea urchins. Horn sharks generally do not pose a threat to humans.

    A complex food web

    Our experiment showed the effect of lobsters on urchins in the wild is less than previously thought.
    This may explain why efforts to encourage lobster numbers have not helped control urchin numbers.

    We also revealed a little-considered urchin predator: sharks.

    Lobsters are capable but hesitant predators, whereas sharks seem eager to eat urchins. And crested horn sharks are an abundant, hardy species that is not actively fished.

    When interpreting these findings, however, a few caveats must be noted.

    First, sharks (and lobsters) are not the only animals to prey on urchins. Other predators include bony fishes, and more are likely to be identified in future.

    Second, other factors can control urchin numbers, such as storm damage and the influx of fresh water.

    And finally, it is unsurprising that we found a key predator when we intentionally searched for it by laying out food. Tethering urchins creates an artificial environment. We don’t know if the results would be replicated in the wild.

    And even though we now know some shark species eat sea urchins, we don’t yet know if they can control urchin numbers.

    But our research does confirm predators capable of handling large urchins may be more widespread than previously thought.

    Jeremy Day received funding from University of Newcastle, Ecological Society of Australia, Royal Zoological Society of New South Wales and Fisheries Research and Development Corporation.

    ref. New video shows sharks making an easy meal of spiky sea urchins, shedding light on an undersea mystery – https://theconversation.com/new-video-shows-sharks-making-an-easy-meal-of-spiky-sea-urchins-shedding-light-on-an-undersea-mystery-240205

    MIL OSI AnalysisEveningReport.nz

  • MIL-OSI Global: Iran’s strike on Israel was retaliatory – but it was also about saving face and restoring deterrence

    Source: The Conversation – USA – By Aaron Pilkington, Fellow at the Center for Middle East Studies, University of Denver

    Israel and Iran are at war. In truth, the two sides have been fighting for decades, but the conflict has played out largely under the cover of covert and clandestine operations.

    The recent actions of both sides in this once “shadow war” have changed the nature of the conflict. It is not clear that de-escalation is on the horizon.

    On Oct. 1, 2024, Iran launched a massive, direct attack against Israel, notionally in retribution for Israel’s dual assassinations of Hamas leader Ismail Haniyeh and Hezbollah’s chief, Secretary General Hassan Nasrallah.

    It was the second such barrage in six months.

    By many accounts, the previous Iranian attack against Israel on April 13 – which consisted of over 300 ballistic and cruise missiles and attack drones – caused very little damage to Israel. Perhaps because of this, and likely in part due to U.S. encouragement of restraint, Israel’s immediate military response then – an airstrike against a single advanced Iranian air defense system in the Isfahan province – was somewhat measured.

    Many onlookers saw the calibrated exchange in April as a possible indication that both sides would prefer to de-escalate rather than engage in ongoing open warfare.

    But further Israeli military operations since then have prompted escalatory Iranian military responses, forcing the conflict back out of the shadows.

    With Hamas’ capabilities and leadership degraded in the Gaza Strip, Israel’s military leaders announced in June that they were “ready to face” Hezbollah – the Iranian-backed Lebanese militant group whose persistent rocket attacks against northern Israel have caused tens of thousands to evacuate the area.

    Israel pivots north

    Israel’s pivot from Gaza toward Lebanon coincided with the July 31, 2024, assassination of Hamas’ political bureau chairman, Haniyeh, during his stay in Tehran. The purported Israeli operation was seen as an affront to Iran’s sovereignty. It was also an embarrassment that highlighted the vulnerability and permeability of Iran’s internal security apparatus.

    Even though Iran’s Supreme Leader Ayatollah Khamenei vowed a “harsh response” against Israel, by September Iran had taken no action.

    Tehran’s inaction caused many Middle East analysts to question if the Iranian response would ever materialize – and by extension, what that would mean for Khamenei’s commitment to his proxy forces.

    If indeed Iran’s leadership opted for restraint following the assassination of Hamas’ top political leader, the same could not be said for its reaction to Israel’s multiphase operation against Hezbollah in mid-September.

    Israel began with a clandestine operation to sow chaos and confusion in Hezbollah’s command and control through the means of sabotaged explosive communications devices. Israel then carried out airstrikes eliminating Hezbollah’s top leaders including Nasrallah. The Israeli military then launched what the country’s leaders describe as a “limited [ground] operation” into southern Lebanon to remove Hezbollah positions along the northern border.

    Tehran’s Oct. 1 attack against Israel was, according to many Middle East experts and indeed Iranian military leaders, primarily retaliation for the two high-profile assassinations of Hamas and Hezbollah leaders.

    These were certainly key factors. But as an expert on Iran’s defense strategy, I argue Iran’s leaders also felt compelled to attack Israel for three reasons that were equally, if not more, important: to slow Israel’s advance in Lebanon, to save face, and to restore deterrence.

    Challenging Israel’s advance

    Iran hopes to slow and potentially reverse Israel’s successes against Hezbollah, especially as Israel embarks on ground operations into southern Lebanon. Of course, Israeli ground troops must now deal with what is perhaps the world’s most capable guerrilla fighting force – one that performed quite successfully during the 2006 Israel-Hezbollah war.

    Nevertheless, Israel’s ability to achieve tactical surprise and eliminate Hezbollah’s top leaders – even in the midst of an ongoing localized war, and even after Israel’s leaders announced their intention to engage Hezbollah – reveals Israeli strategy, operational planning and execution capabilities far superior to Hezbollah’s.

    And that presents a huge blow to what is seen in Iran as the Islamic Republic’s crown jewel within its “Axis of Resistance.”

    In this respect, the Oct. 1 retaliatory strike by Iran can be seen as an attempt to afford Hezbollah time to appoint replacement leadership, regroup and organize against Israel’s ground invasion.

    The brutal art of saving face

    It also serves to help Iran save face, especially in how it’s seen by other parts of its external proxy network.

    Orchestrated by the Islamic Revolutionary Guards Corps, or IRGC – Tehran’s primary arm for coordinating external operations – Iranian money, training, guidance and ideological support enabled and encouraged the Oct. 7, 2023, Hamas attack against Israel – even if, as it has claimed, Iran had no prior warning of the assault.

    Since then, Hamas fighters have received almost no real-time support from Tehran. This lack of support has no doubt contributed to Hamas being successfully degraded as a threat by Israel, with many of its members either dead or in hiding and unable to mount a coherent offensive campaign, leading Israel’s military leaders to claim the group has been effectively defeated.

    Unsurprisingly, Iran is glad to enable Palestinians to fight Tehran’s enemies and absorb the human costs of war, because this arrangement primarily benefits the Islamic Republic.

    Once the fighting in Gaza started, the IRGC was nowhere to be found.

    Rockets fired from Iran are seen over Jerusalem on Oct. 1, 2024.
    Wisam Hashlamoun/Anadolu via Getty Images

    Now that Israel has shifted its attention to Lebanon and scored several initial tactical successes against Hezbollah, Iran cannot afford to stand back and watch for two main reasons. First, a year of fighting in Gaza has demonstrated that Israel is willing to do whatever it takes to eliminate threats along its borders – including a willingness to withstand international political pressure or operate within Iran’s borders.

    And second, Iran’s proxy groups elsewhere are watching to see if Tehran will continue supporting them – or will abandon them, as it seemingly has done with Hamas.

    Reclaiming deterrence

    Perhaps above all in Tehran’s calculus over how to respond is Iran’s need to restore deterrence.

    The two defining features of Iran’s interrelated external, or “forward defense,” and deterrence strategies are its regional network of militant proxies and its long-range weapons arsenal, which includes a large number of advanced ballistic missiles, cruise missiles and attack-capable drones.

    These Iranian defense strategies seek to dissuade enemies from attacking Iran proper in two ways: first, by threatening Israel and other regional U.S. allies with punishment via proxy militia or long-range weapon attacks; and second, by offering scapegoat targets against which Iran’s enemies can express their rage. In effect, Iran’s proxy forces act as proxy targets that pay the costs for Iran’s hostile policies.

    Israel’s degradation of Hamas and ongoing operations against Hezbollah threaten to undermine Iran’s ability to deter attacks against the homeland. For the Islamic Republic’s leaders, this is an unacceptable risk.

    Who plays the next move?

    These interweaving imperatives likely prompted Iran’s leaders to launch a second massive, direct missile attack on Oct. 1 against Israel. How effective the strike will be in achieving any of Tehran’s aims is unknown.

    The Islamic Republic claimed that as many as 90% of the ballistic missiles reached their intended targets, while Israel and the United States characterize the attack as having been “defeated and ineffective,” despite unverified cellphone videos showing several ballistic missiles detonating after reaching land in Israel.

    What is almost certain, however, is that this will not be the last move in the conflict. Israel is unlikely to halt its Lebanon operation until it achieves its border security objectives. And Israeli Prime Minister Benjamin Netanyahu has vowed retaliation against Iran for its latest retaliatory attack.

    IRGC leaders met this warning with a counterthreat of their own that if Israel responds to the Oct. 1 attack militarily, Iran will again respond with unspecified “crushing and destructive attacks.”

    Rhetorically, neither side is backing down; militarily this may be true, too. The nature and scope of Israel’s next move will dictate how the war with Iran develops – but make no mistake, it is a war.

    Dr. Aaron Pilkington is a U.S. Air Force analyst of Middle East affairs and a non-resident fellow at the Center for Middle East Studies at the University of Denver’s Korbel School of International Studies. Dr. Pilkington will soon join the Military & Strategic Studies department at the U.S. Air Force Academy. The views expressed are those of the author and do not reflect the official position of the Department of Defense, Department of the Air Force, the United States Air Force Academy, or any other organizational affiliation.

    ref. Iran’s strike on Israel was retaliatory – but it was also about saving face and restoring deterrence – https://theconversation.com/irans-strike-on-israel-was-retaliatory-but-it-was-also-about-saving-face-and-restoring-deterrence-240302

    MIL OSI – Global Reports

  • MIL-Evening Report: There’s a renewed push to scrap junior rates of pay for young adults. Do we need to rethink what’s fair?

    Source: The Conversation (Au and NZ) – By Kerry Brown, Professor of Employment and Industry, School of Business and Law, Edith Cowan University

    NT_Studio/Shutterstock

    Should young people be paid less than their older counterparts, even if they’re working the same job? Whether you think it’s fair or not, it’s been standard practice in many industries for a long time.

    The argument is that young people are not fully “work-ready” and require more intensive employer support to develop the right skills for their job.

    But change could be on the horizon. Major unions and some politicians are pushing for reform – arguing “youth wages” should be scrapped entirely for adults.

    Why? They say the need for fair pay for equal work, along with economic pressures such as the high cost of living and the ongoing housing crisis, means paying young adults less based on their age is out of step with modern Australia.

    So is there a problem with our current system, and if so, how might we go about fixing it?

    What are youth wages?

    In Australia, a youth wage or junior pay rate is paid as an increasing percentage of an award’s corresponding full adult wage until an employee reaches the age of 21.

    This isn’t the case in every industry – some awards require all adults to be paid the same minimum rates.

    But for those not covered by a specific award, as well as those working in industries including those covered by the General Retail Industry Award, Fast Food Industry Award and Pharmacy Industry Award, employees younger than 21 are not paid the full rate.
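    To illustrate how such a sliding scale works in practice, here is a minimal sketch in Python. The percentage schedule is modelled on the junior rates used in retail-style awards, and the $25 adult hourly rate is invented for the example – the current awards set the real figures.

    ```python
    # Illustrative junior-rate scale: fraction of the full adult wage by age.
    # (Percentages modelled on a retail-style award schedule; check the
    # relevant award for the actual rates.)
    JUNIOR_RATE = {16: 0.50, 17: 0.60, 18: 0.70, 19: 0.80, 20: 0.90}
    UNDER_16_RATE = 0.45  # floor rate for workers under 16

    def hourly_rate(age: int, adult_rate: float) -> float:
        """Minimum hourly rate for a worker of a given age under the scale."""
        if age >= 21:
            return adult_rate  # full adult wage applies from age 21
        return adult_rate * JUNIOR_RATE.get(age, UNDER_16_RATE)

    # An 18-year-old on a hypothetical $25/hour adult award earns 70% of it.
    print(hourly_rate(18, 25.0))
    ```

    The scale makes the policy debate concrete: under it, two people doing identical work can be paid different amounts purely because of the year they were born.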

    Why pay less?

    Conventionally, junior rates have been thought of as a “training wage”. Younger people are typically less experienced, so as they gain more skills on the job over time, they are paid a higher hourly rate.

    But there are a few key problems with this rationale. It may no longer hold, given many employers expect their workers to start “job-ready” and there is little consistency in the training they provide.

    Training up and developing skills is an important part of building any career. But it isn’t always provided by employers.

    Many young adults undergo training prior to starting work and at their own expense.
    Best smile studio/Shutterstock

    Many young workers train themselves in job-related technical education and short courses, often at their own expense and prior to starting work.

    Employers reap the benefit of this pre-employment training and so a “wage discount” for younger workers may be irrelevant in this instance.

    None of this is to say employers aren’t offering something important when they take on young employees.

    Younger workers coming into employment relatively early have access to more than just a paid job, but also become part of a team, with responsibilities and job requirements that support “bigger-picture” life skills.

    Those who employ them may be contributing to their broader social and cultural engagement, something that could be considered part of a more inclusive training package. Whether that justifies a significant wage discount is less clear.




    Read more:
    Why real wages in Australia have fallen while they’ve risen in most other OECD countries


    Calls for a rethink

    There are growing calls for a rethink on the way we compensate young people for their efforts.

    An application by the Shop Distributive and Allied Employees’ Association – the union for retail, fast food and warehousing workers – seeks to remove junior rates for adult employees on three key awards. The application will be heard by the Fair Work Commission next year.

    Sally McManus, Secretary of the Australian Council of Trade Unions, said the peak union body will lobby the government to legislate such changes if this application fails. The Greens have added their support.

    That doesn’t have to mean abolishing youth wages altogether. But 21 years of age is a high threshold, especially given we get the right to major adult responsibilities such as voting and driving by 18.

    A transition strategy could consider gradually lowering this threshold, or increasing the wage percentages over time.

    Lessons from New Zealand

    We wouldn’t be the first to make such a bold change if we did.

    Our geographically and culturally close neighbour, New Zealand, removed its “youth wage” in 2008, replacing it with a “first job” rate and a training wage set at 80% of the adult minimum wage.

    A common argument against abolishing youth wages – and increasing the minimum wage in general – is that it will stop businesses hiring young people and thus increase unemployment.

    But a 2021 study examining New Zealand’s experience with minimum wage increases – including this change – found little discernible difference in employment outcomes for young workers.

    The authors did note, however, that New Zealand’s economic downturn post-2008 had a marked effect on the employment of young workers more generally.

    New Zealand has already taken major steps in reforming junior pay rates.
    Stephan Roeger/Shutterstock

    What’s fair?

    It’s easy to see how we arrived at the case for paying younger adults less. But younger workers should not bear the burden of intergenerational inequity by “losing out” on wages in the early part of their working life.

    The debate we see now echoes the equal-pay-for-equal-work debates of the 1960s and ’70s over women’s unequal pay.

    We were warned that paying women the same as men would cause huge economic dislocation. Such a catastrophe simply did not come to pass.

    Kerry Brown is a member of the National Tertiary Education Union.

    ref. There’s a renewed push to scrap junior rates of pay for young adults. Do we need to rethink what’s fair? – https://theconversation.com/theres-a-renewed-push-to-scrap-junior-rates-of-pay-for-young-adults-do-we-need-to-rethink-whats-fair-240548

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: OECD comparisons reveal an unflattering picture of inequality in NZ – could that change?

    Source: The Conversation (Au and NZ) – By Colin Campbell-Hunt, Emeritus Professor in Business, University of Otago

    Getty Images

    Recent research showing the richest New Zealanders pay less tax than their counterparts in nine similar OECD countries raises, yet again, serious questions about wealth, equality and fairness.

    How unequal is the distribution of income in New Zealand? How do we compare with some of the countries we might benchmark against? And, if we don’t like what we see, can we change it?

    The metric most widely used by economists to measure income inequality is called the Gini coefficient (named after the Italian statistician Corrado Gini, who developed it).

    It brings together income data across all households, typically divided into groupings of 10% or 20% of the total. When there is no inequality of incomes between groups, Gini equals zero. When the top group captures all income, Gini equals 1.
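    The coefficient can be computed directly from grouped data like this, as one minus twice the area under the Lorenz curve (the curve of cumulative income share against cumulative population share). A minimal sketch in Python – the decile figures below are invented for illustration, not OECD data:

    ```python
    def gini(shares):
        """Gini coefficient from equal-sized group income shares, poorest first.

        Approximates the area under the Lorenz curve with one trapezoid per
        group; grouped data slightly understate within-group inequality.
        """
        n = len(shares)
        total = sum(shares)
        cum = 0.0   # cumulative income share so far
        area = 0.0  # running area under the Lorenz curve
        for s in shares:
            prev = cum
            cum += s / total
            area += (prev + cum) / 2 * (1.0 / n)  # trapezoid for this group
        return 1.0 - 2.0 * area

    # Perfect equality: every decile earns the same share, so Gini is 0.
    equal = gini([10] * 10)

    # A skewed distribution: the top decile takes half of all income.
    skewed = gini([2, 3, 4, 5, 5, 6, 7, 8, 10, 50])
    ```

    With ten equal groups the statistic cannot quite reach 1 even if one group takes everything, which is one reason published Gini figures depend on how finely the household data are divided.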

    Measuring inequality

    The graph below shows Gini coefficients, before taxes and welfare payments (known as “transfers”), for all 37 countries in the OECD in 2019 (before the COVID pandemic disrupted household surveys). Ginis are ranked left to right, from least to most unequal.



    The Gini before taxes and transfers is a measure of the inequality produced by the structures of a country’s economy: the way value chains operate, the markets for products and services, the scarcity of certain skills, rates of unionisation, and so on.

    This gives us a measure of structural inequalities in a country. Governments, however, use taxes and transfers to shift income between households. They take taxes from some and boost incomes of the more disadvantaged.

    Ginis of incomes after taxes and transfers give us a measure of how well members of a society can support similar standards of living. They are shown in the following graph, again from least to most unequal. These give us a measure of social inequalities.



    Focusing just on social inequality, it is no surprise Scandinavian countries are among the least unequal, as well as Canada and Ireland. Neither is it surprising the UK and US approach the highest levels of social inequality in the OECD.

    Inequalities in Australia and New Zealand lie between these, but further from the Scandinavians and closer to the Anglo-Americans.

    Social inequality in NZ

    When we look at the difference between structural and social inequalities, we can see the extent to which taxes and transfers – government redistribution of income – reduce inequality.

    As we can see, New Zealand’s structural inequality, shaped by the economic reforms of the mid-1980s, is middling by comparison to other OECD countries.

    But New Zealand’s social inequality lies near the bottom third of OECD measures. A halving of top income tax rates in the mid-1980s and the rollback of the welfare state in the 1990s (after then finance minister Ruth Richardson’s 1991 “mother of all budgets”) significantly contributed to this.

    The downward columns in the following graph show the effect of government redistributive measures, ranked from most to least active. The redistributive effect in New Zealand is even weaker than in the laissez-faire economies of the United Kingdom and United States.



    Where does NZ sit?

    How do New Zealand’s inequalities compare with countries we might choose to benchmark against?

    Below, the Scandinavian countries famous for their egalitarian social systems are shown in orange. In green are countries that tolerate slightly higher social inequality: Sweden, Canada and Ireland.

    And the UK and US – exemplars of free-market capitalism that were the models for New Zealand’s reforms of the mid-1980s – are highlighted in grey.



    Reducing inequality

    How hard would it be to change? Could New Zealand, for example, reduce its level of social inequality to match Canada? Absolutely, yes.

    Other OECD data show Canada significantly cut its inequalities between 2010 and 2019. The country moved from a position identical to Luxembourg (haven for Europe’s wealthy) to be roughly level with Sweden.

    To match Canada’s level now, New Zealand would need to reduce structural inequalities further, or redistribute about as much as Norway and Denmark do. It can be done, in other words.

    Indeed, Finland shows government redistributions can transform some of the worst levels of structural inequality to produce outcomes comparable to other Scandinavian countries.

    New Zealand can aspire to goals for social equality matching those in the upper half of OECD countries. Beyond revisions to taxation and transfers, inequalities in health and education would also need to come down to reduce the social and economic costs of poverty and disadvantage that should bring shame to us all.


    The author acknowledges the contribution of data provided by Max Rashbrooke.


    Colin Campbell-Hunt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. OECD comparisons reveal an unflattering picture of inequality in NZ – could that change? – https://theconversation.com/oecd-comparisons-reveal-an-unflattering-picture-of-inequality-in-nz-could-that-change-239306

    MIL OSI AnalysisEveningReport.nz

  • MIL-Evening Report: How can we improve public health communication for the next pandemic? Tackling distrust and misinformation is key

    Source: The Conversation (Au and NZ) – By Shauna Hurley, PhD candidate, School of Public Health, Monash University

    Pexels/The Conversation

    There’s a common thread linking our experience of pandemics over the past 700 years. From the Black Death in the 14th century to COVID in the 21st, public health authorities have put emergency measures such as isolation and quarantine in place to stop infectious diseases spreading.

    As we know from COVID, these measures upend lives in an effort to save them. In both the recent and distant past they’ve also given rise to collective unrest, confusion and resistance.

    So after all this time, what do we know about the role public health communication plays in helping people understand and adhere to protective measures in a crisis? And more importantly, in an age of misinformation and distrust, how can we improve public health messaging for any future pandemics?

    Last year, we published a Cochrane review exploring the global evidence on public health communication during COVID and other infectious disease outbreaks including SARS, MERS, influenza and Ebola. Here’s a snapshot of what we found.




    Read more:
    Why are we seeing more pandemics? Our impact on the planet has a lot to do with it


    The importance of public trust

    A key theme emerging in analysis of the COVID pandemic globally is public trust – or lack thereof – in governments, public institutions and science.

    Mounting evidence suggests higher levels of trust in government were associated with fewer COVID infections and higher vaccination rates across the world. Trust was a crucial factor in people’s willingness to follow public health directives, and is now a key focus for future pandemic preparedness.

    Here in Australia, public trust in governments and health authorities steadily eroded over time.

    Initial information from governments and health authorities about the unfolding COVID crisis, personal risk and mandated protective measures was generally clear and consistent across the country. The establishment of the National Cabinet in 2020 signalled a commitment from state, territory and federal governments to consensus-based policy and public health messaging.

    During this early phase of relative unity, Australians reported higher levels of belonging and trust in government.

    But as the pandemic wore on, public trust and confidence fell on the back of conflicting state-federal pandemic strategies, blame games and the confusing fragmentation of public health messaging. The divergence between lockdown policies and public health messaging adopted by Victoria and New South Wales is one example, but there are plenty of others.

    When state, territory and federal governments have conflicting policies on protective measures, people are easily confused, lose trust and become harder to engage with or persuade. Many tune out from partisan politics. Adherence to mandated public health measures falls.

    Our research found clarity and consistency of information were key features of effective public health communication throughout the COVID pandemic.

    We also found public health communication is most effective when authorities work in partnership with different target audiences. In Victoria, the case brought against the state government for the snap public housing tower lockdowns is a cautionary tale underscoring how essential considered, tailored and two-way communication is with diverse communities.




    Read more:
    What pathogen might spark the next pandemic? How scientists are preparing for ‘disease X’


    Countering misinformation

    Misinformation is not a new problem, but has been supercharged by the advent of social media.

    The much-touted “miracle” drug ivermectin typifies the extraordinary traction unproven treatments gained locally and globally. Ivermectin is an anti-parasitic drug with no evidence of effectiveness against viruses such as the one that causes COVID.

    Australia’s drug regulator was forced to ban ivermectin prescriptions for anything other than its intended use after a sharp increase in people seeking the drug sparked national shortages. Hospitals also reported patients overdosing on ivermectin and cocktails of COVID “cures” promoted online.

    The Lancet Commission on lessons from the COVID pandemic has called for a coordinated international response to countering misinformation.

    As part of this, it has called for more accessible, accurate information and investment in scientific literacy to protect against misinformation, including that shared across social media platforms. The World Health Organization is developing resources and recommendations for health authorities to address this “infodemic”.

    National efforts to directly tackle misinformation are vital, in combination with concerted efforts to raise health literacy. The Australian Medical Association has called on the federal government to invest in long-term online advertising to counter health misinformation and boost health literacy.

    People of all ages need to be equipped to think critically about who and where their health information comes from. With the rise of AI, this is an increasingly urgent priority.

    Many people turned to unproven treatments for COVID.
    Alina Kruk/Shutterstock

    Looking ahead

    Australian health ministers recently reaffirmed their commitment to the new Australian Centre for Disease Control (CDC).

    From a science communications perspective, the Australian CDC could provide an independent voice of evidence and consensus-based information. This is exactly what’s needed during a pandemic. But full details about the CDC’s funding and remit have been the subject of some conjecture.

    Many of our key findings on effective public health communication during COVID are not new or surprising. They reinforce what we know works from previous disease outbreaks across different places and points in time: tailored, timely, clear, consistent and accurate information.

    The rapid rise, reach and influence of misinformation and distrust in public authorities bring a new level of complexity to this picture. Countering both must become a central focus of all public health crisis communication, now and in the future.

    This article is part of a series on the next pandemic.

    Rebecca Ryan receives funding from the National Health and Medical Research Council through funding to Australian Cochrane entities, and was previously commissioned by the World Health Organization to undertake a rapid evidence review on communication for COVID-19 prevention and control (2020).

    Shauna Hurley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How can we improve public health communication for the next pandemic? Tackling distrust and misinformation is key – https://theconversation.com/how-can-we-improve-public-health-communication-for-the-next-pandemic-tackling-distrust-and-misinformation-is-key-226718

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Return-to-office mandates may not be the solution to downtown struggles that Canadian cities are banking on

    Source: The Conversation (Au and NZ) – By Alexander Wray, PhD Candidate in Geography, Western University

    In recent months, many Canadian employers in both the public and private sectors have implemented return-to-office mandates, requiring workers who transitioned to remote or hybrid work during the COVID-19 pandemic to work in-person again.

    Employers are justifying these mandates by arguing they improve productivity, build more collaborative teams and improve mentorship for junior employees.

    Employers are not the only group ecstatic about these mandates. Municipalities and business owners are also expressing hope that the presence of office workers will spin off into greater consumer spending at restaurants and other businesses near office buildings. The expectation is that office workers will once again start spending money on coffee, lunch or after-work beverages.

    In 2022, the mayor of Ottawa partially blamed the downtown core’s economic struggles on the fact that federal public service workers were still largely working remotely. Federal workers have since been mandated to return to in-person work three days a week, beginning in late fall.

    The Canadian Federation of Independent Business similarly criticized the slow return to offices as a leading factor behind why small and medium-size businesses, especially restaurants and bars, are facing challenges in downtown areas.

    Insight into restaurant success

    During the pandemic, there were predictions that more than half of Canada’s independent restaurants would fail as part of their customer base — office workers — shifted to working from home.

    Our recent study investigated which operational, demographic and land use factors affected restaurant survival during the first year of the pandemic in London, Ont.

    We found no significant differences between restaurants that failed and restaurants that survived based on proximity to office uses. Instead, operational decisions made by restaurants individually were much more predictive of their survival than any geographic factor, including the presence of offices.

    Restaurants are seen along Richmond Street in downtown London, Ontario, in June 2021.
    (Alexander Wray), CC BY-NC-SA

    We found that restaurants located in areas receiving more CERB (Canadian Emergency Response Benefit) payments, and with a higher density of entertainment venues around them, were less likely to survive.

    Restaurants that adapted by offering pickup and delivery options were more likely to survive, though only for those that did their own delivery in-house rather than relying on platforms like UberEats and SkipTheDishes. Restaurants that had drive-thrus, held liquor licenses, or had been established for more than five years were more likely to survive. These older, more established restaurants were likely more resilient because of financial stability and customer loyalty.

    Table-service restaurants fared better than fast food outlets, likely because they could offer large patio dining spaces during the summer. Restaurants with liquor licenses substantially benefited, especially after a regulatory change by the Ontario government that allowed alcohol sales with takeout and delivery — a first for the province.

    In short, restaurant success was driven more by individual business decisions than by location. People working remotely instead of in the office did not significantly affect restaurant survival during the first year of the pandemic.

    Downtown struggles

    As Canadian downtowns look to recover, many face ongoing challenges. Activity levels are down by about 20 per cent from pre-pandemic levels in many places, lagging behind many similarly sized downtowns in the United States.

    This downturn has been partially attributed to a combination of higher office building vacancies and fewer workers downtown. For the first time, downtown office vacancy rates have exceeded suburban rates in the Greater Toronto Area. There has also been tremendous housing growth within many downtown cores.

    At the same time, downtowns have become a highly visible focal point of Canada’s growing addictions, mental health and housing crises. The pandemic fully revealed the deeper social, economic and health challenges happening in Canadian society.

    While violent incidents are rare, the social incivilities and disorder on display — public urination and defecation, open drug use, visible tents and property crime — contribute to a perception that Canadian downtowns are unsafe. This perception, whether accurate or not, has an impact on the willingness of people to engage with their downtowns.

    A way forward

    The damage to the reputation of Canada’s downtowns has been done. Downtown London now has the highest office vacancy rate in the country. The Workplace Safety Insurance Board of Ontario, for instance, recently chose to consolidate its offices in the outskirts of London, rather than downtown.

    Many people now elect to spend their time and money in areas that have embraced the “experience economy.” These are places that provide highly manicured entertainment and shopping destinations, with restaurants as the bedrock of high-quality experiences.

    Foot traffic is at an all-time high in suburban shopping centres. The downtowns of cities that are widely known as global tourist destinations — Las Vegas, Miami and Nashville — have activity levels close to or higher than their pre-pandemic levels.

    These are places that are developing highly attractive economies that provide people with the safe, fun and exciting experiences they are looking for locally and internationally. Instead of trying to force unwilling workers back to the office, Canadian cities should instead focus on developing downtowns that people genuinely want to visit and experience.

    One potential way to do this is to provide wrap-around support services and direct pathways to stable housing across the entire community, as the City of London has done. By spreading care and outreach services across the entire city, rather than concentrating them exclusively in downtown areas, the negative effects of Canada’s homelessness crisis on urban cores can be reduced.

    This type of strategy will direct those who need help away from downtowns, and may even permanently lift them out of poverty. In turn, Canadian downtowns can return to being places for everyone to shop, eat, relax, and work in comfort.

    Alexander Wray is President of the Town and Gown Association of Ontario, and a Board Member of Mainstreet London.

    Jamie Seabrook, Jason Gilliland, and Sean Doherty do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Return-to-office mandates may not be the solution to downtown struggles that Canadian cities are banking on – https://theconversation.com/return-to-office-mandates-may-not-be-the-solution-to-downtown-struggles-that-canadian-cities-are-banking-on-239682


  • MIL-Evening Report: How to help your child return to school after a long illness, new diagnosis or an accident

    Source: The Conversation (Au and NZ) – By Sarah Jefferson, Senior Lecturer in Education, Edith Cowan University

    It is very common for children to have a day or two away from school due to illness. But children can also miss much longer periods of schooling if they have a serious illness or injury.

    This could be a severe episode of mental illness, a diagnosis of Type 1 diabetes or in my family’s case, our youngest child being hit by a car at a pedestrian crossing, requiring months of rehab.

    After the initial shock, treatment and recovery, families then need to navigate a complex return to school – to make things as normal as possible for the student while handling their ongoing medical needs.

    How can families support their child?

    How many students are missing school?

    There are many reasons why children may need to have a significant break from school.

    At least one in every ten children under the age of 14 lives with a chronic health condition.

    These conditions, which can include heart disease, diabetes, asthma, mental illness and cancers, can lead to weeks or months in hospital.

    A 2018 study found 70,000 Australians under 16 are also hospitalised with a serious injury each year.

    Students can end up missing a significant amount of school due to injury or chronic illness.
    moonmovie/Shutterstock

    Come back with a plan

    We know going to school is central to children’s social and emotional wellbeing, as well as their academic progress. So getting back to school is a key part of a student’s ongoing health and wellbeing.

    The Royal Children’s Hospital Melbourne warns children can get mentally and physically tired after a long or serious illness.

    So it recommends returning to school gradually. Students may just go for half days or for a few hours initially.

    To make this as smooth as possible, parents or caregivers should meet with the school before the planned return. This meeting should include the student if possible, relevant teachers (such as class teachers and year-level coordinators) and the school nurse.

    Not all schools have a dedicated nurse. But if there is one available, they can play an important liaison role and manage a child’s medications or situation at school. If there is no nurse, make sure you include the school’s administration team.

    The meeting with the school should make a clear plan around what new support the student needs and how they will receive this. They may need changes to their uniform, timetable or where they physically go in the school. Students may also need extra time to do work, extra academic help and extra breaks.

    Families may also want to schedule regular catch-ups with the school.

    Students may not initially be able to return to school full time.
    engagestock/Shutterstock

    How is the student feeling?

    Children can be worried about not fitting in, especially if something significant has happened to them that makes them feel different from their peers. They may not want a huge fuss when they come back.

    Arranging time to talk to or see friends before they come back can help ease a student into their new routine.

    Depending on the situation, you could enlist a trusted buddy to help with bags or walk a bit more slowly with them between classes.

    Or students may get special permission to leave class a bit early to avoid crowds, or to be able to go and see the nurse without asking the teacher each time and drawing attention to themselves.

    As your child returns, make sure the focus is not just on catching up academically but on catching up with friends as well. If their hours at school are reduced, try to allow for social time (such as recess or lunch) as well as lessons.

    Your child will likely be dealing with a lot, both mentally and physically. So keep talking to them as much as possible about how they are feeling and going as they return.

    Things may have changed for them (and for you), but with time and support, school can feel like a normal part of life again.

    Sarah Jefferson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How to help your child return to school after a long illness, new diagnosis or an accident – https://theconversation.com/how-to-help-your-child-return-to-school-after-a-long-illness-new-diagnosis-or-an-accident-240012


  • MIL-Evening Report: Limestone and iron reveal puzzling extreme rain in Western Australia 100,000 years ago

    Source: The Conversation (Au and NZ) – By Milo Barham, Associate Professor, Earth and Planetary Sciences, Curtin University

    Limestone pinnacles of the Nambung National Park karst. Matej Lipar

    Almost one-sixth of Earth’s land surface is covered in otherworldly landscapes with a name that may also be unfamiliar: karst. These landscapes are like natural sculpture parks, with dramatic terrain dotted with caves and towers of bedrock slowly sculpted by water over thousands of years.

    Karst landscapes are beautiful and ecologically important. They also represent a record of Earth’s past temperature and moisture levels.

    However, it can be quite challenging to figure out exactly when karst landscapes formed. In our new work published today in Science Advances, we show a new way to find the age of these enigmatic landscapes, which will help us understand our planet’s past in more detail.

    Flowstones, stalactites and caverns within Jenolan Caves, NSW, Australia.
    Matej Lipar

    The challenge

    Karst is defined by the removal of material. The rock towers and caves we see today are what is left after water dissolved the rest during wet periods of the past.

    This is what makes their age hard to determine. How do you date the disappearance of something?

    Traditionally, scientists have loosely bracketed the age of a karst surface by dating the material above and beneath. However, this approach blurs our understanding of ancient climate events and how ecosystems responded.

    Geological clocks

    In our study, we found a way to measure the age of pebble-sized iron nodules that formed at the same time as a karst landscape.

    This method has the technical name of (U/Th)-He geochronology. In it, we measure how much helium is produced by the natural radioactive decay of tiny amounts of the elements uranium and thorium in the iron nodules. By comparing the amounts of uranium, thorium and helium in a sample, we can very accurately calculate the age of the nodules.
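    To make the logic of the method concrete, here is a minimal sketch of the standard (U-Th)/He age equation solved numerically. This is an illustration only, not the procedure used in the study: the atom counts in the usage example are invented, and corrections real laboratories apply (such as alpha-ejection corrections) are ignored. Each decay of 238U ultimately emits 8 alpha particles (helium nuclei), 235U emits 7, and 232Th emits 6, so the helium content grows predictably with time.

```python
import math

# Standard decay constants, in 1/year
L238 = 1.55125e-10  # uranium-238
L235 = 9.8485e-10   # uranium-235
L232 = 4.9475e-11   # thorium-232

def helium_produced(t, u238, u235, th232):
    """Atoms of 4He produced after t years, given present-day parent atom counts.
    Each 238U decay chain yields 8 alphas, 235U yields 7, 232Th yields 6."""
    return (8 * u238 * (math.exp(L238 * t) - 1)
            + 7 * u235 * (math.exp(L235 * t) - 1)
            + 6 * th232 * (math.exp(L232 * t) - 1))

def uth_he_age(he, u238, u235, th232, lo=0.0, hi=5e9):
    """Invert the age equation for t by bisection (helium grows monotonically with time)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if helium_produced(mid, u238, u235, th232) < he:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical sample: generate the helium a ~100,000-year-old nodule would hold,
# then recover its age from the measured U, Th and He amounts.
u238, u235, th232 = 1e12, 7.2e9, 4e12  # invented atom counts
he = helium_produced(100_000, u238, u235, th232)
age = uth_he_age(he, u238, u235, th232)
```

    Because so little of the parent uranium and thorium decays over 100,000 years, the helium signal is tiny relative to the parent inventory, which is why the ultra-high vacuum extraction line pictured below is needed to measure it.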

    How iron nodules can reveal their age.
    Milo Barham

    We dated microscopic fragments of iron-rich nodules from the iconic Pinnacles Desert in Nambung National Park, Western Australia.

    This world-famous site is renowned for its otherworldly karst landscape of acres of limestone pillars towering metres above a sandy desert plain. The Pinnacles form part of the most extensive belt of wind-blown carbonate rock in the world, stretching more than 1,000km along coastal southwestern WA.

    The Western Australia ThermoChronology Hub (WATCH) ultra-high vacuum gas extraction line for measurements of radiogenic helium.
    Martin Danišik

    We examined multiple microscopic shards of iron nodules that were removed from the surface of limestone pinnacles. These nodules formed in the soil that lay on top of the limestone during the period of intense weathering that created the karst. As a result, they serve as time capsules of the environmental conditions that shaped the area.

    A scanning electron microscope image of iron-rich cement (lighter grey in centre) binding darker grey, rounded quartz sand grains within an analysed nodule.
    Aleš Šoster

    The big wet

    We consistently found an age of around 100,000 years for the growth of the iron nodules. This date is supported by known ages from the rocks above and beneath the karst surface, proving the reliability of our new approach.

    At the same time as chemical reactions caused growth of the iron-rich nodules within the ancient soil, limestone bedrock was rapidly and extensively dissolved to leave only remnant limestone pinnacles seen today.

    From examining the entire rock sequence in the area, we think this period of intensive weathering was the wettest time in this part of WA over at least the past half a million years.

    We don’t know what drove this increased rainfall. It may have been changes to atmospheric circulation patterns, or the greater influence of the ancient Leeuwin Current that runs along the shore.

    Such a humid interval is in dramatic contrast to the recent droughts and increasingly dry climate of the region today.

    Implications for our past

    Iron-rich nodules are not unique to the Nambung Pinnacles. They have recently been used to track dramatic past environmental change elsewhere in Australia.

    Dating these iron nodules will help to better document the dramatic fluctuations in Earth’s climate over the past three million years as ice sheets have grown and shrunk.

    Understanding the timing and environmental context of karst formation throughout this time offers profound insights into past climate conditions, environments and the landscapes in which ancient creatures lived.

    Dark iron-rich nodules attached to the side of the base of a limestone pinnacle in the Nambung National Park.
    Matej Lipar

    Climate changes and resulting environmental shifts have been crucial in shaping ecosystems. In particular, they have had a profound influence on our ancient hominin and human ancestors.

    By linking karst formation to specific climatic intervals, we can better understand how these environmental changes may have affected early human populations.

    Looking forward

    The more we know about the conditions that led to the formation of past landscapes and the flora and fauna that inhabited them, the better we can appreciate the evolutionary pressures that shaped the ecosystems we see today. This in turn offers valuable information for preparing for future changes.

    As human-driven climate change accelerates, learning about past climate variability and biosphere responses equips us with knowledge to anticipate and mitigate future impacts.

    The ability to date karst features with greater precision may seem like a small thing – but it will help us understand how today’s landscapes and ecosystems might respond to ongoing and future climate changes.

    Milo Barham has previously received research funding from the Minerals Research Institute of Western Australia.

    Andrej Šmuc, John Allan Webb, Kenneth McNamara, Martin Danisik, and Matej Lipar do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    ref. Limestone and iron reveal puzzling extreme rain in Western Australia 100,000 years ago – https://theconversation.com/limestone-and-iron-reveal-puzzling-extreme-rain-in-western-australia-100-000-years-ago-238801


  • MIL-Evening Report: More consumption, more demand for resources, more waste: why urban mining’s time has come

    Source: The Conversation (Au and NZ) – By Michael Odei Erdiaw-Kwasie, Lecturer in Sustainability| Business and Accounting Discipline, Charles Darwin University

    Lynda Disher/Shutterstock

    Pollution and waste, climate change and biodiversity loss are creating a triple planetary crisis. In response, UN Environment Programme executive director Inger Andersen has called for waste to be redefined as a valuable resource instead of a problem. That’s what urban mining does.

    We commonly think of mining as drilling or digging into the earth to extract precious resources. Urban mining recovers these materials from waste. It can come from buildings, infrastructure and obsolete products.

    An urban mine, then, is the stock of precious metals or materials in the waste cities produce. In particular, electronic waste, or e‑waste, has higher concentrations of precious metals than many mined ores. Yet the UN Global E‑waste Monitor estimates US$62 billion worth of recoverable resources was discarded as e‑waste in 2022.

    Urban mining can recover these “hidden” resources in cities around the world. It offers sustainable solutions to the problems of resource scarcity and waste management. And it happens in the very cities that are centres of overconsumption and hotspots for the greenhouse gas emissions driving climate change.

    What sort of waste can be mined?

    Materials such as concrete, pipes, bricks, roofing materials, reinforcements and e‑waste can be recovered for reuse. Urban waste can be “mined” for metals such as gold, steel, copper, zinc, aluminium, cobalt and lithium, as well as glass and plastic. Mechanical or chemical treatments are used to retrieve these metals and materials.

    Simply disposing of this waste has high financial and environmental costs. In Australia, about 10% of waste is hazardous. Landfill costs are soaring as cities run out of space to discard their waste.

    The extent of this fast-growing problem is driving the growth of urban mining around the world. Urban mining salvages materials whose supply is finite, while reducing the impacts of waste disposal.

    Many plastics can be recycled and turned into new products.
    MAD.vertise/Shutterstock

    What’s happening globally?

    In Europe, the focus is largely on construction and demolition waste. Europe produces 450 million to 500 million tonnes of this waste each year – more than a third of all the region’s waste. Through its urban mining strategy, the European Commission aims to increase the recovery of non-hazardous construction and demolition waste to at least 70% across member countries by 2030.

    In Asia, urban mining has focused on e‑waste. However, the region recovers only about 12% of its e‑waste stock. Rates of e‑waste recycling vary greatly: 20% for East Asia, 1% for South Asia, and virtually zero for South-East Asia. China, Japan and South Korea are leading the way in Asia.

    Australia is on the right track. Our recovery rate for construction and demolition materials climbed to 80% by 2022 — the highest among all types of waste streams. However, we recover only about a third of the value of materials in our e-waste.

    Africa has also recognised the growing value of urban mining resources. Regional initiatives include the Nairobi Declaration on e‑waste, the Durban Declaration on e‑Waste Management in Africa and the Abuja Platform on e‑Waste.

    Urban mining solves many problems

    The OECD forecasts that global materials demand will almost double from 89 billion tonnes in 2019 to 167 billion tonnes in 2060. The United Nations’ Global Waste Management Outlook 2024 shows the amount of waste and costs of managing it are soaring too. It’s estimated the world will have 82 million tonnes of e‑waste to deal with by 2030.

    These trends mean urban mining is becoming ever more relevant and important.

    Urban mining also helps cut greenhouse gas emissions. Unlocking resources near where they are needed reduces transport costs and emissions. Urban mining also provides resource independence and creates employment.

    In addition, increasing recovery and recycling rates reduce the pressure on finite natural resources.

    Urban mining underpins circular economy alternatives such as the “deposit and return” schemes that give people financial incentives to return e‑waste and containers for recycling in cities such as Singapore, Sydney, Darwin and San Francisco. By 2030, San Francisco aims to halve disposal to landfill or incineration and cut solid waste generation by 15%.

    What more needs to be done?

    Governments have a role to play by adopting and enforcing policies, laws and regulations that encourage recycling through urban mining instead of sending waste to landfill. European Union laws, for example, mandate increased recycling targets for municipal waste overall and for packaging waste, including 80% for ferrous metals and 60% for aluminium.

    In Australia, legislation introduced in 2019 prohibits landfills from accepting anything with a plug, battery or cord, all of which is designated as e-waste.

    Product design is an important consideration. Designers must balance a product’s efficiency with how easily it can be recycled. Products that are efficient and built from easy-to-recycle parts use less energy and generate less waste, and hence require less natural resource extraction.

    Our urban mining research documents a more sustainable approach to product design. Increasing product stewardship initiatives are expected to encourage better product design and standards that promote reuse and recycling, producer responsibility and changes in consumer behaviour.

    Good information about the available resources is essential too. The Urban Mine Platform, ProSUM and Waste and Resource Recovery Data Hub collect data on e‑waste, end-of-life vehicles, batteries and building and mining waste. These centralised databases allow easy access to data on the sources, stocks, flows and treatment of waste.

    Traditional mining is not the only method for extracting raw materials for the green transition. Waste is set to be increasingly recycled, reducing demand for virgin materials. A truly circular economy can become a reality if governments develop and apply an urban mining agenda.

    Michael Odei Erdiaw-Kwasie receives funding from the Foundation for Rural and Regional Renewal (FRRR).

    Matthew Abunyewah receives funding from the Foundation for Rural and Regional Renewal (FRRR) and the Northern Western Australia and Northern Territory Drought Resilience Adoption and Innovation Hub (Northern Hub).

    Patrick Brandful Cobbinah receives funding from Lincoln Institute of Land Policy. He is a member of Planning Institute of Australia.

    ref. More consumption, more demand for resources, more waste: why urban mining’s time has come – https://theconversation.com/more-consumption-more-demand-for-resources-more-waste-why-urban-minings-time-has-come-232484


  • MIL-Evening Report: Joker: Folie à Deux as ‘ruin porn’ – how the new sequel plays with duplication and disintegration

    Source: The Conversation (Au and NZ) – By Anna-Sophie Jürgens, Senior Lecturer in Science Communication (Pop Culture Studies), Australian National University

    Warner

    Like two-headed playing cards, Joker stories are about dual identity, doubles and duplicity.

    Throughout DC comics and films, the Joker turns others into facsimiles of himself, grinning widely. He shares his state of mind through infectious laughter and mass “clownification”, creating copies as he goes.

    Film sequel Joker: Folie à Deux, directed by Todd Phillips and released in cinemas today, participates in this rich tradition. It also challenges it by introducing a Joker haunted by his own lost futures – the glam clown, homicidal entertainer and irresistible lover he could have become.

    What can we learn from the Joker character about our cultural fascination with duplication and disintegration?

    Madness by imitation

    Doubling, split consciousness and double meanings have been ingredients in Joker stories since the character’s creation in the 1940s.

    He offers different origin stories himself in the 2008 movie blockbuster The Dark Knight (with Heath Ledger as the Joker). He is presented as many in the recent comic series Three Jokers. The Joker shuffles his own “selves like a croupier deals cards” in the 2007 Batman comic The Clown at Midnight.

    Within the DC clowniverse, the Joker turns others into Joker copies and clowns, usually through the use of biological or chemical weapons or poisons, virology, hypnotism or sheer charisma. Joker copies include Joker fans and followers in clown costumes and masks, as in the 2019 film starring Joaquin Phoenix. In comics he is described as having an influence that

    […] affects people, on an almost subconscious, primal level. For most people – regular people – he inspires fear. For the less stable people – he simply inspires.

    For more than 80 years, his laughter has spread like a virus and caused mass-clownification countless times.

    ‘The whole world smiles with you.’ The new Joker sequel plays with dual identity and shadow selves.

    Multiplying his potency

    Joker stories tend to revolve around three scenarios of imitation, doubling and multiplication: several people acting as one (that is, the Joker), one person acting as many (as in Batman: R.I.P. when Batman tries to understand the Joker by experiencing his state of mind like a second consciousness), and a number of personalities nestled within the Joker wreaking havoc. All of these scenarios are powerful reminders that clown laughter and humour need not be funny.

    The Joker character was inspired by famous films from the 1920s and ’30s, including Robert Wiene’s The Cabinet of Dr Caligari (1920), F.W. Murnau’s Nosferatu (1922), Fritz Lang’s Metropolis (1926), Roland West’s The Bat (1926) and Paul Leni’s The Man Who Laughs (1928). Many of these works feature hapless or unhappy (comic) performers, who all struggle with identity.

    The cultural mould to which the Joker belongs is linked to a more than century-old fascination with doppelgangers, male nervousness, violent and involuntary laughter, and the loss of agency and sense of self.

    The Joker has long played with ideas of duality.
    IMDB/Warner

    Haunting through absence

    The new sequel, Joker: Folie à Deux, draws on all these very Joker traditions. Arthur Fleck and his Joker (Phoenix again) struggle with their split identities.

    Set two years after the events of the previous film, the sequel finds Fleck a patient at Arkham State Hospital, where he meets the dual character Lee Quinzel/Harley Quinn (played by Lady Gaga). She wants him to lean into his Joker self.

    Although she is neither the clown nor the scientist she is portrayed as in other stories, she too wants to become a version of the Joker. Arthur himself wants to be the Joker, but for reasons both external and internal he never quite becomes the Joker we recognise from the first film.

    The sequel is ultimately a trick played on the audience. “There is no Joker,” Arthur confirms at the end, just Arthur. Folie à Deux is about a broken dream’s loveliness.

    The Joker is a collective dream that fails to come true. He appears in the form of fantasies. He is the past, but at the same time present and absent. This is how the concept of hauntology has been defined – a split between realities. The film glamorises and exploits disillusion as we watch the Joker and his future possibilities disintegrate.

    In this way, Joker: Folie à Deux is a clown version of ruin porn, inviting us to enjoy the “decay” of a character. It gives us glimpses of a post-double version of the Joker, a non-Joker, left in pieces.

    Joker: Folie à Deux is in cinemas now.

    Anna-Sophie Jürgens does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Joker: Folie à Deux as ‘ruin porn’ – how the new sequel plays with duplication and disintegration – https://theconversation.com/joker-folie-a-deux-as-ruin-porn-how-the-new-sequel-plays-with-duplication-and-disintegration-240311

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: Pharma company funding for patient advocacy groups needs to be transparent

    Source: The Conversation – Canada – By Joel Lexchin, Professor Emeritus of Health Policy and Management, York University, Canada

    As a first step in determining whose interests patient groups align with, we need more transparency about the source of their revenue. (Shutterstock)

    Patient groups should be playing a central role in Canada’s health-care system, advocating for their members by promoting the visibility of their conditions, pushing for more rapid and accurate diagnoses and lobbying for the introduction and funding of new treatments and drugs that may help relieve their members’ symptoms and extend their lives.

    However, all of this requires resources. In the past, groups could turn to the federal government for funding, but that option dried up in the late 1980s and early 1990s.

    Pharmaceutical industry funding

    In response, patient groups looked to the pharmaceutical industry to be able to continue functioning. How much money Canadian groups get from drug companies is largely unknown.

    Neither the federal government nor the major industry association, Innovative Medicines Canada (IMC), requires companies to report payments to groups, and similarly there are no rules saying patient groups must reveal who gives them money or how much. Even if groups are registered charities, that type of granular information is not collected in the reports they have to file with the Canada Revenue Agency.

    How much money Canadian patient advocacy groups get from drug companies is largely unknown.
    (Shutterstock)

    There is one source of partial information that has not been investigated until now. Since 2016, six companies have voluntarily released detailed annual statements about which groups they give money to and the value of those payments — GlaxoSmithKline, Merck, Novartis, Roche, Sanofi and Teva.

    I have analyzed the available reports from these companies. Because pharma companies have a history of trying to buy influence — a topic I’ve researched extensively — it’s important to look at what and who they are funding. All told, from 2016 to 2023, they gave more than $30 million in 671 separate payments to 263 groups. The $30 million figure is a minimum because not all of the six companies report in any individual year. There are also an additional 42 member companies in IMC that don’t file any reports. (Teva does not belong to IMC.)

    The median amount that a patient group received was $26,000, but that number hides the extremes. The Black Health Alliance received a single payment of $250 in 2023 from Novartis, whereas the World Federation of Hemophilia, based in Montréal, got over $4.5 million from Roche and Sanofi between 2020 and 2023. Fourteen groups accounted for almost one-half of all payments groups received. Although Novartis only reported in three years (2021-23), it gave the largest amount of money, over $7.5 million.

    Conflicts of interest

    Receiving money creates a conflict of interest (COI), which the U.S. Institute of Medicine (now the National Academy of Medicine) defines as “a set of circumstances that creates a risk that…judgment or actions regarding a primary interest will be unduly influenced by a secondary interest.” In this case, that would mean the patient group looking out for the interests of the drug company that gave it money as opposed to the interests of its patient members.

    However, the fact that groups received money from drug companies does not necessarily dictate the positions and actions they took. Patient groups that have received pharma funding take a wide range of positions, and when their positions align with those of their sponsors, the association does not establish cause and effect.

    The Canadian Organization for Rare Disorders, which received just shy of $450,000 between 2018 and 2023 from a combination of GlaxoSmithKline, Novartis, Roche and Sanofi, has publicly criticized the legislation that potentially creates the first steps toward a universal, first-dollar coverage pharmacare plan.

    Twenty-eight patient groups, including Save Your Skin Foundation and Myeloma Canada, lobbied the Patented Medicine Prices Review Board to try to stop the board from instituting reforms to how it regulated drug prices. Save Your Skin Foundation got just over $750,000 in drug company money and Myeloma Canada got $831,000.

    Pharma companies have a history of offering funding and other resources that have been shown to influence health-care professionals.
    (Shutterstock)

    Some groups that take drug company money do not necessarily align with the interests of their funders. The president of the Canadian Spondylitis Association (CSA) pulled his organization out of a focus-group project organized by Janssen and AbbVie because he refused to sign off on a report claiming that patients were strongly opposed to switching from the medication Humira, sold by AbbVie, to a less expensive biosimilar.

    Arthritis Consumer Experts (ACE) used to receive grants from Janssen and AbbVie until it also came out in favour of switching to biosimilars. (CSA received over $100,000 from Merck and Novartis, while ACE received $267,000 from Merck and Novartis as well as Teva.)

    How pharma funds buy influence

    Pharma companies have a history of offering funding and other resources that have been shown to influence health-care professionals, which has extended the reach of pharma companies’ interests into virtually all aspects of health care. Funding patient groups may be another strategy to further extend the reach of those interests, which do not always align with those of patients and the public.

    As a first step in trying to determine whose interests patient groups align with, we need more transparency about the source of their revenue. The European Federation of Pharmaceutical Industries and Associations (EFPIA) code requires that member companies disclose on their websites a list of patient organizations to which they provide financial support, the amount of the payment and a description of the nature of the support or services provided.

    However, a study of industry payments in Nordic countries concluded that the EFPIA code fails to ensure transparency and compliance. EFPIA allows national industry associations the freedom to determine how the code will be implemented and how much oversight is required, leading to disparate transparency practices. EFPIA has not created a disclosure template to standardize reporting. Finally, EFPIA’s code does not apply to companies that are not members.

    Industry codes are not the answer.

    Before the Ontario election in 2019, the government was finalizing regulations for Bill 160 that required all drug and device manufacturers to disclose payments to patient groups. The legislative process stopped when the government changed post-election. The federal government should pick up the mandate on this issue and pass similar legislation to make reporting mandatory on a national basis.

    Between 2021-2024, Joel Lexchin received payments for writing a brief on the role of promotion in generating prescriptions for a legal firm, for being on a panel about pharmacare and for co-writing an article for a peer-reviewed medical journal. He is a member of the Boards of Canadian Doctors for Medicare and the Canadian Health Coalition. He receives royalties from University of Toronto Press and James Lorimer & Co. Ltd. for books he has written.

    ref. Pharma company funding for patient advocacy groups needs to be transparent – https://theconversation.com/pharma-company-funding-for-patient-advocacy-groups-needs-to-be-transparent-239197

    MIL OSI – Global Reports

  • MIL-Evening Report: Is stress turning my hair grey?

    Source: The Conversation (Au and NZ) – By Theresa Larkin, Associate Professor of Medical Sciences, University of Wollongong

    Oksana Klymenko/Shutterstock

    When we start to go grey depends a lot on genetics.

    Your first grey hairs usually appear anywhere between your twenties and fifties. For men, grey hairs normally start at the temples and sideburns. Women tend to start greying on the hairline, especially at the front.

    The most rapid greying usually happens between ages 50 and 60. But does anything we do speed up the process? And is there anything we can do to slow it down?

    You’ve probably heard that plucking, dyeing and stress can make your hair go grey – and that redheads don’t. Here’s what the science says.

    What gives hair its colour?

    Each strand of hair is produced by a hair follicle, a tunnel-like opening in your skin. Follicles contain two different kinds of stem cells:

    • keratinocytes, which produce keratin, the protein that makes and regenerates hair strands
    • melanocytes, which produce melanin, the pigment that colours your hair and skin.

    There are two main types of melanin that determine hair colour. Eumelanin is a black-brown pigment and pheomelanin is a red-yellow pigment.

    The amount of the different pigments determines hair colour. Black and brown hair has mostly eumelanin, red hair has the most pheomelanin, and blonde hair has just a small amount of both.

    So what makes our hair turn grey?

    As we age, it’s normal for cells to become less active. In the hair follicle, this means stem cells produce less melanin – turning our hair grey – and less keratin, causing hair thinning and loss.

    As less melanin is produced, there is less pigment to give the hair its colour. Grey hair has very little melanin, while white hair has none left.

    Unpigmented hair looks grey, white or silver because light reflects off the keratin, which is pale yellow.

    Grey hair is thicker, coarser and stiffer than hair with pigment. This is because the shape of the hair follicle becomes irregular as the stem cells change with age.

    Interestingly, grey hair also grows faster than pigmented hair, but it uses more energy in the process.

    Can stress turn our hair grey?

    Yes, stress can cause your hair to turn grey. This happens when oxidative stress damages hair follicles and stem cells and stops them producing melanin.

    Oxidative stress is an imbalance of too many damaging free radical chemicals and not enough protective antioxidant chemicals in the body. It can be caused by psychological or emotional stress as well as autoimmune diseases.

    Environmental factors such as exposure to UV, pollution, as well as smoking and some drugs, can also play a role.

    Melanocytes are more susceptible to damage than keratinocytes because of the complex steps in melanin production. This explains why ageing and stress usually cause hair greying before hair loss.

    Scientists have been able to link less pigmented sections of a hair strand to stressful events in a person’s life. In younger people, whose stem cells still produced melanin, colour returned to the hair after the stressful event passed.

    4 popular ideas about grey hair – and what science says

    1. Does plucking a grey hair make more grow back in its place?

    No. When you pluck a hair, you might notice a small bulb at the end that was attached to your scalp. This is the root. It grows from the hair follicle.

    Plucking a hair pulls the root out of the follicle. But the follicle itself is the opening in your skin and can’t be plucked out. Each hair follicle can only grow a single hair.

    It’s possible frequent plucking could make your hair grey earlier, if the cells that produce melanin are damaged or exhausted from too much regrowth.

    2. Can my hair turn grey overnight?

    Legend says Marie Antoinette’s hair went completely white the night before the French queen faced the guillotine – but this is a myth.

    It is not possible for hair to turn grey overnight, as in the legend about Marie Antoinette.
    Yann Caradec/Wikimedia, CC BY-NC-SA

    Melanin in hair strands is chemically stable, meaning it can’t transform instantly.

    Acute psychological stress does rapidly deplete melanocyte stem cells in mice. But the effect doesn’t show up immediately. Instead, grey hair becomes visible as the strand grows – at a rate of about 1 cm per month.

    Not all hair is in the growing phase at any one time, meaning it can’t all go grey at the same time.

    3. Will dyeing make my hair go grey faster?

    This depends on the dye.

    Temporary and semi-permanent dyes should not cause early greying because they just coat the hair strand without changing its structure. But permanent products cause a chemical reaction with the hair, using an oxidising agent such as hydrogen peroxide.

    Accumulation of hydrogen peroxide and other hair dye chemicals in the hair follicle can damage melanocytes and keratinocytes, which can cause greying and hair loss.

    4. Is it true redheads don’t go grey?

    People with red hair also lose melanin as they age, but differently to those with black or brown hair.

    This is because the red-yellow and black-brown pigments are chemically different.

    Producing the brown-black pigment eumelanin is more complex and takes more energy, making it more susceptible to damage.

    Producing the red-yellow pigment (pheomelanin) is simpler and causes less oxidative stress. This means it is easier for stem cells to continue to produce pheomelanin, even as they reduce their activity with ageing.

    With ageing, red hair tends to fade into strawberry blonde and silvery-white. Grey colour is due to reduced eumelanin activity, so it is more common in people with black and brown hair.

    Your genetics determine when you’ll start going grey. But you may be able to avoid premature greying by staying healthy, reducing stress and avoiding smoking, too much alcohol and UV exposure.

    Eating a healthy diet may also help because vitamin B12, copper, iron, calcium and zinc all influence melanin production and hair pigmentation.

    Theresa Larkin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Is stress turning my hair grey? – https://theconversation.com/is-stress-turning-my-hair-grey-239100

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: Lessons from Cyclone Gabrielle: 5 key health priorities for future disaster response

    Source: The Conversation (Au and NZ) – By Holly Thorpe, Professor in Sociology of Sport and Gender, University of Waikato

    Getty Images

    “The climate crisis is a health crisis.” So says World Health Organization Director-General Tedros Ghebreyesus.

    The World Economic Forum agrees. Its report this year highlighted how climate change is taking a toll on global health due to increasingly frequent extreme weather events.

    These issues are on the official agenda here too, especially since severe tropical cyclone Gabrielle caused extensive damage in the South-west Pacific and northern New Zealand in early 2023.

    Between February 13 and 14 it slammed into Te Tairāwhiti/East Coast and Te Matau a Māui/Hawke’s Bay, with disastrous results for the land and its inhabitants. Communities were displaced, homes destroyed, power and telecommunications cut, water systems compromised, and many roads and bridges badly damaged.

    Shortly after Gabrielle hit, Manatū Hauora/Ministry of Health commissioned us to investigate the impacts of adverse weather events on health systems and community health and wellbeing.

    Our community research teams interviewed 143 residents in the two affected regions. They included first responders, health workers, council staff and members of the public. Their stories were emotional, powerful and insightful.

    Our recently published report amplifies these community voices and local knowledge, and offers recommendations about planning for future, inevitable events. Here we offer five key messages.

    1. Prioritise vulnerable people

    Many older people and those with disabilities or existing health conditions were deprioritised or simply forgotten during evacuations and in the days and weeks after the cyclone. As one community responder in Tairāwhiti recalled:

    Some of them couldn’t move out because they were so old and frail. The water was so powerful, they couldn’t move anywhere. Some just stayed in their room until somebody turned up. For instance, there was a lady [who] was stuck in her wheelchair, and by the time people found her, the water was at her neck.

    Our report identified the need for health and social services to work more closely to ensure at-risk, vulnerable older people and those with disabilities or complex needs are prioritised during evacuations, so their medical and physical needs are met during and after an extreme weather event.

    2. Invest in mental health support and trauma recovery

    Those in the most affected communities had high levels of stress, grief and trauma during and after emergencies and evacuations.

    Staff and volunteers in front-line roles during the state of emergency experienced similar mental health effects. Many felt mental health support was not there when they needed it most.

    Almost everyone we spoke to had some negative mental health impacts. These included sleep disruption, rain anxiety and stress from road closures, insurance claims and land instability.

    Māori participants also told of their grief over environmental damage and destruction, highlighting the links between whenua (land) and hauora (health). They described drawing on cultural practices to support whānau recovery. For example, a leader of local volunteer efforts spoke about the personal impact of the cyclone:

    I was not good […] it was seeing the impact on how it was for your own community whānau. I think it hit me quite a bit later on. I fell into depression […] It just built up over time. I’m still in healing therapy for the last probably six to seven months since Gabrielle, just trying to get my wairua [spirit] and my tinana [body] and everything back in place.

    Overall, the research shows a need for greater awareness and investment in weather-related trauma recovery and mental health support.

    3. Ensure medical supplies can reach remote areas

    Rural and isolated communities had heightened health challenges, particularly due to road and communication failures.

    Transporting medical staff into these communities often required creative solutions (driving, using helicopters or hiking through bush and across farmland when roads were damaged, for example).

    Access to medicines was a major concern. It took co-ordinated effort to get pharmaceuticals to such communities. Helicopters were crucial in getting supplies and patients in and out of remote areas. Not everyone who needed attention received it, however.

    The most effective responses involved organisations (such as the NZ Police and Civil Defence) working together with communities. As one police officer told us:

    Our whānau up the coast needed medicine, prescriptions. Getting access from the helicopter to the home was a challenge. So, the police leant in and helped out. We used [an all-terrain vehicle] to get to places and spaces to get medicine in.

    People need to be prepared for power and telecommunications failures.
    Getty Images

    4. Resource and co-ordinate local support networks

    Fiscally challenged health systems were stretched during the emergency and struggled with power and telecommunications outages. But we heard of many health workers going “above and beyond” to care for patients and communities.

    Many continued working even when their own families, homes and communities were directly under threat. Anticipating this and supporting these workers will be important as adverse weather becomes more frequent with climate change.

    We also found marae, schools, local social services and non-profit organisations played key roles after the cyclone, but were often outside the direct ambit of the health system.

    Often the people working in these organisations have strong community relationships and knowledge that is essential to supporting emergency and recovery processes. These connections should be mapped and integrated for future events.

    5. Shift resources and build common will

    Local communities are full of knowledge. Many have learnt from recent events to better prepare their families, workplaces and organisations.

    Whānau told us about the importance of having cash in case of power outages and telecommunications failure. Others identified battery-powered radio as a critical source of information when systems were down. Pharmacists and doctors told of the importance of hard-copy evidence of prescriptions, to be able to dispense when electronic systems are out.

    Checking in on neighbours, sharing resources and making time for a cup of tea were all important for people in the recovery and rebuilding phases. A key lesson is to harness the power of community connections, trust and relationships in climate change resilience and recovery.

    Although knowledge, experience and wisdom lie in the hands of communities, our research highlights how financial resources mostly sit with central government. The challenge is to shift resources and build common will for climate action, before the inevitable next event.

    The report is receiving attention in parliament. We hope local experience can be central to planning around the health impacts of climate change and decision-making at all levels.


    We acknowledge the important contributions of our wider research team and community partners, particularly Manu Caddie (Te Weu Charitable Trust), Josie McClutchie (project lead), Dayna Chaffey, Haley Maxwell and Hiria Philip-Barbara (community researchers) in Tairāwhiti, and Emma Horgan and John Bell (Sustainable HB Centre for Climate & Resilience) in Hawke’s Bay.


    Holly Thorpe received support from the Manatū Hauora/Ministry of Health funding secured to conduct this research.

    Fiona Langridge received support from the Ministry of Health funding secured to conduct this research.

    George Laking received funding from The Ministry of Health to conduct the research. He is an Executive Board member of OraTaiao, the New Zealand Climate and Health Council.

    Judith McCool receives funding from the Ministry of Health (Polynesia Health Corridors) and the Health Research Council.

    ref. Lessons from Cyclone Gabrielle: 5 key health priorities for future disaster response – https://theconversation.com/lessons-from-cyclone-gabrielle-5-key-health-priorities-for-future-disaster-response-239392

    MIL OSI Analysis – EveningReport.nz

  • MIL-Evening Report: When even fringe festival venues exclude people with disability, cities need to act on access

    Source: The Conversation (Au and NZ) – By Shane Clifton, Associate Professor of Practice, School of Health Sciences and the Centre for Disability Research and Policy, University of Sydney

    Sanit Fuangnakhon/Shutterstock

    It’s about time city councils did more to make our cities accessible. I recently tried to buy tickets to two Sydney Fringe Festival events, only to be told by the box office that the venues were not wheelchair-accessible.

    Sydney remains a place where people with disability feel like they don’t belong. The same is true of other Australian cities. But local councils don’t bear all the blame.

    Event organisers are responsible for selecting venues. In the case of the Fringe Festival, they chose locations inaccessible to wheelchair users and others with mobility challenges. It’s a bitter irony that a fringe festival, which ostensibly empowers artists and creatives on the margins, would exclude people with disability.

    If event organisers (and every one of us) decided never to hire inaccessible venues, then the market might solve the issue. But those of us with disability are realistic enough to know most people don’t care – or don’t give us a thought. The market hasn’t solved the problem, so it’s up to governments.

    The problems go beyond arts venues

    Inaccessible venues are only the tip of the iceberg. Countless restaurants, shops and offices are inaccessible, with steps on entry, inaccessible bathrooms and narrow and cluttered aisles.

    “Spend the day in my wheelchair” programs are sometimes criticised for trivialising the challenge of disability. However, they do unmask how frustrating and alienating our cities and towns can be.

    Google Maps now indicates whether premises are accessible. Those that are bear the universal symbol of disability access – the stylised blue wheelchair. Even then, a person with a disability is just as likely as not to turn up and discover a lift has broken down, a doorway has been blocked off, a bathroom has been used for storage, or a venue is only partially accessible (it’s always the cool spaces that are out of reach).

    The Commonwealth and states brought in disability discrimination laws in the 1990s. These have made some difference, but their many exemptions let businesses off the hook. (See the Disability Royal Commission’s recommendations to amend the Disability Discrimination Act 1992.)

    More than 30 years down the track, our cities and towns remain bastions of exclusion.

    Newtown Hotel is marked as accessible on Google Maps, but the upstairs room used for a Sydney Fringe Festival event was not.
    Slow Walker/Shutterstock



    Read more:
    What does a building need to call itself ‘accessible’ – and is that enough?


    Better access benefits everyone

    Landowners and businesses typically complain that providing access for the few people affected is too costly. In reality, making our public spaces accessible often requires little more than determined, creative design. The costs are a mere fraction of what we spend on other things we judge more important.

    We also underestimate the value added by accessible design.

    The Kerb-Cut Effect, for example, describes how designing for people with disability often benefits everyone. The term refers to the impact of activist action in California in the 1970s. Disability advocates in the city of Berkeley poured concrete onto road kerbs to create ramps giving wheelchair users access to footpaths.

    These ramps also proved valuable to parents pushing children in strollers, older people and cyclists. Refined into kerb cuts, they spread rapidly around the world.

    There are many other examples. Television captioning, developed for people who are deaf and hard of hearing, is now widely used by non-disabled people. Audiobooks, developed for people who are blind, are now a common way that many other people enjoy books.

    Accessible venues will not just benefit wheelchair users. Older people, those with impaired mobility and people who push prams and tow suitcases all benefit. Indeed, if we make venues accessible to those on the margins, no one is excluded.

    The UN Convention on the Rights of Persons with Disabilities highlights the importance of universal design. The convention insists on

    the design of products, environments, programs and services to be usable by all people, to the greatest extent possible, without the need for adaptation or specialised design.

    Why use steps that exclude some people when everyone can use a ramp or a lift?

    Kerb cuts are now common since it became obvious how many people benefited from designing ramps into road-crossing points.
    John Robert McPherson/Wikimedia Commons, CC BY-SA

    Why councils must lead the way

    Accessibility in cities is about more than just wheelchairs; it requires a comprehensive approach to urban planning to meet the varied needs of all citizens. This includes providing sensory aids like audio signals, braille signage and visual measures for people who are blind, deaf or hard of hearing. It’s also crucial that information on public services and events is easily available to everyone in formats they can access and understand.

    My focus has been on access to public spaces, but we also need to turn our attention to private homes. Wheelchair users and people with other mobility impairments can’t access most private houses in Australia. There is a drastic lack of accessible housing for people with disability and the cost of retrofitting access is exorbitant.

    New South Wales is yet to follow the lead of other states and territories by signing up to the Silver Liveable Housing Design Standards. These standards are part of the revised National Construction Code. They require new housing developments to offer basic accessibility for all people.

    We can and must do better. Every level of government can contribute to change.

    However, new builds and renovations are often decided upon at the regional level. This means local councils should bear much of the responsibility.

    A determined effort by our mayors and councillors to insist premises are accessible will be better for everyone. From a selfish perspective, it might mean I could go out to dinner or a festival without worrying if I can get in the door.

    Shane Clifton is affiliated with the Centre for Disability Research and Policy at the University of Sydney.

    ref. When even fringe festival venues exclude people with disability, cities need to act on access – https://theconversation.com/when-even-fringe-festival-venues-exclude-people-with-disability-cities-need-to-act-on-access-239937

    MIL OSI Analysis – EveningReport.nz

  • MIL-OSI Global: Lebanon: the killing of Hassan Nasrallah leaves Hezbollah leaderless and vulnerable

    Source: The Conversation – UK – By Ori Wertman, Research fellow, Faculty of Life Sciences and Education, University of South Wales

    The assassination of Hezbollah chief, Hassan Nasrallah, in an Israeli airstrike on September 28 is a decisive blow – not only to Hezbollah, but also to Iran, which has lost its greatest ally in the Middle East.

    In recent days, the conflict between Israel and Hezbollah has risen to its most intense level since the end of the second Lebanon war in the summer of 2006. The day after Hamas’ brutal October 7 terror attack, in which 1,200 Israelis were massacred – many of them civilians murdered in their homes in towns near the Gaza border or at the nearby Nova music festival – Hezbollah opened another front against Israel.

    Hezbollah, which has been designated by the US and UK governments as a terror organisation, was quick to express support and solidarity with Hamas and immediately began launching rockets at civilian and military targets in northern Israel.

    Fearing that Hezbollah might carry out a similar incursion in Galilee, resulting in a massacre of the Jewish civilian population, the Israeli government evacuated roughly 100,000 citizens living near the Lebanese border. These people have now been displaced from their homes for a year.

    Until recently, the fighting between the parties was characterised by relatively low intensity. Hezbollah has launched thousands of rockets and drones at Israeli civilian and military targets, mainly in the north of the country, killing dozens of Israelis since October 2023. The IDF has responded with airstrikes and artillery fire against Hezbollah targets in Lebanon, including rocket depots and other military infrastructure. But to an extent, the exchanges were seen as being below the level that might escalate into all-out war between Israel and Hezbollah.

    In July, a Hezbollah rocket attack killed 12 children in a football field in the Druze village of Majdal Shams in the Golan Heights. In response, three days later, Israel assassinated Hezbollah’s most senior commander, the head of its strategic unit, Fuad Shukr, in an airstrike in Beirut.

    The violence has steadily escalated since. On August 25, as Hezbollah was preparing a major rocket attack on the north and centre of Israel, the IDF launched a preemptive strike against Hezbollah missile launchers that were poised to strike at targets within Israel. In mid-September, the Israeli security cabinet announced it had added the return of displaced residents from the country’s north to its war goals.

    Days later, in a highly complex operation, thousands of Hezbollah pagers exploded, killing dozens and wounding thousands of Hezbollah militants. The following day Hezbollah’s network of walkie-talkies was targeted in the same way. Israel has not claimed responsibility for either of these incidents, but what cannot be denied is that they caused considerable damage to Hezbollah’s command and control.

    Two days after that, on September 20, Shukr’s successor, Ibrahim Akil, was killed in an Israeli airstrike in the Dahieh suburb of Beirut, along with dozens of senior commanders of Hezbollah’s elite Radwan force.

    Operation Northern Arrows

    Yet all these moves were only the prelude to Operation Northern Arrows, which began on September 23. The Israeli air force attacked 1,600 Hezbollah targets, including thousands of rocket and missile launchers that had been stored among the civilian population throughout Lebanon.

    Hezbollah has responded by firing rockets at Israel, most of which were intercepted by Israel’s air defence systems. It is estimated that Hezbollah had an arsenal of 150,000 rockets, including medium and long-range missiles. Many of these have now been eliminated by Israeli airstrikes. Hezbollah still has precision-guided munitions and drones, but recent Israeli strikes have severely disrupted its operational equilibrium. The assassination of much of Hezbollah’s senior leadership – and now Nasrallah himself – has all but destroyed the group’s military chain of command.

    So far there has been no sign from Tehran that Iran intends to intervene militarily to help Hezbollah. This must call into question the value of acting as one of Iran’s most important proxies in the region. In this context, many in Beirut, Damascus, Sana’a and Gaza are surely now asking themselves what the advantage is of being Iran’s emissaries, if Tehran leaves them alone to face Israel.

    Ceasefire unlikely?

    As a result, the main hope for Hezbollah – and for Lebanon itself, into whose economic and political structures Hezbollah has become so firmly embedded – is that the international community will impose a ceasefire on both sides in an effort to avoid this becoming a wider regional conflict. The US and France have pushed for a 21-day ceasefire. But it seems that, as with its fight against Hamas in Gaza, Israel is determined to continue the military operation against Hezbollah.

    Now the world is waiting to see whether Israel will send troops into Lebanon. Already thousands of Lebanese citizens in the south of the country have fled north. But despite a statement from the IDF chief of staff, Maj Gen Herzi Halevi, that the IDF is preparing to launch a ground operation in Lebanon, it is not at all certain that Israel wants to return to Lebanese soil.

    In May 2000 the IDF pulled back from southern Lebanon to the international border after 18 years of occupation, and in 2006 it did the same in compliance with UN security council resolution 1701.

    There’s also a good chance that, given the success of its campaign of airstrikes in neutralising the military threat from Hezbollah, an actual ground invasion may be postponed for now.

    The US and other countries, including the UK, have urged Israel to put a hold on any invasion plans and agree a ceasefire. The situation presents the Biden administration, which is keenly aware of the need to keep both Jewish and Arab voters onside, with a tough choice. But it is hard to believe that Biden, especially during an election campaign and in light of the special relationship between the countries, will put pressure on Jerusalem to stop its fight against Iranian proxy terrorism.

    Ori Wertman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. Lebanon: the killing of Hassan Nasrallah leaves Hezbollah leaderless and vulnerable – https://theconversation.com/lebanon-the-killing-of-hassan-nasrallah-leaves-hezbollah-leaderless-and-vulnerable-239992

    MIL OSI – Global Reports

  • MIL-OSI Global: How Lebanon’s national identity is exploited to justify violence against it

    Source: The Conversation – Canada – By Rayyan Dabbous, PhD student, Centre for Comparative Literature, University of Toronto

    The Lebanese armed group Hezbollah confirmed on Sept. 28 that its leader, Hassan Nasrallah, had been killed in an Israeli airstrike in Beirut a day earlier. Nasrallah is the highest-ranking Hezbollah leader to have been killed since Israel began targeting the group’s leadership.

    Several Hezbollah commanders, and hundreds of Lebanese civilians, have been killed in Israeli attacks in recent weeks. On Sept. 20, Israel launched its heaviest aerial bombing on Lebanon since 2006, killing hundreds of civilians. The attack followed the Sept. 17 coordinated explosions of hand-held wireless pagers allegedly carried by members of Hezbollah (but still also carried by many medical professionals). That assault maimed thousands of Lebanese people.

    Israel says the violent strikes were necessary to preemptively thwart Hezbollah from launching rockets into northern Israel. Israel’s Prime Minister Benjamin Netanyahu addressed the Lebanese population: “Israel’s war is not with you, it’s with Hezbollah,” which has long “been using you as human shields.”

    The Telegraph in the United Kingdom proclaimed Israel’s war against Hezbollah as a brave move on behalf of the “West” to “uphold civilization.” Other news outlets, both western and Israeli, also framed the conflict as a civilizational one, and invoked religion.

    Wars have always required these types of false dichotomies: Christian and Muslim, civilization and barbarism, West and East.

    Generations of Orientalists from the “West” constructed the “East” as a place with distinct cultural identities and values, and one over which the West must triumph.

    The way East and West have historically been framed in Lebanon can help us understand how the conflict there is being discussed in the Global North. To do this, I briefly outline three time periods to shed some light on how this framing can be used to justify violence against the nation.

    I. Premodern times: Caught between two empires

    Lebanon has frequently been a battleground between West and East. For aristocracies and clergies in France and Italy, Lebanon first became part of the East under Byzantium (the eastern half of the Roman empire). Later, Lebanon became part of the Islamic and Ottoman empires. It was not religion that defined these West/East splits but aspirations for wealth, resources, power and hegemony.

    Following the collapse of the Roman Empire, which had encompassed modern-day Lebanon, economic and political power remained in Christian hands but was transferred from Rome to Constantinople (modern-day Istanbul). After eight major waves of Crusades, notorious for their pillages and “collateral damage” even in Christian cities, Western observers came to regard the East as a “treasure” that had been regained.

    In his seminal book Europe and Islam, first published in French in 1978, pre-eminent Tunisian historian Hichem Djaït showed how Christianity in Europe was, from its inception, a political project aimed to both unite against and catch up to Islamic cultural, scientific and economic advancement.

    The East, Djaït emphasized, was regarded as a deformed West, a “parvenu” and “a primitive newcomer” whose civilization was an aberration in Medieval Christian eyes. They regarded Islam’s prophet Muhammad as an internal traitor rather than an external threat. For example, in Dante’s Inferno Muhammad is punished for contributing to the West/East schism.

    Western interest in the East was also, for Djaït, rooted in an envy of how diverse groups had co-existed for centuries in the East but not the West.

    II. Caught within colonial expansion

    Following the defeat of the Ottoman Empire in the First World War, Lebanon came under French rule. By this point, the Ottomans had been regarded as “the Sick Man of Europe” since at least the mid-19th century. Global powers exploited this characterization and were galvanized to send missionaries, build missionary schools and revamp ports. The French also intervened in the work of sectarian groups. Especially in the 1920s, therefore, the French led a rapid modernization of Lebanon, characterized as a trade-off between West and East.

    The Syrian playwright Saadallah Wannous dramatized this trade-off in The Drunken Days in a dialogue between an old Lebanese man in his Eastern headwear, the tarbush, and a young Lebanese woman urging him to wear a Western hat:

    Him: The tarbush is a symbol of religion.

    Her: The hat is a symbol of urbanization.

    Him: The tarbush indicates devotion.

    Her: The hat indicates civilization.

    Lebanese intellectuals at the time were aware of this dangerous equation of West with civilization. Palestinian-Lebanese writer May Ziadeh actively worked in the 1920s and 1930s to dispel the false dichotomy between West and East. She encouraged her students to “learn Western languages without forgetting their own” and she believed that “not a single nation in the world has been able to create itself without the input of others.”

    Ziadeh belonged to a time referred to as the Nahda, or Arab Renaissance, when Arab writers wanted to revive the human flourishing once experienced in the medieval Islamic world. These intellectuals favoured a balanced approach between West and East and recognized the modernity the West ushered in as a continuation of Eastern achievements.

    III. 1975-2005: Caught between civil war and 9/11

    Whereas questioning the West/East divide united a previous generation of Lebanese Christians and Muslims, the generations that went through the Lebanese civil war (1975–1990) affirmed that divide.

    Western media capitalized on the newly divided allegiances of Lebanese Christians and framed them as torn in a West/East clash.

    Some Lebanese political leaders also promoted this narrative and appealed to the West for support. Meanwhile, the emergence of Hezbollah after Israel’s 1982 invasion of Lebanon became synonymous with a resistance against the West.

    But this narrative obscures the realities of how and why these divides were created. These divides were created by Lebanese groups, including Hezbollah, as well as by the West. They boosted, hindered and created each other. For example, in 2018, western media ignored claims of election fraud in Lebanon and instead sensationalized Hezbollah’s victory.

    In a 1985 piece for the London Review of Books, Edward Said, author of Orientalism, cautioned against seeing Beirut as the Paris of the Middle East and Lebanon as its Switzerland, comparisons popular since the 1960s. Such comparisons have been recently recirculated and mourned by both Israeli and Lebanese media.

    For Said, this representation of Lebanon threatened solidarity movements with Arabs and Palestinians by characterizing the country as something fundamentally different from the rest of the Arab world.

    But two years after the end of the Lebanese Civil War, American political scientist Samuel P. Huntington promoted the simplistic logic Said warned against and declared a clash of civilizations. The aftermath of the Sept. 11 attacks saw a resurgence of Huntington’s theory. It revived in the West the Medieval Christian view of the East, and a desire to act as crusaders who export human rights and defend the world against terrorists.

    We need to dispose, once and for all, of the framing of West and East as a clash of civilizations. Militaries and militias should not have to race to eliminate either side. They should instead realize that their fates are as intertwined as their pasts, and that only dialogue can solve conflict.

    Rayyan Dabbous does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    ref. How Lebanon’s national identity is exploited to justify violence against it – https://theconversation.com/how-lebanons-national-identity-is-exploited-to-justify-violence-against-it-239697

    MIL OSI – Global Reports